[role="xpack"]
[testenv="platinum"]
[[ml-evaluate-dfanalytics-resources]]
=== {dfanalytics-cap} evaluation resources

Evaluation configuration objects relate to the <<evaluate-dfanalytics>>.

[discrete]
[[ml-evaluate-dfanalytics-properties]]
==== {api-definitions-title}

`evaluation`::
(object) Defines the type of evaluation you want to perform. The value of this
object can be different depending on the type of evaluation you want to
perform.
+
--
Available evaluation types:

* `binary_soft_classification`
* `regression`
--

`query`::
(object) A query clause that retrieves a subset of data from the source index.
See <<query-dsl>>. The evaluation only applies to those documents of the index
that match the query.
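
For example, the `evaluation` and `query` objects fit together in a request
body as follows. This is a minimal sketch: the index name `my-analytics-dest`,
the `term` query, and the field names are hypothetical placeholders.

[source,console]
--------------------------------------------------
POST _ml/data_frame/_evaluate
{
  "index": "my-analytics-dest",                     <1>
  "query": { "term": { "dataset": "training" } },   <2>
  "evaluation": {
    "binary_soft_classification": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score"
    }
  }
}
--------------------------------------------------
<1> The index that contains the data to evaluate.
<2> The evaluation only applies to documents that match this query.
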
[[binary-sc-resources]]
==== Binary soft classification configuration objects

Binary soft classification evaluates the results of an analysis which outputs
the probability that each {dataframe} row belongs to a certain class. For
example, in the context of outlier detection, the analysis outputs the
probability that each row is an outlier.

[discrete]
[[binary-sc-resources-properties]]
===== {api-definitions-title}

`actual_field`::
(string) The field of the `index` which contains the `ground truth`.
The data type of this field can be boolean or integer. If the data type is
integer, the value has to be either `0` (false) or `1` (true).

`predicted_probability_field`::
(string) The field of the `index` that contains the probability that the item
belongs to the class in question. It's the field that contains the results of
the analysis.

`metrics`::
(object) Specifies the metrics that are used for the evaluation.
Available metrics:

`auc_roc`::
(object) The AUC ROC (area under the curve of the receiver operating
characteristic) score and optionally the curve.
Default value is `{"includes_curve": false}`.

`precision`::
(object) Sets the thresholds of the {olscore} at which the metric is
calculated.
Default value is `{"at": [0.25, 0.50, 0.75]}`.

`recall`::
(object) Sets the thresholds of the {olscore} at which the metric is
calculated.
Default value is `{"at": [0.25, 0.50, 0.75]}`.

`confusion_matrix`::
(object) Sets the thresholds of the {olscore} at which the metrics (`tp` -
true positive, `fp` - false positive, `tn` - true negative, `fn` - false
negative) are calculated.
Default value is `{"at": [0.25, 0.50, 0.75]}`.
[[regression-evaluation-resources]]
==== {regression-cap} evaluation objects

{regression-cap} evaluation evaluates the results of a {regression} analysis
which outputs predicted values.

[discrete]
[[regression-evaluation-resources-properties]]
===== {api-definitions-title}

`actual_field`::
(string) The field of the `index` which contains the `ground truth`. The data
type of this field must be numerical.

`predicted_field`::
(string) The field in the `index` that contains the predicted value, in other
words, the results of the {regression} analysis.

`metrics`::
(object) Specifies the metrics that are used for the evaluation. Available
metrics are `r_squared` and `mean_squared_error`.
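
For example, a {regression} evaluation request might look like the following
sketch. The index and field names are hypothetical; both available metrics
are requested with empty option objects.

[source,console]
--------------------------------------------------
POST _ml/data_frame/_evaluate
{
  "index": "house-price-predictions",
  "evaluation": {
    "regression": {
      "actual_field": "price",
      "predicted_field": "ml.price_prediction",
      "metrics": {
        "r_squared": {},
        "mean_squared_error": {}
      }
    }
  }
}
--------------------------------------------------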