:api: evaluate-data-frame
:request: EvaluateDataFrameRequest
:response: EvaluateDataFrameResponse
--
[role="xpack"]
[id="{upid}-{api}"]
=== Evaluate {dfanalytics} API
Evaluates the results of the {ml} algorithm that ran on a {dataframe}.
The API accepts an +{request}+ object and returns an +{response}+.
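
The request is executed through the {ml} client, either synchronously or asynchronously, as described in the execution section below. As a minimal synchronous sketch, assuming a +RestHighLevelClient+ named +client+ and a fully built +request+ (both hypothetical names):

["source","java"]
--------------------------------------------------
// Minimal sketch, not taken from the tagged test code:
// run the evaluation synchronously and capture the response.
EvaluateDataFrameResponse response =
    client.machineLearning().evaluateDataFrame(request, RequestOptions.DEFAULT);
--------------------------------------------------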
[id="{upid}-{api}-request"]
==== Evaluate {dfanalytics} request
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
<1> Constructing a new evaluation request
<2> Reference to an existing index
<3> The query with which to select data from indices
<4> Evaluation to be performed
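
As a rough illustration of the pieces named above, the following sketch builds a request against a hypothetical index. The index name, query, field names and metric are illustrative assumptions, not the contents of the tagged test snippet:

["source","java"]
--------------------------------------------------
// Hypothetical sketch of assembling an evaluation request.
EvaluateDataFrameRequest request = new EvaluateDataFrameRequest(
    "my-index",                                                       // existing index holding the analytics results
    new QueryConfig(QueryBuilders.termQuery("dataset", "training")),  // query selecting the documents to evaluate
    new BinarySoftClassification(                                     // evaluation to be performed
        "actual_label",
        "predicted_probability",
        PrecisionMetric.at(0.5)));
--------------------------------------------------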
==== Evaluation
Evaluation to be performed.
Currently, the supported evaluations are +BinarySoftClassification+, +Classification+ and +Regression+.
===== Binary soft classification
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-evaluation-softclassification]
--------------------------------------------------
<1> Constructing a new evaluation
<2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) label for an example. Must be either true or false.
<3> Name of the field in the index. Its value denotes the probability (as per some ML algorithm) of the example being classified as positive.
<4> The remaining parameters are the metrics to be calculated based on the two fields described above
<5> https://en.wikipedia.org/wiki/Precision_and_recall#Precision[Precision] calculated at thresholds: 0.4, 0.5 and 0.6
<6> https://en.wikipedia.org/wiki/Precision_and_recall#Recall[Recall] calculated at thresholds: 0.5 and 0.7
<7> https://en.wikipedia.org/wiki/Confusion_matrix[Confusion matrix] calculated at threshold 0.5
<8> https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve[AUC ROC] calculated and the curve points returned
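
A hedged sketch of such an evaluation is shown below. The field names are hypothetical and the metric factory methods are assumptions derived from the callouts above:

["source","java"]
--------------------------------------------------
// Hypothetical sketch of a binary soft classification evaluation.
Evaluation evaluation = new BinarySoftClassification(
    "label",                            // actual (ground truth) field, holding true/false
    "probability",                      // field holding the predicted probability
    PrecisionMetric.at(0.4, 0.5, 0.6),  // precision at thresholds 0.4, 0.5 and 0.6
    RecallMetric.at(0.5, 0.7),          // recall at thresholds 0.5 and 0.7
    ConfusionMatrixMetric.at(0.5),      // confusion matrix at threshold 0.5
    AucRocMetric.withCurve());          // AUC ROC, including the curve points
--------------------------------------------------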
===== Classification
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-evaluation-classification]
--------------------------------------------------
<1> Constructing a new evaluation
<2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) class the example belongs to.
<3> Name of the field in the index. Its value denotes the predicted (as per some ML algorithm) class of the example.
<4> The remaining parameters are the metrics to be calculated based on the two fields described above
<5> Accuracy
<6> Multiclass confusion matrix of size 3
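
A hedged sketch of a classification evaluation, with hypothetical field names and constructor arguments assumed from the callouts above:

["source","java"]
--------------------------------------------------
// Hypothetical sketch of a classification evaluation.
Evaluation evaluation = new Classification(
    "actual_class",                           // actual (ground truth) class field
    "predicted_class",                        // predicted class field
    new AccuracyMetric(),                     // accuracy
    new MulticlassConfusionMatrixMetric(3));  // multiclass confusion matrix of size 3
--------------------------------------------------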
===== Regression
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-evaluation-regression]
--------------------------------------------------
<1> Constructing a new evaluation
<2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) value for an example.
<3> Name of the field in the index. Its value denotes the predicted (as per some ML algorithm) value for the example.
<4> The remaining parameters are the metrics to be calculated based on the two fields described above
<5> https://en.wikipedia.org/wiki/Mean_squared_error[Mean squared error]
<6> https://en.wikipedia.org/wiki/Coefficient_of_determination[R squared]
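
A hedged sketch of a regression evaluation, again with hypothetical field names and constructor arguments assumed from the callouts above:

["source","java"]
--------------------------------------------------
// Hypothetical sketch of a regression evaluation.
Evaluation evaluation = new Regression(
    "actual_value",                  // actual (ground truth) value field
    "predicted_value",               // predicted value field
    new MeanSquaredErrorMetric(),    // mean squared error
    new RSquaredMetric());           // R squared
--------------------------------------------------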
include::../execution.asciidoc[]
[id="{upid}-{api}-response"]
==== Response
The returned +{response}+ contains the requested evaluation metrics.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
<1> Fetching all the calculated metrics results
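
As a minimal sketch of working with the response, the getter and result type below are assumptions based on the callout; the tagged snippet above is authoritative:

["source","java"]
--------------------------------------------------
// Hypothetical sketch: fetch all calculated metric results from the response.
List<EvaluationMetric.Result> metrics = response.getMetrics();
--------------------------------------------------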
==== Results
===== Binary soft classification
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-results-softclassification]
--------------------------------------------------
<1> Fetching precision metric by name
<2> Fetching precision at a given (0.4) threshold
<3> Fetching confusion matrix metric by name
<4> Fetching confusion matrix at a given (0.5) threshold
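
A hedged sketch of reading these results. The accessor names, in particular +getMetricByName+ and +getScoreByThreshold+, are assumptions based on the callouts rather than verified signatures:

["source","java"]
--------------------------------------------------
// Hypothetical sketch of reading binary soft classification results.
PrecisionMetric.Result precisionResult =
    response.getMetricByName(PrecisionMetric.NAME);             // precision metric by name
double precision = precisionResult.getScoreByThreshold("0.4");  // precision at threshold 0.4

ConfusionMatrixMetric.Result confusionMatrixResult =
    response.getMetricByName(ConfusionMatrixMetric.NAME);       // confusion matrix metric by name
ConfusionMatrixMetric.ConfusionMatrix confusionMatrix =
    confusionMatrixResult.getScoreByThreshold("0.5");           // confusion matrix at threshold 0.5
--------------------------------------------------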
===== Classification
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-results-classification]
--------------------------------------------------
<1> Fetching accuracy metric by name
<2> Fetching the actual accuracy value
<3> Fetching multiclass confusion matrix metric by name
<4> Fetching the contents of the confusion matrix
<5> Fetching the number of classes that were not included in the matrix
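
A hedged sketch of reading these results; the accessor names are assumptions based on the callouts:

["source","java"]
--------------------------------------------------
// Hypothetical sketch of reading classification results.
AccuracyMetric.Result accuracyResult =
    response.getMetricByName(AccuracyMetric.NAME);                     // accuracy metric by name
double accuracy = accuracyResult.getAccuracy();                        // the actual accuracy value

MulticlassConfusionMatrixMetric.Result confusionMatrixResult =
    response.getMetricByName(MulticlassConfusionMatrixMetric.NAME);    // confusion matrix metric by name
List<MulticlassConfusionMatrixMetric.ActualClass> confusionMatrix =
    confusionMatrixResult.getConfusionMatrix();                        // contents of the confusion matrix
long otherClassesCount = confusionMatrixResult.getOtherClassesCount(); // classes not included in the matrix
--------------------------------------------------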
===== Regression
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-results-regression]
--------------------------------------------------
<1> Fetching mean squared error metric by name
<2> Fetching the actual mean squared error value
<3> Fetching R squared metric by name
<4> Fetching the actual R squared value
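
A hedged sketch of reading these results; the accessor names are assumptions based on the callouts:

["source","java"]
--------------------------------------------------
// Hypothetical sketch of reading regression results.
MeanSquaredErrorMetric.Result meanSquaredErrorResult =
    response.getMetricByName(MeanSquaredErrorMetric.NAME);   // mean squared error metric by name
double meanSquaredError = meanSquaredErrorResult.getError(); // the actual mean squared error value

RSquaredMetric.Result rSquaredResult =
    response.getMetricByName(RSquaredMetric.NAME);            // R squared metric by name
double rSquared = rSquaredResult.getValue();                  // the actual R squared value
--------------------------------------------------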