diff --git a/docs/en/ml/aggregations.asciidoc b/docs/en/ml/aggregations.asciidoc
index 99011ba99e1..7739dd142c0 100644
--- a/docs/en/ml/aggregations.asciidoc
+++ b/docs/en/ml/aggregations.asciidoc
@@ -33,7 +33,7 @@ PUT _xpack/ml/anomaly_detectors/farequote
       "field_name":"responsetime",
       "by_field_name":"airline"
     }],
-    "summary_count_field": "doc_count"
+    "summary_count_field_name": "doc_count"
   },
   "data_description": {
     "time_field":"time"
diff --git a/docs/en/ml/functions/count.asciidoc b/docs/en/ml/functions/count.asciidoc
index 7248021dce5..d47b366513d 100644
--- a/docs/en/ml/functions/count.asciidoc
+++ b/docs/en/ml/functions/count.asciidoc
@@ -89,7 +89,7 @@ compared to its past behavior.
 [source,js]
 --------------------------------------------------
 {
-  "summary_count_field" : "events_per_min",
+  "summary_count_field_name" : "events_per_min",
   "detectors" [
     { "function" : "count" }
   ]
@@ -98,7 +98,7 @@ compared to its past behavior.
 
 If you are analyzing an aggregated `events_per_min` field, do not use a sum
 function (for example, `sum(events_per_min)`). Instead, use the count function
-and the `summary_count_field` property.
+and the `summary_count_field_name` property.
 //TO-DO: For more information, see <>.
 
 [float]
diff --git a/docs/en/ml/getting-started.asciidoc b/docs/en/ml/getting-started.asciidoc
index 978d9984d47..d582381da22 100644
--- a/docs/en/ml/getting-started.asciidoc
+++ b/docs/en/ml/getting-started.asciidoc
@@ -331,7 +331,7 @@ requests over time. It is therefore logical to start by creating a single metric
 job for this KPI.
 
 TIP: If you are using aggregated data, you can create an advanced job
-and configure it to use a `summary_count_field_name`. The {ml} algorithms will
+and configure it to use a `summary_count_field_name`. The {ml} algorithms will
 make the best possible use of summarized data in this case.
 For simplicity, in this tutorial we will not make use of that advanced
 functionality.
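
For reference, a complete job request using the renamed property would look like the sketch below. It is assembled from the `farequote` fragment in `aggregations.asciidoc` above; the `bucket_span` value and the `mean` function are assumptions added for completeness, not taken from the diff:

```js
PUT _xpack/ml/anomaly_detectors/farequote
{
  "analysis_config": {
    "bucket_span": "10m",          // assumed value; not shown in the diff context
    "detectors": [{
      "function": "mean",          // assumed function for the responsetime metric
      "field_name": "responsetime",
      "by_field_name": "airline"
    }],
    "summary_count_field_name": "doc_count"
  },
  "data_description": {
    "time_field": "time"
  }
}
```

With `summary_count_field_name` set to `doc_count`, each input document is treated as a pre-aggregated summary of `doc_count` raw events rather than a single observation, which is why the count docs above warn against `sum(events_per_min)` on already-aggregated data.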