[[search-aggregations-bucket-diversified-sampler-aggregation]]
=== Diversified Sampler Aggregation

Like the `sampler` aggregation, this is a filtering aggregation used to limit any sub-aggregations' processing to a sample of the top-scoring documents.
The `diversified_sampler` aggregation adds the ability to limit the number of matches that share a common value such as an "author".

NOTE: Any good market researcher will tell you that when working with samples of data it is important
that the sample represents a healthy variety of opinions rather than being skewed by any single voice.
The same is true with aggregations; sampling with these diversify settings can offer a way to remove bias from your content (an over-populated geography,
a large spike in a timeline, or an over-active forum spammer).

.Example use cases:
* Tightening the focus of analytics to high-relevance matches rather than the potentially very long tail of low-quality matches
* Removing bias from analytics by ensuring fair representation of content from different sources
* Reducing the running cost of aggregations that can produce useful results using only samples, e.g. `significant_terms`

The `field` or `script` setting provides the values used for de-duplication, and the `max_docs_per_value` setting controls the maximum
number of documents collected on any one shard which share a common value. The default setting for `max_docs_per_value` is 1.

The aggregation will throw an error if the choice of `field` or `script` produces multiple values for a single document (de-duplication using multi-valued fields is not supported due to efficiency concerns).

Example:

We might want to see which tags are strongly associated with `#elasticsearch` on StackOverflow
forum posts, while ignoring the effects of some prolific users with a tendency to misspell `#Kibana` as `#Cabana`.

[source,console]
--------------------------------------------------
POST /stackoverflow/_search?size=0
{
  "query": {
    "query_string": {
      "query": "tags:elasticsearch"
    }
  },
  "aggs": {
    "my_unbiased_sample": {
      "diversified_sampler": {
        "shard_size": 200,
        "field": "author"
      },
      "aggs": {
        "keywords": {
          "significant_terms": {
            "field": "tags",
            "exclude": ["elasticsearch"]
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:stackoverflow]

Response:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "my_unbiased_sample": {
      "doc_count": 151,<1>
      "keywords": {<2>
        "doc_count": 151,
        "bg_count": 650,
        "buckets": [
          {
            "key": "kibana",
            "doc_count": 150,
            "score": 2.213,
            "bg_count": 200
          }
        ]
      }
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
// TESTRESPONSE[s/2.213/$body.aggregations.my_unbiased_sample.keywords.buckets.0.score/]

<1> 151 documents were sampled in total.
<2> The results of the `significant_terms` aggregation are not skewed by any single author's quirks because we asked for a maximum of one post from any one author in our sample.

==== Scripted example:

In this scenario we might want to diversify on a combination of field values. We can use a `script` to produce a hash of the
multiple values in a `tags` field to ensure we don't have a sample that consists of the same repeated combinations of tags.

[source,console]
--------------------------------------------------
POST /stackoverflow/_search?size=0
{
  "query": {
    "query_string": {
      "query": "tags:kibana"
    }
  },
  "aggs": {
    "my_unbiased_sample": {
      "diversified_sampler": {
        "shard_size": 200,
        "max_docs_per_value": 3,
        "script": {
          "lang": "painless",
          "source": "doc['tags'].hashCode()"
        }
      },
      "aggs": {
        "keywords": {
          "significant_terms": {
            "field": "tags",
            "exclude": ["kibana"]
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:stackoverflow]

Response:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "my_unbiased_sample": {
      "doc_count": 6,
      "keywords": {
        "doc_count": 6,
        "bg_count": 650,
        "buckets": [
          {
            "key": "logstash",
            "doc_count": 3,
            "score": 2.213,
            "bg_count": 50
          },
          {
            "key": "elasticsearch",
            "doc_count": 3,
            "score": 1.34,
            "bg_count": 200
          }
        ]
      }
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
// TESTRESPONSE[s/2.213/$body.aggregations.my_unbiased_sample.keywords.buckets.0.score/]
// TESTRESPONSE[s/1.34/$body.aggregations.my_unbiased_sample.keywords.buckets.1.score/]

==== shard_size

The `shard_size` parameter limits how many top-scoring documents are collected in the sample processed on each shard.
The default value is 100.
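
For illustration, a minimal sketch (a fragment, not a complete request) that raises the per-shard sample to 500 documents:

[source,js]
--------------------------------------------------
"diversified_sampler": {
  "shard_size": 500, <1>
  "field": "author"
}
--------------------------------------------------
// NOTCONSOLE
<1> Collect up to 500 top-scoring documents per shard rather than the default 100.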

==== max_docs_per_value

`max_docs_per_value` is an optional parameter that limits how many documents are permitted per choice of de-duplicating value.
The default setting is `1`.
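
As a sketch, allowing up to three posts from any one author into the sample on each shard:

[source,js]
--------------------------------------------------
"diversified_sampler": {
  "field": "author",
  "max_docs_per_value": 3 <1>
}
--------------------------------------------------
// NOTCONSOLE
<1> Each distinct author may contribute at most three documents per shard.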

==== execution_hint

The optional `execution_hint` setting can influence the management of the values used for de-duplication.
Each option will hold up to `shard_size` values in memory while performing de-duplication, but the type of value held can be controlled as follows:

- hold field values directly (`map`)
- hold ordinals of the field as determined by the Lucene index (`global_ordinals`)
- hold hashes of the field values - with potential for hash collisions (`bytes_hash`)

The default setting is to use `global_ordinals` if this information is available from the Lucene index, reverting to `map` if not.
The `bytes_hash` setting may prove faster in some cases but introduces the possibility of false positives in the de-duplication logic due to hash collisions.
Please note that Elasticsearch will ignore the choice of execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.
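
For illustration, a sketch of requesting the `map` strategy (bearing in mind that Elasticsearch may ignore the hint where it is not applicable):

[source,js]
--------------------------------------------------
"diversified_sampler": {
  "field": "author",
  "execution_hint": "map" <1>
}
--------------------------------------------------
// NOTCONSOLE
<1> De-duplicate on the raw field values rather than on global ordinals or hashes.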

==== Limitations

[[div-sampler-breadth-first-nested-agg]]
===== Cannot be nested under `breadth_first` aggregations

Being a quality-based filter, the `diversified_sampler` aggregation needs access to the relevance score produced for each document.
It therefore cannot be nested under a `terms` aggregation which has the `collect_mode` switched from the default `depth_first` mode to `breadth_first`, as this discards scores.
In this situation an error will be thrown.
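
As an illustrative sketch, a nesting like the following would be rejected:

[source,js]
--------------------------------------------------
"aggs": {
  "authors": {
    "terms": {
      "field": "author",
      "collect_mode": "breadth_first" <1>
    },
    "aggs": {
      "sample": {
        "diversified_sampler": { "field": "author" } <2>
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE
<1> `breadth_first` collection discards document scores...
<2> ...so the score-based `diversified_sampler` nested here triggers an error.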

===== Limited de-dup logic

The de-duplication logic applies only at a shard level, so it will not apply across shards.

[[spec-syntax-geo-date-fields]]
===== No specialized syntax for geo/date fields

Currently the syntax for defining the diversifying values is defined by a choice of `field` or
`script`; there is no added syntactical sugar for expressing geo or date units such as "7d" (7
days). This support may be added in a later release and users will currently have to create these
sorts of values using a script.
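
For example, a sketch of one possible workaround that diversifies by calendar day, assuming a hypothetical `timestamp` date field:

[source,js]
--------------------------------------------------
"diversified_sampler": {
  "script": {
    "lang": "painless",
    "source": "doc['timestamp'].value.toInstant().toEpochMilli() / 86400000" <1>
  }
}
--------------------------------------------------
// NOTCONSOLE
<1> Integer division by the number of milliseconds in a day (86400000) buckets documents into day-sized de-duplication values.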