[role="xpack"]
[testenv="basic"]
[[search-aggregations-metrics-ttest-aggregation]]
=== TTest Aggregation
A `t_test` metrics aggregation that performs a statistical hypothesis test in which the test statistic follows a Student's t-distribution
under the null hypothesis on numeric values extracted from the aggregated documents or generated by provided scripts. In practice, this
will tell you whether the difference between two population means is statistically significant and did not occur by chance alone.

==== Syntax
A `t_test` aggregation looks like this in isolation:
[source,js]
--------------------------------------------------
{
  "t_test": {
    "a": "value_before",
    "b": "value_after",
    "type": "paired"
  }
}
--------------------------------------------------
// NOTCONSOLE
Assuming that we have a record of node startup times before and after an upgrade, let's use a t-test to see whether the upgrade affected
the node startup time in a meaningful way.
[source,console]
--------------------------------------------------
GET node_upgrade/_search
{
  "size": 0,
  "aggs": {
    "startup_time_ttest": {
      "t_test": {
        "a": { "field": "startup_time_before" }, <1>
        "b": { "field": "startup_time_after" },  <2>
        "type": "paired"                         <3>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:node_upgrade]
<1> The field `startup_time_before` must be a numeric field.
<2> The field `startup_time_after` must be a numeric field.
<3> Since we have data from the same nodes, we are using a paired t-test.
The response returns the p-value, or probability value, for the test. It is the probability of obtaining results at least as extreme as
the result processed by the aggregation, assuming that the null hypothesis is correct (which means there is no difference between the
population means). A smaller p-value means the null hypothesis is more likely to be incorrect and the population means are indeed different.
[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "startup_time_ttest": {
      "value": 0.1914368843365979 <1>
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
<1> The p-value.
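For intuition, the paired statistic behind this p-value can be reproduced client-side. The sketch below is plain Python with hypothetical startup times (the sample data and function name are illustrative, not part of the aggregation); it computes the paired t-statistic and degrees of freedom, and the aggregation's p-value is the two-sided tail probability of that statistic under a Student's t-distribution:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(before, after):
    """Paired t-test: reduce the two samples to per-node differences,
    then test whether the mean difference is zero."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    # t = mean difference divided by its standard error
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1  # statistic and degrees of freedom

# Hypothetical startup times (seconds) for the same five nodes
before = [102, 99, 111, 97, 101]
after = [89, 93, 72, 98, 102]
t, df = paired_t_statistic(before, after)
print(round(t, 4), df)  # -> 1.5097 4
```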
==== T-Test Types
The `t_test` aggregation supports unpaired and paired two-sample t-tests. The type of the test can be specified using the `type` parameter:
`"type": "paired"`:: performs paired t-test
`"type": "homoscedastic"`:: performs two-sample equal variance test
`"type": "heteroscedastic"`:: performs two-sample unequal variance test (this is default)
==== Script
The `t_test` metric supports scripting. For example, if we need to adjust the before values for load time, we could use
a script to recalculate them on the fly:
[source,console]
--------------------------------------------------
GET node_upgrade/_search
{
  "size": 0,
  "aggs": {
    "startup_time_ttest": {
      "t_test": {
        "a": {
          "script": {
            "lang": "painless",
            "source": "doc['startup_time_before'].value - params.adjustment", <1>
            "params": {
              "adjustment": 10 <2>
            }
          }
        },
        "b": {
          "field": "startup_time_after" <3>
        },
        "type": "paired"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:node_upgrade]
<1> The `field` parameter is replaced with a `script` parameter, which uses the
script to generate values on which the t-test is performed.
<2> Scripting supports parameterized input just like any other script.
<3> We can mix scripts and fields.