---
id: distinctcount
title: "DistinctCount Aggregator"
---

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) the `druid-distinctcount` extension.
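For example, add it to the `druid.extensions.loadList` property in `common.runtime.properties` (the single-entry list below assumes no other extensions are loaded):

```
druid.extensions.loadList=["druid-distinctcount"]
```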
Additionally, follow these steps:
1. First, use a single-dimension, hash-based partition spec to partition the data by a single dimension, for example `visitor_id`. This ensures that all rows with a particular value of that dimension go into the same segment; without it, the aggregator may over count (see the ingestion-spec sketch after this list).
2. Second, use distinctCount to calculate the distinct count, making sure the queryGranularity divides evenly into the segmentGranularity, or else the result will be wrong.
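
As a point of reference for step 1, the fragment below is a minimal sketch of the relevant pieces of an ingestion spec, assuming a native batch (`index_parallel`) task; the day segment granularity and the shard count of 4 are illustrative placeholders, not values from the original example:

```json
{
  "granularitySpec": {
    "type": "uniform",
    "segmentGranularity": "day"
  },
  "tuningConfig": {
    "type": "index_parallel",
    "forceGuaranteedRollup": true,
    "partitionsSpec": {
      "type": "hashed",
      "numShards": 4,
      "partitionDimensions": ["visitor_id"]
    }
  }
}
```

With day segments as above, a query granularity of day or finer satisfies step 2, because each query-time bucket then falls entirely within a single segment-granularity period.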
There are some limitations. When used with groupBy, the number of distinct groupBy keys in each segment must not exceed maxIntermediateRows; if it does, the result will be wrong. When used with topN, numValuesPerPass should not be too large; if it is, distinctCount will use a lot of memory and might cause the JVM to run out of memory.
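
For groupBy, maxIntermediateRows is governed by the `druid.query.groupBy.maxIntermediateRows` runtime property and, for the legacy (v1) groupBy engine, can also be lowered per query through the query context. The snippet below is only an illustration of such a context override; the value 50000 is an arbitrary placeholder:

```json
{
  "context": {
    "maxIntermediateRows": 50000
  }
}
```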
Examples:
## Timeseries query
```json
{
  "queryType": "timeseries",
  "dataSource": "sample_datasource",
  "granularity": "day",
  "aggregations": [
    {
      "type": "distinctCount",
      "name": "uv",
      "fieldName": "visitor_id"
    }
  ],
  "intervals": [
    "2016-03-01T00:00:00.000/2016-03-20T00:00:00.000"
  ]
}
```
## TopN query
```json
{
  "queryType": "topN",
  "dataSource": "sample_datasource",
  "dimension": "sample_dim",
  "threshold": 5,
  "metric": "uv",
  "granularity": "all",
  "aggregations": [
    {
      "type": "distinctCount",
      "name": "uv",
      "fieldName": "visitor_id"
    }
  ],
  "intervals": [
    "2016-03-06T00:00:00/2016-03-06T23:59:59"
  ]
}
```
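
Because `"metric": "uv"` refers to the distinctCount aggregator defined above, this topN ranks `sample_dim` values by their number of distinct `visitor_id` values and returns the top 5.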
## GroupBy query
```json
{
  "queryType": "groupBy",
  "dataSource": "sample_datasource",
  "dimensions": ["sample_dim"],
  "granularity": "all",
  "aggregations": [
    {
      "type": "distinctCount",
      "name": "uv",
      "fieldName": "visitor_id"
    }
  ],
  "intervals": [
    "2016-03-06T00:00:00/2016-03-06T23:59:59"
  ]
}
```