---
id: segmentmetadataquery
title: "SegmentMetadata queries"
sidebar_label: "SegmentMetadata"
---
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->

> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type that is only available in the native language. However, Druid SQL contains similar functionality in
> its [metadata tables](sql.md#metadata-tables).

Segment metadata queries return per-segment information about:

* Number of rows stored inside the segment
* Interval the segment covers
* Estimated total segment byte size as if it were stored in a 'flat format' (e.g., a CSV file)
* Segment id
* Whether the segment is rolled up
* Detailed per-column information such as:
  - type
  - cardinality
  - min/max values
  - presence of null values
  - estimated 'flat format' byte size
```json
{
"queryType":"segmentMetadata",
"dataSource":"sample_datasource",
"intervals":["2013-01-01/2014-01-01"]
}
```

There are several main parts to a segment metadata query:

|property|description|required?|
|--------|-----------|---------|
|queryType|This String should always be "segmentMetadata"; this is the first thing Apache Druid looks at to figure out how to interpret the query|yes|
|dataSource|A String or Object defining the data source to query, very similar to a table in a relational database. See [DataSource](../querying/datasource.md) for more information.|yes|
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|no|
|toInclude|A JSON Object representing what columns should be included in the result. Defaults to "all".|no|
|merge|Merge all individual segment metadata results into a single result|no|
|context|See [Context](../querying/query-context.md)|no|
|analysisTypes|A list of Strings specifying what column properties (e.g. cardinality, size) should be calculated and returned in the result. Defaults to ["cardinality", "interval", "minmax"], but this can be overridden in the [segment metadata query config](../configuration/index.md#segmentmetadata-query-config). See the [analysisTypes](#analysistypes) section for more details.|no|
|lenientAggregatorMerge|If true, and if the "aggregators" analysisType is enabled, aggregators will be merged leniently. See below for details.|no|

The format of the result is:
```json
[ {
"id" : "some_id",
"intervals" : [ "2013-05-13T00:00:00.000Z/2013-05-14T00:00:00.000Z" ],
"columns" : {
"__time" : { "type" : "LONG", "hasMultipleValues" : false, "hasNulls": false, "size" : 407240380, "cardinality" : null, "errorMessage" : null },
"dim1" : { "type" : "STRING", "hasMultipleValues" : false, "hasNulls": false, "size" : 100000, "cardinality" : 1944, "errorMessage" : null },
"dim2" : { "type" : "STRING", "hasMultipleValues" : true, "hasNulls": true, "size" : 100000, "cardinality" : 1504, "errorMessage" : null },
"metric1" : { "type" : "FLOAT", "hasMultipleValues" : false, "hasNulls": false, "size" : 100000, "cardinality" : null, "errorMessage" : null }
},
"aggregators" : {
"metric1" : { "type" : "longSum", "name" : "metric1", "fieldName" : "metric1" }
},
"queryGranularity" : {
"type": "none"
},
"size" : 300000,
"numRows" : 5000000
} ]
```
Dimension columns will have type `STRING`, `FLOAT`, `DOUBLE`, or `LONG`.
Metric columns will have type `FLOAT`, `DOUBLE`, or `LONG`, or the name of the underlying complex type such as `hyperUnique` in the case of a COMPLEX metric.
The timestamp column will have type `LONG`.

If the `errorMessage` field is non-null, you should not trust the other fields in the response. Their contents are undefined.

Only columns that are dictionary-encoded (i.e., have type `STRING`) will have a cardinality. The rest of the columns (timestamp and metric columns) will report their cardinality as `null`.
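
As a fuller example, the following query combines several of the optional properties from the table above, merging the per-segment results into a single result and setting a query timeout through the context (the datasource name and context values are illustrative):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "intervals": ["2013-01-01/2014-01-01"],
  "merge": true,
  "context": { "timeout": 60000 }
}
```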
## intervals
If an interval is not specified, the query will use a default interval that spans a configurable period before the end time of the most recent segment.
The length of this default time period is set in the Broker configuration via `druid.query.segmentMetadata.defaultHistory`.
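
For example, setting `druid.query.segmentMetadata.defaultHistory=P2W` in the Broker's runtime properties would make queries without an `intervals` field cover the two weeks before the end time of the most recent segment (the value here is illustrative; it is an ISO-8601 period).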
## toInclude
There are three types of `toInclude` objects.
### All
The grammar is as follows:
``` json
"toInclude": { "type": "all"}
```
### None
The grammar is as follows:
``` json
"toInclude": { "type": "none"}
```
### List
The grammar is as follows:
``` json
"toInclude": { "type": "list", "columns": [< string list of column names > ]}
```
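
For example, a query that restricts the analysis to a handful of columns might look like this (the datasource and column names are illustrative):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "intervals": ["2013-01-01/2014-01-01"],
  "toInclude": { "type": "list", "columns": ["__time", "dim1", "metric1"] }
}
```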
## analysisTypes
This is a list of properties that determines the amount of information returned about the columns, i.e., which analyses should be performed on the columns.

By default, the "cardinality", "interval", and "minmax" types will be used. If a property is not needed, omitting it from this list will result in a more efficient query.

The default analysis types can be set in the Broker configuration via `druid.query.segmentMetadata.defaultAnalysisTypes`.
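
For example, a query that only needs cardinality and min/max information could request just those analyses (the datasource name is illustrative):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "intervals": ["2013-01-01/2014-01-01"],
  "analysisTypes": ["cardinality", "minmax"]
}
```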
Types of column analyses are described below:
### cardinality
* `cardinality` in the result will return the size of the bitmap index or dictionary encoding for string dimensions, or null for other dimension types.
If `merge` was set, the result will be the max of this value across segments. Only relevant for dimension columns.
### minmax
* Estimated min/max values for each column. Only relevant for dimension columns.
### size
* `size` in the result will contain the estimated total segment byte size as if the data were stored in text format.
### interval
* `intervals` in the result will contain the list of intervals associated with the queried segments.
### timestampSpec
* `timestampSpec` in the result will contain the timestampSpec of the data stored in segments. This can be null if the timestampSpec of the segments was unknown or unmergeable (if merging is enabled).
### queryGranularity
* `queryGranularity` in the result will contain the query granularity of the data stored in segments. This can be null if the query granularity of the segments was unknown or unmergeable (if merging is enabled).
### aggregators
* `aggregators` in the result will contain the list of aggregators usable for querying metric columns. This may be
null if the aggregators are unknown or unmergeable (if merging is enabled).
* Merging can be strict or lenient. See *lenientAggregatorMerge* below for details.
* The form of the result is a map of column name to aggregator.
### rollup
* `rollup` in the result is true/false/null.
* When merging is enabled, if some segments are rolled up and others are not, the result is null.
## lenientAggregatorMerge
Conflicts between aggregator metadata across segments can occur if some segments have unknown aggregators, or if
two segments use incompatible aggregators for the same column (e.g. longSum changed to doubleSum).
Aggregators can be merged strictly (the default) or leniently. With strict merging, if there are any segments
with unknown aggregators, or any conflicts of any kind, the merged aggregators list will be `null`. With lenient
merging, segments with unknown aggregators will be ignored, and conflicts between aggregators will only null out
the aggregator for that particular column.
In particular, with lenient merging, it is possible for an individual column's aggregator to be `null`. This will not occur with strict merging.
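
For example, a merged query that analyzes aggregators and rollup while tolerating conflicting aggregator metadata might look like this (the datasource name is illustrative):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "merge": true,
  "analysisTypes": ["aggregators", "rollup"],
  "lenientAggregatorMerge": true
}
```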