|VARCHAR|STRING|`''`|Druid STRING columns are reported as VARCHAR. Can include [multi-value strings](#multi-value-strings) as well.|
|DECIMAL|DOUBLE|`0.0`|DECIMAL uses floating point, not fixed point math|
|FLOAT|FLOAT|`0.0`|Druid FLOAT columns are reported as FLOAT|
|REAL|DOUBLE|`0.0`||
|DOUBLE|DOUBLE|`0.0`|Druid DOUBLE columns are reported as DOUBLE|
|BOOLEAN|LONG|`false`||
|TINYINT|LONG|`0`||
|SMALLINT|LONG|`0`||
|INTEGER|LONG|`0`||
|BIGINT|LONG|`0`|Druid LONG columns (except `__time`) are reported as BIGINT|
|TIMESTAMP|LONG|`0`, meaning 1970-01-01 00:00:00 UTC|Druid's `__time` column is reported as TIMESTAMP. Casts between string and timestamp types assume standard SQL formatting, e.g. `2000-01-02 03:04:05`, _not_ ISO8601 formatting. For handling other formats, use one of the [time functions](sql-scalar.md#date-and-time-functions).|
|DATE|LONG|`0`, meaning 1970-01-01|Casting TIMESTAMP to DATE rounds down the timestamp to the nearest day. Casts between string and date types assume standard SQL formatting, e.g. `2000-01-02`. For handling other formats, use one of the [time functions](sql-scalar.md#date-and-time-functions).|
|ARRAY|ARRAY|`NULL`|Druid native array types work as SQL arrays, and multi-value strings can be converted to arrays. See the [`ARRAY` details](#arrays).|
<sup>*</sup> Default value applies if `druid.generic.useDefaultValueForNull = true` (the default mode). Otherwise, the default value is `NULL` for all types.
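As noted in the TIMESTAMP and DATE rows above, casts between strings and date or time types assume standard SQL formatting, while other formats such as ISO8601 require a time function. A minimal sketch of the difference, assuming a hypothetical `events` datasource with hypothetical string columns:

```sql
-- Sketch only: "events", "sql_time", and "iso_time" are hypothetical names.
SELECT
  CAST("sql_time" AS TIMESTAMP) AS from_sql_format,  -- expects values like '2000-01-02 03:04:05'
  TIME_PARSE("iso_time")        AS from_iso8601      -- parses ISO8601, e.g. '2000-01-02T03:04:05Z'
FROM "events"
```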
> SQL behavior of multi-value dimensions may change in a future release to more closely align with their behavior
> in native queries, but the [multi-value string functions](./sql-multivalue-string-functions.md) should be able to provide
> nearly all possible native functionality.
## Arrays
Druid supports `ARRAY` types constructed at query time, though it currently lacks the ability to store them in
segments. `ARRAY` types behave as standard SQL arrays, where results are grouped by matching entire arrays. This is in
contrast to the implicit `UNNEST` that occurs when grouping on multi-value dimensions directly or when used with the
multi-value functions. You can convert multi-value dimensions to standard SQL arrays either explicitly, by converting
them with `MV_TO_ARRAY`, or implicitly, when they are used within the [array functions](./sql-array-functions.md). Arrays may
also be constructed from multiple columns using the array functions.
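For illustration, here is a minimal sketch assuming a hypothetical `events` datasource with a multi-value string column `tags` and single-value columns `dim1` and `dim2`:

```sql
-- Sketch only: "events", "tags", "dim1", and "dim2" are hypothetical names.
-- Group by the entire array rather than implicitly unnesting individual values:
SELECT MV_TO_ARRAY("tags") AS tag_array, COUNT(*) AS cnt
FROM "events"
GROUP BY 1

-- Construct an array from multiple columns:
SELECT ARRAY["dim1", "dim2"] AS dims
FROM "events"
```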
## Multi-value strings
The behavior of Druid [multi-value string dimensions](multi-value-dimensions.md) varies depending on the context of
their usage.
When used with standard `VARCHAR` functions that expect a single input value per row, such as `CONCAT`, Druid maps
the function across all values in the row. If the row is null or empty, the function receives `NULL` as its input.
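For example, assuming a hypothetical `events` datasource with a multi-value string column `tags`, a scalar function maps across every value in the row:

```sql
-- Sketch only: "events" and "tags" are hypothetical names.
-- For a row where "tags" is ['a', 'b'], this produces ['a-suffix', 'b-suffix']:
SELECT CONCAT("tags", '-suffix') AS tagged
FROM "events"
```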
When used with the explicit [multi-value string functions](./sql-multivalue-string-functions.md), Druid processes the
row values as if they were `ARRAY` typed. Operations that produce null or empty values distinguish them as
separate values, unlike the implicit mapping behavior. These multi-value string functions, typically denoted with an `MV_`
prefix, retain their `VARCHAR` type after the computation is complete. Note that Druid multi-value columns do _not_
distinguish between empty and null rows. An empty row never appears natively as input to a multi-value function,
but any multi-value function that manipulates the array form of the value may produce an empty array, which is handled
separately while processing.
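A sketch of the explicit multi-value functions, again assuming the hypothetical `tags` column:

```sql
-- Sketch only: "events" and "tags" are hypothetical names.
SELECT
  MV_LENGTH("tags")        AS tag_count,  -- number of values in the row
  MV_CONTAINS("tags", 'a') AS has_a,      -- true if the row contains the value 'a'
  MV_APPEND("tags", 'z')   AS with_z      -- array form with 'z' appended, still VARCHAR typed
FROM "events"
```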
> Do not mix the usage of multi-value functions and normal scalar functions within the same expression, as the planner will be unable
> to determine how to properly process the value given its ambiguous usage. A multi-value string must be treated consistently within
> an expression.
When converted to `ARRAY` or used with [array functions](./sql-array-functions.md), multi-value strings behave as standard SQL arrays and can no longer
be manipulated with non-array functions.
If grouping was not applied to the value, Druid serializes multi-value `VARCHAR` results as a JSON string of the array.
If the value was grouped, the implicit `UNNEST` behavior means all results are standard single-value
`VARCHAR`. `ARRAY` typed results are serialized into stringified JSON arrays if the context parameter
`sqlStringifyArrays` is set; otherwise, they remain in their array format.
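For example, with the hypothetical `tags` column:

```sql
-- Sketch only: "events" and "tags" are hypothetical names.
-- Without grouping, each multi-value result is a JSON string of the array, e.g. ["a","b"]:
SELECT "tags" FROM "events"

-- With grouping, the implicit UNNEST produces standard single-value VARCHAR rows:
SELECT "tags" FROM "events" GROUP BY 1
```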
## Nested columns

Druid supports storing nested data structures in segments using the native `COMPLEX<json>` type. See [Nested columns](./nested-columns.md) for more information.
You can interact with nested data using [JSON functions](./sql-json-functions.md), which can extract nested values, parse from string, serialize to string, and create new `COMPLEX<json>` structures.
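A minimal sketch of these functions, assuming a hypothetical `events` datasource with a `COMPLEX<json>` column named `attributes`:

```sql
-- Sketch only: "events" and its COMPLEX<json> column "attributes" are hypothetical.
SELECT
  JSON_VALUE("attributes", '$.device.os') AS os,              -- extract a nested value as VARCHAR
  JSON_QUERY("attributes", '$.device')    AS device,          -- extract a nested object as COMPLEX<json>
  TO_JSON_STRING("attributes")            AS attributes_json, -- serialize to a JSON string
  PARSE_JSON('{"x": 1}')                  AS parsed           -- parse a string into COMPLEX<json>
FROM "events"
```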
`COMPLEX` types have limited functionality outside the specialized functions that use them, so their behavior is undefined when:
* Grouping on complex values.
* Filtering directly on complex values, such as `WHERE json IS NULL`.
* Using complex values as inputs to aggregators without specialized handling for a specific complex type.
In many cases, Druid provides functions to translate `COMPLEX` value types to `STRING`, which serves as a workaround until `COMPLEX` type functionality can be improved.
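For example, one such workaround is to serialize the complex value to a string before grouping or filtering on it (hypothetical names again):

```sql
-- Sketch only: "events" and "attributes" are hypothetical names.
-- TO_JSON_STRING yields a VARCHAR that can be grouped and filtered directly:
SELECT TO_JSON_STRING("attributes") AS attributes_json, COUNT(*) AS cnt
FROM "events"
GROUP BY 1
```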