---
id: migration-guide
title: Migration guides
description: How to migrate from legacy features to get the most from Druid updates
---
In general, when we introduce new features and behaviors into Apache Druid, we make every effort to avoid breaking existing functionality. However, sometimes the old behaviors have bugs or performance limitations that cannot be fixed in a backward-compatible way. In these cases, we must introduce breaking changes for the future maintainability of Druid.
The guides in this section outline breaking changes introduced in Druid 25.0.0 and later. Each guide provides instructions to migrate to new features.
## Migrate from multi-value dimensions to arrays
Druid now supports SQL-compliant array types. Whenever possible, use arrays instead of multi-value dimensions. See Migration guide: MVDs to arrays.
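As a brief, hedged illustration of how the two column types behave in queries, here is a sketch in Druid SQL. The datasource `events` and the column `tags` are hypothetical; `ARRAY_CONTAINS` and `MV_TO_ARRAY` are existing Druid SQL functions.

```sql
-- With a multi-value string dimension, an equality filter matches any row in
-- which at least one of the values equals 'druid'.
SELECT COUNT(*) FROM "events" WHERE "tags" = 'druid';

-- With an ARRAY<STRING> column, express the same intent with array functions.
SELECT COUNT(*) FROM "events" WHERE ARRAY_CONTAINS("tags", 'druid');

-- MV_TO_ARRAY converts a multi-value dimension to an array at query time,
-- which can help you compare behaviors while you migrate existing columns.
SELECT MV_TO_ARRAY("tags") AS tags_array FROM "events" LIMIT 10;
```

Note that GROUP BY on a multi-value dimension implicitly unnests its values, while GROUP BY on an array column groups on the whole array value; this difference is one of the main reasons to review existing queries during the migration.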
## Migrate to front-coded dictionary encoding
Druid encodes string columns into dictionaries for better compression. Front-coded dictionary encoding reduces storage and improves performance by optimizing for strings that share similar beginning substrings. See Migration guide: front-coded dictionaries for more information.
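Front coding itself is enabled through the ingestion `indexSpec` (a `stringDictionaryEncoding` of type `frontCoded`) rather than through SQL, so the hedged sketch below only shows one way to observe its effect: comparing on-disk segment sizes through the `sys.segments` system table after reindexing a datasource. The query assumes access to the `sys` schema; the column names come from `sys.segments`.

```sql
-- Compare per-datasource on-disk size before and after reindexing with
-- front-coded string dictionaries enabled in the indexSpec. Only segments
-- that are currently active are counted here.
SELECT
  "datasource",
  SUM("size") AS total_bytes,
  SUM("num_rows") AS total_rows
FROM sys.segments
WHERE is_active = 1
GROUP BY "datasource";
```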
## Migrate from `maxSubqueryRows` to `maxSubqueryBytes`
Druid allows you to set a byte-based limit on subquery size to prevent Brokers from running out of memory when handling large subqueries. The byte-based subquery limit overrides Druid's row-based subquery limit. Starting in Druid 30.0.0, we recommend that you use byte-based limits. See Migration guide: subquery limit for more information.
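To make the limit concrete, the hedged sketch below shows the kind of query it governs. The datasource `wikipedia` and its columns are hypothetical stand-ins; the limits themselves are set through the `maxSubqueryRows` and `maxSubqueryBytes` query context parameters (or the corresponding Broker configuration), not in the SQL text.

```sql
-- The Broker inlines the result of the inner GROUP BY before performing the
-- join, so the size of that intermediate result is what the subquery limit
-- bounds. Setting the maxSubqueryBytes query context parameter (for example,
-- to "auto") applies a byte-based limit that takes precedence over the
-- row-based maxSubqueryRows limit.
SELECT w."channel", SUM(w."added") AS total_added
FROM "wikipedia" AS w
INNER JOIN (
  SELECT "channel"
  FROM "wikipedia"
  GROUP BY "channel"
  HAVING COUNT(*) > 1000
) AS busy
  ON w."channel" = busy."channel"
GROUP BY w."channel";
```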
## Migrate to SQL compliant null handling mode
By default, the Druid null handling mode is now compliant with ANSI SQL. The guide provides strategies for Druid operators and users whose applications rely on the legacy Druid null handling behavior to transition to ANSI SQL compliant mode. See Migration guide: SQL compliant mode for more information.
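As a minimal, hedged example of the kind of query adjustment involved, assume a hypothetical datasource `events` with a nullable string column `status`:

```sql
-- In legacy mode, missing string values were stored as '' and matched this
-- filter. In SQL-compliant mode, NULL <> 'cancelled' evaluates to NULL rather
-- than true, so rows with no status are excluded.
SELECT COUNT(*) FROM "events" WHERE "status" <> 'cancelled';

-- To keep counting rows with no status under SQL-compliant null handling,
-- include them explicitly.
SELECT COUNT(*) FROM "events"
WHERE "status" <> 'cancelled' OR "status" IS NULL;
```

The second form preserves the legacy result because SQL three-valued logic never treats a comparison with NULL as true.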