---
id: datasource
title: "Datasources"
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
Datasources in Apache Druid are things that you can query. The most common kind of datasource is a table datasource,
and in many contexts the word "datasource" implicitly refers to table datasources. This is especially true
[during data ingestion](../ingestion/index.md), where ingestion is always creating or writing into a table
datasource. But at query time, there are many other types of datasources available.
2015-05-05 17:07:32 -04:00
The word "datasource" is generally spelled `dataSource` (with a capital S) when it appears in API requests and
responses.
2015-05-05 17:07:32 -04:00
## Datasource type
2015-05-05 17:07:32 -04:00
### `table`

<Tabs>
<TabItem value="1" label="SQL">

```sql
SELECT column1, column2 FROM "druid"."dataSourceName"
```

</TabItem>
<TabItem value="2" label="Native">

```json
{
  "queryType": "scan",
  "dataSource": "dataSourceName",
  "columns": ["column1", "column2"],
  "intervals": ["0000/3000"]
}
```

</TabItem>
</Tabs>

The table datasource is the most common type. This is the kind of datasource you get when you perform
[data ingestion](../ingestion/index.md). Tables are split up into segments, distributed around the cluster,
and queried in parallel.

In [Druid SQL](sql.md#from), table datasources reside in the `druid` schema. This is the default schema, so table
datasources can be referenced as either `druid.dataSourceName` or simply `dataSourceName`.

In native queries, table datasources can be referenced using their names as strings (as in the example above), or by
using JSON objects of the form:
```json
"dataSource": {
  "type": "table",
  "name": "dataSourceName"
}
```

To see a list of all table datasources, use the SQL query
`SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'druid'`.

### `lookup`

<Tabs>
<TabItem value="3" label="SQL">

```sql
SELECT k, v FROM lookup.countries
```

</TabItem>
<TabItem value="4" label="Native">

```json
{
  "queryType": "scan",
  "dataSource": {
    "type": "lookup",
    "lookup": "countries"
  },
  "columns": ["k", "v"],
  "intervals": ["0000/3000"]
}
```

</TabItem>
</Tabs>

Lookup datasources correspond to Druid's key-value [lookup](lookups.md) objects. In [Druid SQL](sql.md#from),
they reside in the `lookup` schema. They are preloaded in memory on all servers, so they can be accessed rapidly.
They can be joined onto regular tables using the [join operator](#join).

Lookup datasources are key-value oriented and always have exactly two columns: `k` (the key) and `v` (the value), and
both are always strings.

To see a list of all lookup datasources, use the SQL query
`SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'lookup'`.

:::info
Performance tip: Lookups can be joined with a base table either using an explicit [join](#join), or by using the
SQL [`LOOKUP` function](sql-scalar.md#string-functions).
However, the join operator must evaluate the condition on each row, whereas the
`LOOKUP` function can defer evaluation until after an aggregation phase. This means that the `LOOKUP` function is
usually faster than joining to a lookup datasource.
:::
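
As an illustration, the two approaches look like this. This is a hedged sketch that borrows the hypothetical `sales` table and `store_to_country` lookup from the join examples later on this page:

```sql
-- Explicit join: the join condition is evaluated for every input row.
SELECT store_to_country.v AS country, SUM(sales.revenue) AS country_revenue
FROM sales
INNER JOIN lookup.store_to_country ON sales.store = store_to_country.k
GROUP BY store_to_country.v

-- LOOKUP function: evaluation can be deferred until after aggregation,
-- so this form is usually faster.
SELECT LOOKUP(store, 'store_to_country') AS country, SUM(revenue) AS country_revenue
FROM sales
GROUP BY LOOKUP(store, 'store_to_country')
```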

Refer to the [Query execution](query-execution.md#table) page for more details on how queries are executed when you
use table datasources.

### `union`

<Tabs>
<TabItem value="5" label="SQL">

```sql
SELECT column1, column2
FROM (
  SELECT column1, column2 FROM table1
  UNION ALL
  SELECT column1, column2 FROM table2
  UNION ALL
  SELECT column1, column2 FROM table3
)
```

</TabItem>
<TabItem value="6" label="Native">

```json
{
  "queryType": "scan",
  "dataSource": {
    "type": "union",
    "dataSources": ["table1", "table2", "table3"]
  },
  "columns": ["column1", "column2"],
  "intervals": ["0000/3000"]
}
```

</TabItem>
</Tabs>

Unions allow you to treat two or more tables as a single datasource. In SQL, this is done with the UNION ALL operator
applied directly to tables, called a ["table-level union"](sql.md#table-level). In native queries, this is done with a
"union" datasource.

With SQL [table-level unions](sql.md#table-level), the same columns must be selected from each table in the same order,
and those columns must either have the same types, or types that can be implicitly cast to each other (such as different
numeric types). For this reason, it is more robust to write your queries to select specific columns.
With the native union datasource, the tables being unioned do not need to have identical schemas. If they do not fully
match up, then columns that exist in one table but not another will be treated as if they contained all null values in
the tables where they do not exist.
In either case, features like expressions, column aliasing, JOIN, GROUP BY, ORDER BY, and so on cannot be used with
table unions.
Refer to the [Query execution](query-execution.md#union) page for more details on how queries are executed when you
use union datasources.

### `inline`

<Tabs>
<TabItem value="7" label="Native">

```json
{
  "queryType": "scan",
  "dataSource": {
    "type": "inline",
    "columnNames": ["country", "city"],
    "rows": [
      ["United States", "San Francisco"],
      ["Canada", "Calgary"]
    ]
  },
  "columns": ["country", "city"],
  "intervals": ["0000/3000"]
}
```

</TabItem>
</Tabs>

Inline datasources allow you to query a small amount of data that is embedded in the query itself. They are useful when
you want to write a query on a small amount of data without loading it first. They are also useful as inputs into a
[join](#join). Druid also uses them internally to handle subqueries that need to be inlined on the Broker. See the
[`query` datasource](#query) documentation for more details.

There are two fields in an inline datasource: an array of `columnNames` and an array of `rows`. Each row is an array
that must be exactly as long as the list of `columnNames`. The first element in each row corresponds to the first
column in `columnNames`, and so on.

Inline datasources are not available in Druid SQL.

Refer to the [Query execution](query-execution.md#inline) page for more details on how queries are executed when you
use inline datasources.
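
For example, an inline datasource can serve as the right-hand input of a [join](#join). The following is only a sketch: it assumes a hypothetical `sales` table with a `city` column, and all names are illustrative:

```json
{
  "queryType": "scan",
  "dataSource": {
    "type": "join",
    "left": "sales",
    "right": {
      "type": "inline",
      "columnNames": ["country", "city"],
      "rows": [
        ["United States", "San Francisco"],
        ["Canada", "Calgary"]
      ]
    },
    "rightPrefix": "r.",
    "condition": "city == \"r.city\"",
    "joinType": "INNER"
  },
  "columns": ["city", "r.country"],
  "intervals": ["0000/3000"]
}
```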

### `query`

<Tabs>
<TabItem value="8" label="SQL">

```sql
-- Uses a subquery to count hits per page, then takes the average.
SELECT
  AVG(hits) AS average_hits_per_page
FROM
  (SELECT page, COUNT(*) AS hits FROM site_traffic GROUP BY page)
```

</TabItem>
<TabItem value="9" label="Native">

```json
{
  "queryType": "timeseries",
  "dataSource": {
    "type": "query",
    "query": {
      "queryType": "groupBy",
      "dataSource": "site_traffic",
      "intervals": ["0000/3000"],
      "granularity": "all",
      "dimensions": ["page"],
      "aggregations": [
        { "type": "count", "name": "hits" }
      ]
    }
  },
  "intervals": ["0000/3000"],
  "granularity": "all",
  "aggregations": [
    { "type": "longSum", "name": "hits", "fieldName": "hits" },
    { "type": "count", "name": "pages" }
  ],
  "postAggregations": [
    { "type": "expression", "name": "average_hits_per_page", "expression": "hits / pages" }
  ]
}
```

</TabItem>
</Tabs>

Query datasources allow you to issue subqueries. In native queries, they can appear anywhere that accepts a
`dataSource` (except underneath a `union`). In SQL, they can appear in the following places, always surrounded by parentheses:

- The FROM clause: `FROM (<subquery>)`.
- As inputs to a JOIN: `<table-or-subquery-1> t1 INNER JOIN <table-or-subquery-2> t2 ON t1.<col1> = t2.<col2>`.
- In the WHERE clause: `WHERE <column> { IN | NOT IN } (<subquery>)`. These are translated to joins by the SQL planner.
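
For example, the following sketch (reusing the `site_traffic` table from above, with an illustrative `country` column) uses an `IN` filter with a subquery, which the SQL planner rewrites into a join:

```sql
-- The IN (<subquery>) filter is translated to a join by the SQL planner.
SELECT page, COUNT(*) AS hits
FROM site_traffic
WHERE country IN (SELECT country FROM site_traffic WHERE page = '/home')
GROUP BY page
```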
:::info
Performance tip: In most cases, subquery results are fully buffered in memory on the Broker and then further
processing occurs on the Broker itself. This means that subqueries with large result sets can cause performance
bottlenecks or run into memory usage limits on the Broker. See the [Query execution](query-execution.md#query)
page for more details on how subqueries are executed and what limits will apply.
:::

### `join`

<Tabs>
<TabItem value="10" label="SQL">

```sql
-- Joins "sales" with the "store_to_country" lookup (using "store" as the join key) to get sales by country.
SELECT
  store_to_country.v AS country,
  SUM(sales.revenue) AS country_revenue
FROM
  sales
  INNER JOIN lookup.store_to_country ON sales.store = store_to_country.k
GROUP BY
  store_to_country.v
```

</TabItem>
<TabItem value="11" label="Native">

```json
{
  "queryType": "groupBy",
  "dataSource": {
    "type": "join",
    "left": "sales",
    "right": {
      "type": "lookup",
      "lookup": "store_to_country"
    },
    "rightPrefix": "r.",
    "condition": "store == \"r.k\"",
    "joinType": "INNER"
  },
  "intervals": ["0000/3000"],
  "granularity": "all",
  "dimensions": [
    { "type": "default", "outputName": "country", "dimension": "r.v" }
  ],
  "aggregations": [
    { "type": "longSum", "name": "country_revenue", "fieldName": "revenue" }
  ]
}
```

</TabItem>
</Tabs>

Join datasources allow you to do a SQL-style join of two datasources. Stacking joins on top of each other allows
you to join arbitrarily many datasources.
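
For example, to join two lookups onto the same base table, the left side of the outer join can itself be a join datasource. This is a sketch with illustrative names (the `store_to_region` lookup is hypothetical):

```json
"dataSource": {
  "type": "join",
  "left": {
    "type": "join",
    "left": "sales",
    "right": { "type": "lookup", "lookup": "store_to_country" },
    "rightPrefix": "c.",
    "condition": "store == \"c.k\"",
    "joinType": "INNER"
  },
  "right": { "type": "lookup", "lookup": "store_to_region" },
  "rightPrefix": "r.",
  "condition": "store == \"r.k\"",
  "joinType": "INNER"
}
```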
In Druid {{DRUIDVERSION}}, joins in native queries are implemented with a broadcast hash-join algorithm. This means
that all datasources other than the leftmost "base" datasource must fit in memory. It also means that the join condition
must be an equality. This feature is intended mainly to allow joining regular Druid tables with [lookup](#lookup),
[inline](#inline), and [query](#query) datasources.
Refer to the [Query execution](query-execution.md#join) page for more details on how queries are executed when you
use join datasources.
#### Joins in SQL
SQL joins take the form:
```
<o1> [ INNER | LEFT [OUTER] ] JOIN <o2> ON <condition>
```
The condition must involve only equalities, but functions are okay, and there can be multiple equalities ANDed together.
Conditions like `t1.x = t2.x`, or `LOWER(t1.x) = t2.x`, or `t1.x = t2.x AND t1.y = t2.y` can all be handled. Conditions
like `t1.x <> t2.x` cannot currently be handled.
Note that Druid SQL is less rigid than what native join datasources can handle. In cases where a SQL query does
something that is not allowed as-is with a native join datasource, Druid SQL will generate a subquery. This can have
a substantial effect on performance and scalability, so it is something to watch out for. Some examples of when the
SQL layer will generate subqueries include:
- Joining a regular Druid table to itself, or to another regular Druid table. The native join datasource can accept
a table on the left-hand side, but not the right, so a subquery is needed.
- Join conditions where the expressions on either side are of different types.
- Join conditions where the right-hand expression is not a direct column access.
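
For example, a self-join of a regular table, sketched here with the `site_traffic` table from earlier (the column usage is illustrative), forces the right-hand side into a subquery:

```sql
-- Druid SQL plans the right-hand "site_traffic" as a subquery, because the
-- native join datasource accepts a table on the left-hand side but not the right.
SELECT t1.page, COUNT(*) AS hits
FROM site_traffic t1
INNER JOIN site_traffic t2 ON t1.page = t2.page
GROUP BY t1.page
```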

For more information about how Druid translates SQL to native queries, refer to the
[Druid SQL](sql-translation.md) documentation.
#### Joins in native queries
Native join datasources have the following properties. All are required.
|Field|Description|
|-----|-----------|
|`left`|Left-hand datasource. Must be of type `table`, `join`, `lookup`, `query`, or `inline`. Placing another join as the left datasource allows you to join arbitrarily many datasources.|
|`right`|Right-hand datasource. Must be of type `lookup`, `query`, or `inline`. Note that this is more rigid than what Druid SQL requires.|
|`rightPrefix`|String prefix that will be applied to all columns from the right-hand datasource, to prevent them from colliding with columns from the left-hand datasource. Can be any string, so long as it is nonempty and is not a prefix of the string `__time`. Any columns from the left-hand side that start with your `rightPrefix` will be shadowed. It is up to you to provide a prefix that will not shadow any important columns from the left side.|
|`condition`|[Expression](math-expr.md) that must be an equality where one side is an expression of the left-hand side, and the other side is a simple column reference to the right-hand side. Note that this is more rigid than what Druid SQL requires: here, the right-hand reference must be a simple column reference; in SQL it can be an expression.|
|`joinType`|`INNER` or `LEFT`.|
#### Join performance
Joins are a feature that can significantly affect performance of your queries. Some performance tips and notes:
1. Joins are especially useful with [lookup datasources](#lookup), but in most cases, the
[`LOOKUP` function](sql-scalar.md#string-functions) performs better than a join. Consider using the `LOOKUP` function if
it is appropriate for your use case.
2. When using joins in Druid SQL, keep in mind that it can generate subqueries that you did not explicitly include in
your queries. Refer to the [Druid SQL](sql-translation.md) documentation for more details about when this happens
and how to detect it.
3. One common reason for implicit subquery generation is if the types of the two halves of an equality do not match.
For example, since lookup keys are always strings, the condition `druid.d JOIN lookup.l ON d.field = l.field` will
perform best if `d.field` is a string.
4. As of Druid {{DRUIDVERSION}}, the join operator must evaluate the condition for each row. In the future, we expect
to implement both early and deferred condition evaluation, which we expect to improve performance considerably for
common use cases.
5. Currently, Druid does not support pushing down predicates (condition and filter) past a join (i.e. into a
join's children). Druid only supports pushing predicates into the join if they originated from
above the join. Hence, the location of predicates and filters in your Druid SQL is very important.
Also, as a result of this, comma joins should be avoided.
#### Future work for joins
Joins are an area of active development in Druid. The following features are missing today but may appear in
future versions:
- Reordering of join operations to get the most performant plan.
- Preloaded dimension tables that are wider than lookups (i.e. supporting more than a single key and single value).
- RIGHT OUTER and FULL OUTER joins in the native query engine. Currently, they are partially implemented. Queries run
but results are not always correct.
- Performance-related optimizations as mentioned in the [previous section](#join-performance).
- Join conditions on a column containing a multi-value dimension.
### `unnest`
Use the `unnest` datasource to unnest a column with multiple values in an array.
For example, you have a source column that looks like this:

| Nested |
| -- |
| [a, b] |
| [c, d] |
| [e, [f,g]] |

When you use the `unnest` datasource, the unnested column looks like this:

| Unnested |
| -- |
| a |
| b |
| c |
| d |
| e |
| [f, g] |

When unnesting data, keep the following in mind:

- The total number of rows will grow to accommodate the new rows that the unnested data occupy.
- You can unnest the values in more than one column in a single `unnest` datasource, but this can lead to a very large number of new rows depending on your dataset.
The `unnest` datasource uses the following syntax:
```json
"dataSource": {
  "type": "unnest",
  "base": {
    "type": "table",
    "name": "nested_data"
  },
  "virtualColumn": {
    "type": "expression",
    "name": "output_column",
    "expression": "\"column_reference\""
  },
  "unnestFilter": "optional_filter"
}
```
* `dataSource.type`: Set this to `unnest`.
* `dataSource.base`: Defines the datasource you want to unnest.
* `dataSource.base.type`: The type of datasource you want to unnest, such as a table.
* `dataSource.virtualColumn`: [Virtual column](virtual-columns.md) that references the nested values. The output name of this column is reused as the name of the column that contains unnested values. You can replace the source column with the unnested column by specifying the source column's name or a new column by specifying a different name. Outputting it to a new column can help you verify that you get the results that you expect but isn't required.
* `unnestFilter`: A filter only on the output column. You can omit this or set it to null if there are no filters.
To learn more about how to use the `unnest` datasource, see the [unnest tutorial](../tutorials/tutorial-unnest-arrays.md).
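
Depending on your Druid version, a similar unnest can also be expressed in Druid SQL with `UNNEST` in the FROM clause. The following is a sketch using the names from the native example above; in some versions this capability must first be enabled with a query context flag:

```sql
-- Each multi-value row of "column_reference" is converted to an array and
-- then unnested into one output row per element.
SELECT unnested_col
FROM nested_data
CROSS JOIN UNNEST(MV_TO_ARRAY("column_reference")) AS t(unnested_col)
```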