Refresh query docs. (#9704)

* Refresh query docs.

Larger changes:

- New doc: querying/datasource.md describes the various kinds of
datasources you can use, and has examples for both SQL and native.
- New doc: querying/query-execution.md describes how native queries
are executed at a high level. It doesn't go into the details of specific
query engines or how queries run at a per-segment level. But I think it
would be good to add or link that content here in the future.
- Refreshed doc: querying/sql.md updated to refer to joins, reformatted
a bit, added a new "Query translation" section that explains how
queries are translated from SQL to native, and removed configuration
details (moved to configuration/index.md).
- Refreshed doc: querying/joins.md updated to refer to join datasources.

Smaller changes:

- Add helpful banners to the top of query documentation pages telling
people whether a given page describes SQL, native, or both.
- Add SQL metrics to operations/metrics.md.
- Add some color and cross-links in various places.
- Add native query component docs to the sidebar, and rename them so
they look nicer.
- Remove Select query from the sidebar.
- Fix Broker SQL configs in configuration/index.md. Remove them from
querying/sql.md.
- Combine querying/searchquery.md and querying/searchqueryspec.md.

* Updates.

* Fix numbering.

* Fix glitches.

* Add new words to spellcheck file.

* Assorted changes.

* Further adjustments.

* Add missing punctuation.
Gian Merlino 2020-04-15 16:12:20 -07:00 committed by GitHub
parent b082262a2a
commit 42590ae64b
36 changed files with 1040 additions and 409 deletions

View File

@ -1306,7 +1306,6 @@ Druid uses Jetty to serve HTTP requests.
|`druid.server.http.unannouncePropagationDelay`|How long to wait for zookeeper unannouncements to propagate before shutting down Jetty. This is a minimum and `druid.server.http.gracefulShutdownTimeout` does not start counting down until after this period elapses.|`PT0S` (do not wait)|
|`druid.server.http.maxQueryTimeout`|Maximum allowed value (in milliseconds) for `timeout` parameter. See [query-context](../querying/query-context.html) to know more about `timeout`. Query is rejected if the query context `timeout` is greater than this value. |Long.MAX_VALUE|
|`druid.server.http.maxRequestHeaderSize`|Maximum size of a request header in bytes. Larger headers consume more memory and can make a server more vulnerable to denial of service attacks.|8 * 1024|
|`druid.server.http.enableForwardedRequestCustomizer`|If enabled, adds Jetty ForwardedRequestCustomizer which reads X-Forwarded-* request headers to manipulate servlet request object when Druid is used behind a proxy.|false|
#### Indexer Processing Resources
@ -1548,6 +1547,7 @@ Druid uses Jetty to serve HTTP requests. Each query being processed consumes a s
|`druid.server.http.enableRequestLimit`|If enabled, no requests would be queued in jetty queue and "HTTP 429 Too Many Requests" error response would be sent. |false|
|`druid.server.http.defaultQueryTimeout`|Query timeout in millis, beyond which unfinished queries will be cancelled|300000|
|`druid.server.http.maxScatterGatherBytes`|Maximum number of bytes gathered from data processes such as Historicals and realtime processes to execute a query. Queries that exceed this limit will fail. This is an advanced configuration that allows you to protect against cases where the Broker is under heavy load and is not utilizing the data gathered in memory fast enough, leading to OOMs. This limit can be further reduced at query time using `maxScatterGatherBytes` in the context. Note that having a large limit is not necessarily bad if the Broker is never under heavy concurrent load, in which case data gathered is processed quickly, freeing up the memory used.|Long.MAX_VALUE|
|`druid.server.http.maxSubqueryRows`|Maximum number of rows from subqueries per query. These rows are stored in memory.|100000|
|`druid.server.http.gracefulShutdownTimeout`|The maximum amount of time Jetty waits after receiving shutdown signal. After this timeout the threads will be forcefully shutdown. This allows any queries that are executing to complete.|`PT0S` (do not wait)|
|`druid.server.http.unannouncePropagationDelay`|How long to wait for zookeeper unannouncements to propagate before shutting down Jetty. This is a minimum and `druid.server.http.gracefulShutdownTimeout` does not start counting down until after this period elapses.|`PT0S` (do not wait)|
|`druid.server.http.maxQueryTimeout`|Maximum allowed value (in milliseconds) for `timeout` parameter. See [query-context](../querying/query-context.md) to know more about `timeout`. Query is rejected if the query context `timeout` is greater than this value. |Long.MAX_VALUE|
@ -1614,24 +1614,19 @@ The Druid SQL server is configured through the following properties on the Broke
|--------|-----------|-------|
|`druid.sql.enable`|Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.|true|
|`druid.sql.avatica.enable`|Whether to enable JDBC querying at `/druid/v2/sql/avatica/`.|true|
|`druid.sql.avatica.maxConnections`|Maximum number of open connections for the Avatica server. These are not HTTP connections, but are logical client connections that may span multiple HTTP connections.|25|
|`druid.sql.avatica.maxRowsPerFrame`|Maximum number of rows to return in a single JDBC frame. Setting this property to -1 indicates that no row limit should be applied. Clients can optionally specify a row limit in their requests; if a client specifies a row limit, the lesser value of the client-provided limit and `maxRowsPerFrame` will be used.|5,000|
|`druid.sql.avatica.maxStatementsPerConnection`|Maximum number of simultaneous open statements per Avatica client connection.|4|
|`druid.sql.avatica.connectionIdleTimeout`|Avatica client connection idle timeout.|PT5M|
|`druid.sql.http.enable`|Whether to enable JSON over HTTP querying at `/druid/v2/sql/`.|true|
|`druid.sql.planner.awaitInitializationOnStart`|Boolean|Whether the Broker will wait for its SQL metadata view to fully initialize before starting up. If set to 'true', the Broker's HTTP server will not start up, and the Broker will not announce itself as available, until the server view is initialized. See also `druid.broker.segment.awaitInitializationOnStart`, a related setting.|true|
|`druid.sql.planner.maxQueryCount`|Maximum number of queries to issue, including nested queries. Set to 1 to disable sub-queries, or set to 0 for unlimited.|8|
|`druid.sql.planner.maxSemiJoinRowsInMemory`|Maximum number of rows to keep in memory for executing two-stage semi-join queries like `SELECT * FROM Employee WHERE DeptName IN (SELECT DeptName FROM Dept)`.|100000|
|`druid.sql.planner.maxTopNLimit`|Maximum threshold for a [TopN query](../querying/topnquery.md). Higher limits will be planned as [GroupBy queries](../querying/groupbyquery.md) instead.|100000|
|`druid.sql.planner.metadataRefreshPeriod`|Throttle for metadata refreshes.|PT1M|
|`druid.sql.planner.selectThreshold`|Page size threshold for [Select queries](../querying/select-query.md). Select queries for larger resultsets will be issued back-to-back using pagination.|1000|
|`druid.sql.planner.useApproximateCountDistinct`|Whether to use an approximate cardinality algorithm for `COUNT(DISTINCT foo)`.|true|
|`druid.sql.planner.useApproximateTopN`|Whether to use approximate [TopN queries](../querying/topnquery.md) when a SQL query could be expressed as such. If false, exact [GroupBy queries](../querying/groupbyquery.md) will be used instead.|true|
|`druid.sql.planner.requireTimeCondition`|Whether to require SQL to have filter conditions on __time column so that all generated native queries will have user specified intervals. If true, all queries without filter condition on __time column will fail|false|
|`druid.sql.planner.sqlTimeZone`|Sets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
|`druid.sql.planner.serializeComplexValues`|Whether to serialize "complex" output values, false will return the class name instead of the serialized value.|true|
|`druid.sql.planner.metadataSegmentCacheEnable`|Whether to keep a cache of published segments in broker. If true, broker polls coordinator in background to get segments from metadata store and maintains a local cache. If false, coordinator's REST API will be invoked when broker needs published segments info.|false|
|`druid.sql.planner.metadataSegmentPollPeriod`|How often to poll coordinator for published segments list if `druid.sql.planner.metadataSegmentCacheEnable` is set to true. Poll period is in milliseconds. |60000|
> Previous versions of Druid had properties named `druid.sql.planner.maxQueryCount` and `druid.sql.planner.maxSemiJoinRowsInMemory`.
> These properties are no longer available. Since Druid 0.18.0, you can use `druid.server.http.maxSubqueryRows` to control the maximum
@ -1774,8 +1769,8 @@ The following configurations are to set the default behavior for query vectoriza
|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.vectorize`|See [Vectorization parameters](../querying/query-context.html#vectorization-parameters) for details. This value can be overridden by `vectorize` in the query contexts.|`false`|
|`druid.query.vectorSize`|See [Vectorization parameters](../querying/query-context.html#vectorization-parameters) for details. This value can be overridden by `vectorSize` in the query contexts.|`512`|
### TopN query config

View File

@ -233,3 +233,6 @@ So Druid uses three different techniques to maximize query performance:
- Pruning which segments are accessed for each query.
- Within each segment, using indexes to identify which rows must be accessed.
- Within each segment, only reading the specific rows and columns that are relevant to a particular query.
For more details about how Druid executes queries, refer to the [Query execution](../querying/query-execution.md)
documentation.

View File

@ -22,6 +22,9 @@ title: "Spatial filters"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](../querying/sql.md) and [native queries](../querying/querying.md).
> This document describes functionality that is only available in the native language.
Apache Druid supports filtering specially spatially indexed columns based on an origin and a bound.
## Spatial indexing

View File

@ -22,9 +22,14 @@ title: "Expressions"
~ under the License.
-->
> Apache Druid supports two query languages: [native queries](../querying/querying.md) and [Druid SQL](../querying/sql.md).
> This document describes the native language. For information about functions available in SQL, refer to the
> [SQL documentation](../querying/sql.md#scalar-functions).
Expressions are used in various places in the native query language, including
[virtual columns](../querying/virtual-columns.md) and [join conditions](../querying/datasource.md#join). They are
also generated by most [Druid SQL functions](../querying/sql.md#scalar-functions) during the
[query translation](../querying/sql.md#query-translation) process.
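For example, a [virtual column](../querying/virtual-columns.md) can use an expression to derive a new column at query time. A minimal sketch (the column and field names here are placeholders):
```json
{
  "type": "expression",
  "name": "fullName",
  "expression": "concat(firstName, ' ', lastName)",
  "outputType": "STRING"
}
```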
This expression language supports the following operators (listed in decreasing order of precedence).

View File

@ -58,6 +58,8 @@ Available Metrics
|`query/success/count`|number of queries successfully processed|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/failed/count`|number of failed queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/interrupted/count`|number of queries interrupted due to cancellation or timeout|This metric is only available if the QueryCountStatsMonitor module is included.||
|`sqlQuery/time`|Milliseconds taken to complete a SQL query.|id, nativeQueryIds, dataSource, remoteAddress, success.|< 1s|
|`sqlQuery/bytes`|number of bytes returned in SQL query response.|id, nativeQueryIds, dataSource, remoteAddress, success.| |
### Historical

View File

@ -22,6 +22,10 @@ title: "Aggregations"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about aggregators available in SQL, refer to the
> [SQL documentation](sql.md#aggregation-functions).
Aggregations can be provided at ingestion time as part of the ingestion spec as a way of summarizing data before it enters Apache Druid.
Aggregations can also be specified as part of many queries at query time.
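For example, a native query's `"aggregations"` list is an array of JSON aggregator objects. A minimal sketch (the metric names are placeholders):
```json
[
  { "type": "count", "name": "rows" },
  { "type": "longSum", "name": "total_revenue", "fieldName": "revenue" }
]
```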

View File

@ -22,43 +22,318 @@ title: "Datasources"
~ under the License.
-->
Datasources in Apache Druid are things that you can query. The most common kind of datasource is a table datasource,
and in many contexts the word "datasource" implicitly refers to table datasources. This is especially true
[during data ingestion](../ingestion/index.html), where ingestion is always creating or writing into a table
datasource. But at query time, there are many other types of datasources available.
The word "datasource" is generally spelled `dataSource` (with a capital S) when it appears in API requests and
responses.
## Datasource type
### `table`
<!--DOCUSAURUS_CODE_TABS-->
<!--SQL-->
```sql
SELECT column1, column2 FROM "druid"."dataSourceName"
```
<!--Native-->
```json
{
"queryType": "scan",
"dataSource": "dataSourceName",
"columns": ["column1", "column2"],
"intervals": ["0000/3000"]
}
```
<!--END_DOCUSAURUS_CODE_TABS-->
The table datasource is the most common type. This is the kind of datasource you get when you perform
[data ingestion](../ingestion/index.html). They are split up into segments, distributed around the cluster,
and queried in parallel.
In [Druid SQL](sql.html#from), table datasources reside in the `druid` schema. This is the default schema, so table
datasources can be referenced as either `druid.dataSourceName` or simply `dataSourceName`.
In native queries, table datasources can be referenced using their names as strings (as in the example above), or by
using JSON objects of the form:
```json
"dataSource": {
"type": "table",
"name": "dataSourceName"
}
```
To see a list of all table datasources, use the SQL query
`SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'druid'`.
### `lookup`
<!--DOCUSAURUS_CODE_TABS-->
<!--SQL-->
```sql
SELECT k, v FROM lookup.countries
```
<!--Native-->
```json
{
"queryType": "scan",
"dataSource": {
"type": "lookup",
"lookup": "countries"
},
"columns": ["k", "v"],
"intervals": ["0000/3000"]
}
```
<!--END_DOCUSAURUS_CODE_TABS-->
Lookup datasources correspond to Druid's key-value [lookup](lookups.html) objects. In [Druid SQL](sql.html#from),
they reside in the `lookup` schema. They are preloaded in memory on all servers, so they can be accessed rapidly.
They can be joined onto regular tables using the [join operator](#join).
Lookup datasources are key-value oriented and always have exactly two columns: `k` (the key) and `v` (the value), and
both are always strings.
To see a list of all lookup datasources, use the SQL query
`SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'lookup'`.
> Performance tip: Lookups can be joined with a base table either using an explicit [join](#join), or by using the
> SQL [`LOOKUP` function](sql.html#string-functions).
> However, the join operator must evaluate the condition on each row, whereas the
> `LOOKUP` function can defer evaluation until after an aggregation phase. This means that the `LOOKUP` function is
> usually faster than joining to a lookup datasource.
### `query`
<!--DOCUSAURUS_CODE_TABS-->
<!--SQL-->
```sql
-- Uses a subquery to count hits per page, then takes the average.
SELECT
AVG(cnt) AS average_hits_per_page
FROM
(SELECT page, COUNT(*) AS hits FROM site_traffic GROUP BY page)
```
<!--Native-->
```json
{
"queryType": "timeseries",
"dataSource": {
"type": "query",
"query": {
"queryType": "groupBy",
"dataSource": "site_traffic",
"intervals": ["0000/3000"],
"granularity": "all",
"dimensions": ["page"],
"aggregations": [
{ "type": "count", "name": "hits" }
]
}
},
"intervals": ["0000/3000"],
"granularity": "all",
"aggregations": [
{ "type": "longSum", "name": "hits", "fieldName": "hits" },
{ "type": "count", "name": "pages" }
],
"postAggregations": [
{ "type": "expression", "name": "average_hits_per_page", "expression": "hits / pages" }
]
}
```
<!--END_DOCUSAURUS_CODE_TABS-->
Query datasources allow you to issue subqueries. In native queries, they can appear anywhere that accepts a
`dataSource`. In SQL, they can appear in the following places, always surrounded by parentheses:
- The FROM clause: `FROM (<subquery>)`.
- As inputs to a JOIN: `<table-or-subquery-1> t1 INNER JOIN <table-or-subquery-2> t2 ON t1.<col1> = t2.<col2>`.
- In the WHERE clause: `WHERE <column> { IN | NOT IN } (<subquery>)`. These are translated to joins by the SQL planner.
> Performance tip: In most cases, subquery results are fully buffered in memory on the Broker and then further
> processing occurs on the Broker itself. This means that subqueries with large result sets can cause performance
> bottlenecks or run into memory usage limits on the Broker. See the [Query execution](query-execution.md) documentation
> for more details on how subqueries are executed and what limits will apply.
### `join`
<!--DOCUSAURUS_CODE_TABS-->
<!--SQL-->
```sql
-- Joins "sales" with "countries" (using "store" as the join key) to get sales by country.
SELECT
store_to_country.v AS country,
SUM(sales.revenue) AS country_revenue
FROM
sales
INNER JOIN lookup.store_to_country ON sales.store = store_to_country.k
GROUP BY
store_to_country.v
```
<!--Native-->
```json
{
"queryType": "groupBy",
"dataSource": {
"type": "join",
"left": "sales",
"right": {
"type": "lookup",
"lookup": "store_to_country"
},
"rightPrefix": "r.",
"condition": "store == \"r.k\"",
"joinType": "INNER"
},
"intervals": ["0000/3000"],
"granularity": "all",
"dimensions": [
{ "type": "default", "outputName": "country", "dimension": "r.v" }
],
"aggregations": [
{ "type": "longSum", "name": "country_revenue", "fieldName": "revenue" }
]
}
```
<!--END_DOCUSAURUS_CODE_TABS-->
Join datasources allow you to do a SQL-style join of two datasources. Stacking joins on top of each other allows
you to join arbitrarily many datasources.
In Druid {{DRUIDVERSION}}, joins are implemented with a broadcast hash-join algorithm. This means that all tables
other than the leftmost "base" table must fit in memory. It also means that the join condition must be an equality. This
feature is intended mainly to allow joining regular Druid tables with [lookup](#lookup), [inline](#inline), and
[query](#query) datasources.
For information about how Druid executes queries involving joins, refer to the
[Query execution](query-execution.html#join) page.
#### Joins in SQL
SQL joins take the form:
```
<o1> [ INNER | LEFT [OUTER] ] JOIN <o2> ON <condition>
```
The condition must involve only equalities, but functions are okay, and there can be multiple equalities ANDed together.
Conditions like `t1.x = t2.x`, or `LOWER(t1.x) = t2.x`, or `t1.x = t2.x AND t1.y = t2.y` can all be handled. Conditions
like `t1.x <> t2.x` cannot currently be handled.
Note that Druid SQL is less rigid than what native join datasources can handle. In cases where a SQL query does
something that is not allowed as-is with a native join datasource, Druid SQL will generate a subquery. This can have
a substantial effect on performance and scalability, so it is something to watch out for. Some examples of when the
SQL layer will generate subqueries include:
- Joining a regular Druid table to itself, or to another regular Druid table. The native join datasource can accept
a table on the left-hand side, but not the right, so a subquery is needed.
- Join conditions where the expressions on either side are of different types.
- Join conditions where the right-hand expression is not a direct column access.
For more information about how Druid translates SQL to native queries, refer to the
[Druid SQL](sql.md#query-translation) documentation.
#### Joins in native queries
Native join datasources have the following properties. All are required.
|Field|Description|
|-----|-----------|
|`left`|Left-hand datasource. Must be of type `table`, `join`, `lookup`, `query`, or `inline`. Placing another join as the left datasource allows you to join arbitrarily many datasources.|
|`right`|Right-hand datasource. Must be of type `lookup`, `query`, or `inline`. Note that this is more rigid than what Druid SQL requires.|
|`rightPrefix`|String prefix that will be applied to all columns from the right-hand datasource, to prevent them from colliding with columns from the left-hand datasource. Can be any string, so long as it is nonempty and is not a prefix of the string `__time`. Any columns from the left-hand side that start with your `rightPrefix` will be shadowed. It is up to you to provide a prefix that will not shadow any important columns from the left side.|
|`condition`|[Expression](../misc/math-expr.md) that must be an equality where one side is an expression of the left-hand side, and the other side is a simple column reference to the right-hand side. Note that this is more rigid than what Druid SQL requires: here, the right-hand reference must be a simple column reference; in SQL it can be an expression.|
|`joinType`|`INNER` or `LEFT`.|
#### Join performance
Joins are a feature that can significantly affect performance of your queries. Some performance tips and notes:
1. Joins are especially useful with [lookup datasources](#lookup), but in most cases, the
[`LOOKUP` function](sql.html#string-functions) performs better than a join. Consider using the `LOOKUP` function if
it is appropriate for your use case.
2. When using joins in Druid SQL, keep in mind that it can generate subqueries that you did not explicitly include in
your queries. Refer to the [Druid SQL](sql.md#query-translation) documentation for more details about when this happens
and how to detect it.
3. One common reason for implicit subquery generation is if the types of the two halves of an equality do not match.
For example, since lookup keys are always strings, the condition `druid.d JOIN lookup.l ON d.field = l.field` will
perform best if `d.field` is a string.
4. As of Druid {{DRUIDVERSION}}, the join operator must evaluate the condition for each row. In the future, we expect
to implement both early and deferred condition evaluation, which we expect to improve performance considerably for
common use cases.
#### Future work for joins
Joins are an area of active development in Druid. The following features are missing today but may appear in
future versions:
- Preloaded dimension tables that are wider than lookups (i.e. supporting more than a single key and single value).
- RIGHT OUTER and FULL OUTER joins. Currently, they are partially implemented. Queries will run but results will not
always be correct.
- Performance-related optimizations as mentioned in the [previous section](#join-performance).
- Join algorithms other than broadcast hash-joins.
### `union`
<!--DOCUSAURUS_CODE_TABS-->
<!--Native-->
```json
{
"queryType": "scan",
"dataSource": {
"type": "union",
"dataSources": ["<tableDataSourceName1>", "<tableDataSourceName2>", "<tableDataSourceName3>"]
},
"columns": ["column1", "column2"],
"intervals": ["0000/3000"]
}
```
<!--END_DOCUSAURUS_CODE_TABS-->
Union datasources allow you to treat two or more table datasources as a single datasource. The datasources being unioned
do not need to have identical schemas. If they do not fully match up, then columns that exist in one table but not
another will be treated as if they contained all null values in the tables where they do not exist.
Union datasources are not available in Druid SQL.
Refer to the [Query execution](query-execution.md#union) documentation for more details on how union datasources
are executed.
### `inline`
<!--DOCUSAURUS_CODE_TABS-->
<!--Native-->
```json
{
"queryType": "scan",
"dataSource": {
"type": "inline",
"columnNames": ["country", "city"],
"rows": [
["United States", "San Francisco"],
["Canada", "Calgary"]
]
},
"columns": ["country", "city"],
"intervals": ["0000/3000"]
}
```
<!--END_DOCUSAURUS_CODE_TABS-->
Inline datasources allow you to query a small amount of data that is embedded in the query itself. They are useful when
you want to write a query on a small amount of data without loading it first. They are also useful as inputs into a
[join](#join). Druid also uses them internally to handle subqueries that need to be inlined on the Broker. See the
[`query` datasource](#query) documentation for more details.
There are two fields in an inline datasource: an array of `columnNames` and an array of `rows`. Each row is an array
that must be exactly as long as the list of `columnNames`. The first element in each row corresponds to the first
column in `columnNames`, and so on.
Inline datasources are not available in Druid SQL.

View File

@ -23,6 +23,9 @@ sidebar_label: "DatasourceMetadata"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type that is only available in the native language.
Data Source Metadata queries return metadata information for a dataSource. These queries return information about:

View File

@ -1,6 +1,7 @@
---
id: dimensionspecs
title: "Query dimensions"
sidebar_label: "Dimensions"
---
<!--
@ -22,6 +23,10 @@ title: "Transforming Dimension Values"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about functions available in SQL, refer to the
> [SQL documentation](sql.md#scalar-functions).
The following JSON fields can be used in a query to operate on dimension values.
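For example, the simplest case is a "default" dimension spec, which returns a dimension as-is under an output name. A minimal sketch (the dimension names are placeholders):
```json
{
  "type": "default",
  "dimension": "userCountry",
  "outputName": "country",
  "outputType": "STRING"
}
```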
@ -196,7 +201,7 @@ Returns the dimension value unchanged if the regular expression matches, otherwi
### Search query extraction function
Returns the dimension value unchanged if the given [`SearchQuerySpec`](../querying/searchquery.html#searchqueryspec)
matches, otherwise returns null.
```json

View File

@ -1,6 +1,7 @@
---
id: filters
title: "Query filters"
sidebar_label: "Filters"
---
<!--
@ -22,6 +23,10 @@ title: "Query Filters"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about functions available in SQL, refer to the
> [SQL documentation](sql.md#scalar-functions).
A filter is a JSON object indicating which rows of data should be included in the computation for a query. It's essentially the equivalent of the WHERE clause in SQL. Apache Druid supports the following types of filters.
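For example, one of these is the selector filter, which matches a specific value in a dimension. A minimal sketch (the dimension name and value are placeholders):
```json
{
  "type": "selector",
  "dimension": "country",
  "value": "United States"
}
```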

View File

@ -1,6 +1,7 @@
---
id: granularities
title: "Query granularities"
sidebar_label: "Granularities"
---
<!--
@ -22,6 +23,10 @@ title: "Aggregation Granularity"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about time functions available in SQL, refer to the
> [SQL documentation](sql.md#time-functions).
The granularity field determines how data gets bucketed across the time dimension, or how it gets aggregated by hour, day, minute, etc.
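For example, a query can pass a simple string granularity such as `"day"`, or an equivalent period granularity object for time-zone-aware bucketing. A minimal sketch (the time zone is just an example):
```json
{
  "type": "period",
  "period": "P1D",
  "timeZone": "America/Los_Angeles"
}
```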

View File

@ -23,6 +23,10 @@ sidebar_label: "GroupBy"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type in the native language. For information about when Druid SQL will use this query type, refer to the
> [SQL documentation](sql.md#query-types).
These types of Apache Druid queries take a groupBy query object and return an array of JSON objects where each object represents a
grouping asked for by the query.
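A minimal sketch of a groupBy query (the datasource, dimension, and metric names are placeholders):
```json
{
  "queryType": "groupBy",
  "dataSource": "sample_datasource",
  "intervals": ["2020-01-01/2020-02-01"],
  "granularity": "day",
  "dimensions": ["country"],
  "aggregations": [
    { "type": "longSum", "name": "total_revenue", "fieldName": "revenue" }
  ]
}
```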

View File

@ -1,6 +1,6 @@
---
id: having
title: "Having filters (groupBy)"
---
<!--
@ -22,6 +22,10 @@ title: "Filter groupBy query results"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about functions available in SQL, refer to the
> [SQL documentation](sql.md#scalar-functions).
A having clause is a JSON object identifying which rows from a groupBy query should be returned, by specifying conditions on aggregated values.
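For example, a minimal sketch of a having clause that keeps only groups whose aggregated value exceeds a threshold (the aggregator name and value are placeholders):
```json
{
  "type": "greaterThan",
  "aggregation": "total_revenue",
  "value": 1000
}
```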

View File

@ -22,33 +22,19 @@ title: "Joins"
~ under the License.
-->
Druid has two features related to joining of data:
1. [Join](datasource.md#join) operators. These are available using a [join datasource](datasource.md#join) in native
queries, or using the [JOIN operator](sql.md#query-syntax) in Druid SQL. Refer to the
[join datasource](datasource.md#join) documentation for information about how joins work in Druid.
2. [Query-time lookups](lookups.md), simple key-to-value mappings. These are preloaded on all servers that are involved
in queries and can be accessed with or without an explicit join operator. Refer to the [lookups](lookups.md)
documentation for more details.
Whenever possible, for best performance it is good to avoid joins at query time. Often this can be accomplished by
joining data before it is loaded into Druid. However, there are situations where joins or lookups are the best solution
available despite the performance overhead, including:
- The fact-to-dimension (star and snowflake schema) case: you need to change dimension values after initial ingestion,
and aren't able to reingest to do this. In this case, you can use lookups for your dimension tables.
- Your workload requires joins or filters on subqueries.
A join query is essentially the merging of two or more streams of data based on a shared set of keys. The primary
high-level strategies for join queries we are aware of are a hash-based strategy or a
sorted-merge strategy. The hash-based strategy requires that all but
one data set be available as something that looks like a hash table,
a lookup operation is then performed on this hash table for every
row in the “primary” stream. The sorted-merge strategy assumes
that each stream is sorted by the join key and thus allows for the incremental
joining of the streams. Each of these strategies, however,
requires the materialization of some number of the streams either in
sorted order or in a hash table form.
When all sides of the join are significantly large tables (> 1 billion
records), materializing the pre-join streams requires complex
distributed memory management. The complexity of the memory
management is only amplified by the fact that we are targeting highly
concurrent, multi-tenant workloads.

View File

@ -1,6 +1,6 @@
---
id: limitspec
title: "Sorting and limiting (groupBy)"
---
<!--
@ -22,6 +22,9 @@ title: "Sort groupBy query results"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about sorting in SQL, refer to the [SQL documentation](sql.md#order-by).
The limitSpec field provides the functionality to sort and limit the set of results from a groupBy query. If you group by a single dimension and are ordering by a single metric, we highly recommend using [TopN Queries](../querying/topnquery.md) instead. The performance will be substantially better. Available options are:

View File

@ -44,8 +44,9 @@ it will return the current account manager for that app-id REGARDLESS of the tim
If you require data time range sensitive lookups, such a use case is not currently supported dynamically at query time,
and such data belongs in the raw denormalized data for use in Druid.
Lookups are generally preloaded in-memory on all servers. But very small lookups (on the order of a few dozen to a few
hundred entries) can also be passed inline in native queries using the "map" lookup type. Refer to the
[dimension specs](dimensionspecs.md) documentation for details.
Other lookup types are available as extensions, including:
@ -55,21 +56,37 @@ Other lookup types are available as extensions, including:
Query Syntax
------------
In [Druid SQL](sql.html), lookups can be queried using the [`LOOKUP` function](sql.md#string-functions), for example:
```sql
SELECT
LOOKUP(store, 'store_to_country') AS country,
SUM(revenue)
FROM sales
GROUP BY 1
```
They can also be queried using the [JOIN operator](datasource.md#join):
```sql
SELECT
store_to_country.v AS country,
SUM(sales.revenue) AS country_revenue
FROM
sales
INNER JOIN lookup.store_to_country ON sales.store = store_to_country.k
GROUP BY 1
``` ```
In native queries, lookups can be queried with [dimension specs or extraction functions](dimensionspecs.html).
Query Execution
---------------
When executing an aggregation query involving lookup functions (like the SQL [`LOOKUP` function](sql.md#string-functions)),
Druid can decide to apply them while scanning and aggregating rows, or to apply them after aggregation is complete. It
is more efficient to apply lookups after aggregation is complete, so Druid will do this if it can. Druid decides this
by checking if the lookup is marked as "injective" or not. In general, you should set this property for any lookup that
is naturally one-to-one, to allow Druid to run your queries as fast as possible.
Injective lookups should include _all_ possible keys that may show up in your dataset, and should also map all keys to
_unique values_. This matters because non-injective lookups may map different keys to the same value, which must be
@ -95,6 +112,10 @@ But this one is not, since both "2" and "3" map to the same key:
To tell Druid that your lookup is injective, you must specify `"injective" : true` in the lookup configuration. Druid
will not detect this automatically.
> Currently, the injective lookup optimization is not triggered when lookups are inputs to a
> [join datasource](datasource.md#join). It is only used when lookup functions are used directly, without the join
> operator.
Dynamic Configuration
---------------------
> Dynamic lookup configuration is an [experimental](../development/experimental.md) feature. Static

View File

@ -1,6 +1,7 @@
---
id: multitenancy
title: "Multitenancy considerations"
sidebar_label: "Multitenancy"
---
<!--

View File

@ -1,6 +1,6 @@
---
id: post-aggregations
title: "Post-aggregations"
---
<!--
@ -22,6 +22,10 @@ title: "Post-Aggregations"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about functions available in SQL, refer to the
> [SQL documentation](sql.md#aggregation-functions).
Post-aggregations are specifications of processing that should happen on aggregated values as they come out of Apache Druid. If you include a post aggregation as part of a query, make sure to include all aggregators the post-aggregator requires.
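For example, a minimal sketch of an arithmetic post-aggregator that divides one aggregated value by another (the field names are placeholders):
```json
{
  "type": "arithmetic",
  "name": "average_revenue_per_order",
  "fn": "/",
  "fields": [
    { "type": "fieldAccess", "fieldName": "total_revenue" },
    { "type": "fieldAccess", "fieldName": "order_count" }
  ]
}
```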

View File

@ -1,6 +1,7 @@
---
id: query-context
title: "Query context"
sidebar_label: "Context parameters"
---
<!--
@ -22,8 +23,16 @@ title: "Query context"
~ under the License.
-->
## General parameters
The query context is used for various query configuration parameters. Query context parameters can be specified in
the following ways:
- For [Druid SQL](sql.md#client-apis), context parameters are provided either as a JSON object named `context` to the
HTTP POST API, or as properties to the JDBC connection.
- For [native queries](querying.md), context parameters are provided as a JSON object named `context`.
These parameters apply to all query types.
|property |default | description |
|-----------------|----------------------------------------|----------------------|
@ -46,26 +55,28 @@ The query context is used for various query configuration parameters. The follow
|parallelMergeInitialYieldRows|`druid.processing.merge.task.initialYieldNumRows`|Number of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences. See [Broker configuration](../configuration/index.html#broker) for more details.|
|parallelMergeSmallBatchRows|`druid.processing.merge.task.smallBatchNumRows`|Size of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker. See [Broker configuration](../configuration/index.html#broker) for more details.|
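For example, in a native query the context rides along with the query body. A minimal sketch (the datasource and parameter values here are just examples):
```json
{
  "queryType": "timeseries",
  "dataSource": "sample_datasource",
  "intervals": ["2020-01-01/2020-01-02"],
  "granularity": "all",
  "aggregations": [{ "type": "count", "name": "rows" }],
  "context": {
    "timeout": 60000,
    "useCache": true
  }
}
```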
## Query-type-specific parameters
In addition, some query types offer context parameters specific to that query type.
### TopN
|property |default | description |
|-----------------|---------------------|----------------------|
|minTopNThreshold | `1000` | The top minTopNThreshold local results from each segment are returned for merging to determine the global topN. |
### Timeseries
|property |default | description |
|-----------------|---------------------|----------------------|
|skipEmptyBuckets | `false` | Disable timeseries zero-filling behavior, so only buckets with results will be returned. |
### GroupBy
See the list of [GroupBy query context](groupbyquery.md#advanced-configurations) parameters available on the groupBy
query page.
## Vectorization parameters
The GroupBy and Timeseries query types can run in _vectorized_ mode, which speeds up query execution by processing
batches of rows at a time. Not all queries can be vectorized. In particular, vectorization currently has the following
@ -81,11 +92,12 @@ include "selector", "bound", "in", "like", "regex", "search", "and", "or", and "
- For GroupBy: No multi-value dimensions.
- For Timeseries: No "descending" order.
- Only immutable segments (not real-time).
- Only [table datasources](datasource.html#table) (not joins, subqueries, lookups, or inline datasources).
Other query types (like TopN, Scan, Select, and Search) ignore the "vectorize" parameter, and will execute without
vectorization. These query types will ignore the "vectorize" parameter even if it is set to `"force"`.
Vectorization is a beta-quality feature as of Druid {{DRUIDVERSION}}. We heartily welcome any feedback and testing
from the community as we work to battle-test it.
|property|default| description|

View File

@ -0,0 +1,112 @@
---
id: query-execution
title: "Query execution"
---
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
> This document describes how Druid executes [native queries](querying.md), but since [Druid SQL](sql.md) queries
> are translated to native queries, this document applies to the SQL runtime as well. Refer to the SQL
> [Query translation](sql.md#query-translation) page for information about how SQL queries are translated to native
> queries.
Druid's approach to query execution varies depending on the kind of [datasource](datasource.md) you are querying.
## Datasource type
### `table`
Queries that operate directly on [table datasources](datasource.md#table) are executed using a scatter-gather approach
led by the Broker process. The process looks like this:
1. The Broker identifies which [segments](../design/segments.md) are relevant to the query based on the `"intervals"`
parameter. Segments are always partitioned by time, so any segment whose interval overlaps the query interval is
potentially relevant.
2. The Broker may additionally further prune the segment list based on the `"filter"`, if the input data was partitioned
by range using the [`single_dim` partitionsSpec](../ingestion/native-batch.md#partitionsspec), and if the filter matches
the dimension used for partitioning.
3. The Broker, having pruned the list of segments for the query, forwards the query to data servers (like Historicals
and tasks running on MiddleManagers) that are currently serving those segments.
4. For all query types except [Scan](scan-query.md), data servers process each segment in parallel and generate partial
results for each segment. The specific processing that is done depends on the query type. These partial results may be
cached if [query caching](caching.md) is enabled. For Scan queries, segments are processed in order by a single thread,
and results are not cached.
5. The Broker receives partial results from each data server, merges them into the final result set, and returns them
to the caller. For Timeseries and Scan queries, and for GroupBy queries where there is no sorting, the Broker is able to
do this in a streaming fashion. Otherwise, the Broker fully computes the result set before returning anything.
### `lookup`
Queries that operate directly on [lookup datasources](datasource.md#lookup) (without a join) are executed on the Broker
that received the query, using its local copy of the lookup. All registered lookup tables are preloaded in-memory on the
Broker. The query runs single-threaded.
### `query`
[Query datasources](datasource.md#query) are subqueries. Each subquery is executed as if it was its own query and
the results are brought back to the Broker. Then, the Broker continues on with the rest of the query as if the subquery
was replaced with an inline datasource.
In most cases, subquery results are fully buffered in memory on the Broker before the rest of the query proceeds,
meaning subqueries execute sequentially. The total number of rows buffered across all subqueries of a given query
in this way cannot exceed the [`druid.server.http.maxSubqueryRows` property](../configuration/index.md).
There is one exception: if the outer query and all subqueries are the [groupBy](groupbyquery.md) type, then subquery
results can be processed in a streaming fashion and the `druid.server.http.maxSubqueryRows` limit does not apply.
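As an illustration, the following sketch (with hypothetical datasource and column names) uses a query datasource: the inner groupBy runs first, its results are inlined on the Broker, and then the outer groupBy runs over those inlined rows:

```json
{
  "queryType": "groupBy",
  "dataSource": {
    "type": "query",
    "query": {
      "queryType": "groupBy",
      "dataSource": "site_traffic",
      "intervals": ["2020-01-01/2020-02-01"],
      "granularity": "all",
      "dimensions": ["country"],
      "aggregations": [{ "type": "longSum", "name": "hits", "fieldName": "hits" }]
    }
  },
  "intervals": ["2020-01-01/2020-02-01"],
  "granularity": "all",
  "dimensions": [],
  "aggregations": [{ "type": "longMax", "name": "max_hits", "fieldName": "hits" }]
}
```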
### `join`
[Join datasources](datasource.md#join) are handled using a broadcast hash-join approach.
1. The Broker executes any subqueries that are inputs to the join, as described in the [query](#query) section, and
replaces them with inline datasources.
2. The Broker flattens a join tree, if present, into a "base" datasource (the bottom-leftmost one) and other leaf
datasources (the rest).
3. Query execution proceeds using the same structure that the base datasource would use on its own. If the base
datasource is a [table](#table), segments are pruned based on `"intervals"` as usual, and the query is executed on the
cluster by forwarding it to all relevant data servers in parallel. If the base datasource is a [lookup](#lookup) or
[inline](#inline) datasource (including an inline datasource that was the result of inlining a subquery), the query is
executed on the Broker itself. The base query cannot be a union, because unions are not currently supported as inputs to
a join.
4. Before beginning to process the base datasource, the server(s) that will execute the query first inspect all the
non-base leaf datasources to determine if a new hash table needs to be built for the upcoming hash join. Currently,
lookups do not require new hash tables to be built (because they are preloaded), but inline datasources do.
5. Query execution proceeds again using the same structure that the base datasource would use on its own, with one
addition: while processing the base datasource, Druid servers will use the hash tables built from the other join inputs
to produce the join result row-by-row, and query engines will operate on the joined rows rather than the base rows.
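For example, a groupBy over a join datasource whose base is a table and whose other input is a lookup might look like this sketch (the `flags` table, `countries` lookup, and `countryIsoCode` column are hypothetical; lookups expose their key and value as `k` and `v`):

```json
{
  "queryType": "groupBy",
  "dataSource": {
    "type": "join",
    "left": "flags",
    "right": { "type": "lookup", "lookup": "countries" },
    "rightPrefix": "c.",
    "condition": "(\"countryIsoCode\" == \"c.k\")",
    "joinType": "INNER"
  },
  "intervals": ["2020-01-01/2020-02-01"],
  "granularity": "all",
  "dimensions": ["c.v"],
  "aggregations": [{ "type": "count", "name": "rows" }]
}
```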
### `union`
Queries that operate directly on [union datasources](datasource.md#union) are split up on the Broker into a separate
query for each table that is part of the union. Each of these queries runs separately, and the Broker merges their
results together.
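For example, a Timeseries query over a union of two hypothetical tables might look like this sketch; the Broker issues one query per table and merges the results:

```json
{
  "queryType": "timeseries",
  "dataSource": {
    "type": "union",
    "dataSources": ["clicks_2019", "clicks_2020"]
  },
  "intervals": ["2019-01-01/2021-01-01"],
  "granularity": "month",
  "aggregations": [{ "type": "count", "name": "rows" }]
}
```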
### `inline`
Queries that operate directly on [inline datasources](datasource.md#inline) are executed on the Broker that received the
query. The query runs single-threaded.
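For instance, a Scan query over a small inline datasource runs entirely on the Broker. This sketch assumes the simple `columnNames` plus `rows` form of the inline spec, with made-up data:

```json
{
  "queryType": "scan",
  "dataSource": {
    "type": "inline",
    "columnNames": ["country", "city"],
    "rows": [
      ["United States", "San Francisco"],
      ["Canada", "Calgary"]
    ]
  },
  "intervals": ["2000-01-01/3000-01-01"],
  "resultFormat": "compactedList",
  "columns": ["country", "city"]
}
```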

View File

@ -1,7 +1,6 @@
---
id: querying
title: "Native queries"
---
<!--
@ -24,8 +23,10 @@ sidebar_label: "Making native queries"
-->

> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the
> native query language. For information about how Druid SQL chooses which native query types to use when
> it runs a SQL query, refer to the [SQL documentation](sql.md#query-types).

Native queries in Druid are JSON objects and are typically issued to the Broker or Router processes. Queries can be
posted like this:
@ -70,15 +71,16 @@ Druid has numerous query types for various use cases. Queries are composed of va
* [SegmentMetadata](../querying/segmentmetadataquery.md)
* [DatasourceMetadata](../querying/datasourcemetadataquery.md)

### Other queries

* [Scan](../querying/scan-query.md)
* [Search](../querying/searchquery.md)

## Which query type should I use?

For aggregation queries, if more than one would satisfy your needs, we generally recommend using Timeseries or TopN
whenever possible, as they are specifically optimized for their use cases. If neither is a good fit, you should use
the GroupBy query, which is the most flexible.

## Query cancellation

View File

@ -23,6 +23,10 @@ sidebar_label: "Scan"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type in the native language. For information about when Druid SQL will use this query type, refer to the
> [SQL documentation](sql.md#query-types).
The Scan query returns raw Apache Druid rows in streaming mode.

View File

@ -23,6 +23,9 @@ sidebar_label: "Search"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type that is only available in the native language.
A search query returns dimension values that match the search specification. A search query returns dimension values that match the search specification.
@ -59,7 +62,7 @@ There are several main parts to a search query:
|limit| Defines the maximum number per Historical process (parsed as int) of search results to return. |no (default to 1000)|
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|yes|
|searchDimensions|The dimensions to run the search over. Excluding this means the search is run over all dimensions.|no|
|query|See [SearchQuerySpec](#searchqueryspec).|yes|
|sort|An object specifying how the results of the search should be sorted.<br/>Possible types are "lexicographic" (the default sort), "alphanumeric", "strlen", and "numeric".<br/>See [Sorting Orders](./sorting-orders.md) for more details.|no|
|context|See [Context](../querying/query-context.md)|no|
@ -139,3 +142,51 @@ The following query context parameters apply:
|Property|Description|
|--------|-----------|
|`searchStrategy`|Overrides the value of `druid.query.search.searchStrategy` for this query.|
## SearchQuerySpec
### `insensitive_contains`
If any part of a dimension value contains the value specified in this search query spec, regardless of case, a "match" occurs. The grammar is:
```json
{
"type" : "insensitive_contains",
"value" : "some_value"
}
```
### `fragment`
If any part of a dimension value contains all of the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is:
```json
{
"type" : "fragment",
"case_sensitive" : false,
"values" : ["fragment1", "fragment2"]
}
```
### `contains`
If any part of a dimension value contains the value specified in this search query spec, a "match" occurs. The grammar is:
```json
{
"type" : "contains",
"case_sensitive" : true,
"value" : "some_value"
}
```
### `regex`
If any part of a dimension value contains the pattern specified in this search query spec, a "match" occurs. The grammar is:
```json
{
"type" : "regex",
"pattern" : "some_pattern"
}
```
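For example, a complete Search query that plugs the `insensitive_contains` spec into the `query` field described above might look like this (the datasource, dimensions, and interval are hypothetical):

```json
{
  "queryType": "search",
  "dataSource": "wikipedia",
  "granularity": "all",
  "searchDimensions": ["page", "channel"],
  "query": {
    "type": "insensitive_contains",
    "value": "druid"
  },
  "sort": { "type": "lexicographic" },
  "intervals": ["2013-01-01T00:00:00.000/2013-01-03T00:00:00.000"]
}
```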

View File

@ -1,76 +0,0 @@
---
id: searchqueryspec
title: "Refining search queries"
---
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
Search query specs define how a "match" is defined between a search value and a dimension value. The available search query specs are:
InsensitiveContainsSearchQuerySpec
----------------------------------
If any part of a dimension value contains the value specified in this search query spec, regardless of case, a "match" occurs. The grammar is:
```json
{
"type" : "insensitive_contains",
"value" : "some_value"
}
```
FragmentSearchQuerySpec
-----------------------
If any part of a dimension value contains all of the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is:
```json
{
"type" : "fragment",
"case_sensitive" : false,
"values" : ["fragment1", "fragment2"]
}
```
ContainsSearchQuerySpec
----------------------------------
If any part of a dimension value contains the value specified in this search query spec, a "match" occurs. The grammar is:
```json
{
"type" : "contains",
"case_sensitive" : true,
"value" : "some_value"
}
```
RegexSearchQuerySpec
----------------------------------
If any part of a dimension value contains the pattern specified in this search query spec, a "match" occurs. The grammar is:
```json
{
"type" : "regex",
"pattern" : "some_pattern"
}
```

View File

@ -23,6 +23,10 @@ sidebar_label: "SegmentMetadata"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type that is only available in the native language. However, Druid SQL contains similar functionality in
> its [metadata tables](sql.md#metadata-tables).
Segment metadata queries return per-segment information about:

View File

@ -1,6 +1,6 @@
---
id: sorting-orders
title: "String comparators"
---
<!--
@ -22,6 +22,10 @@ title: "Sorting Orders"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about functions available in SQL, refer to the
> [SQL documentation](sql.md#scalar-functions).
These sorting orders are used by the [TopNMetricSpec](./topnmetricspec.md), [SearchQuery](./searchquery.md), GroupByQuery's [LimitSpec](./limitspec.md), and [BoundFilter](./filters.html#bound-filter).

View File

@ -31,39 +31,24 @@ sidebar_label: "Druid SQL"
-->

> Apache Druid supports two query languages: Druid SQL and [native queries](querying.md).
> This document describes the SQL language.

Druid SQL is a built-in SQL layer and an alternative to Druid's native JSON-based query language, and is powered by a
parser and planner based on [Apache Calcite](https://calcite.apache.org/). Druid SQL translates SQL into native Druid
queries on the query Broker (the first process you query), which are then passed down to data processes as native Druid
queries. Other than the (slight) overhead of [translating](#query-translation) SQL on the Broker, there isn't an
additional performance penalty versus native queries.

## Query syntax

Druid SQL supports SELECT queries with the following structure:
```
[ EXPLAIN PLAN FOR ]
[ WITH tableName [ ( column1, column2, ... ) ] AS ( query ) ]
SELECT [ ALL | DISTINCT ] { * | exprs }
FROM { <table> | (<subquery>) | <o1> [ INNER | LEFT ] JOIN <o2> ON condition }
[ WHERE expr ]
[ GROUP BY [ exprs | GROUPING SETS ( (exprs), ... ) | ROLLUP (exprs) | CUBE (exprs) ] ]
[ HAVING expr ]
@ -72,17 +57,34 @@ FROM table
[ UNION ALL <another query> ]
```
### FROM

The FROM clause can refer to any of the following:
- [Table datasources](datasource.html#table) from the `druid` schema. This is the default schema, so Druid table
datasources can be referenced as either `druid.dataSourceName` or simply `dataSourceName`.
- [Lookups](datasource.html#lookup) from the `lookup` schema, for example `lookup.countries`. Note that lookups can
also be queried using the [`LOOKUP` function](#string-functions).
- [Subqueries](datasource.html#query).
- [Joins](datasource.html#join) between anything in this list, except between native datasources (table, lookup,
query) and system tables. The join condition must be an equality between expressions from the left- and right-hand side
of the join.
- [Metadata tables](#metadata-tables) from the `INFORMATION_SCHEMA` or `sys` schemas. Unlike the other options for the
FROM clause, metadata tables are not considered datasources. They exist only in the SQL layer.
For more information about table, lookup, query, and join datasources, refer to the [Datasources](datasource.html)
documentation.
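For example, the following sketch (hypothetical table, lookup, and column names) reads from a table datasource joined to a lookup from the `lookup` schema; lookup tables expose their key and value as columns `k` and `v`:

```sql
SELECT c.v AS country_name, SUM(f.added) AS total_added
FROM flags f
INNER JOIN lookup.countries c ON f.countryIsoCode = c.k
GROUP BY c.v
```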
### WHERE
The WHERE clause refers to columns in the FROM table, and will be translated to [native filters](filters.html). The
WHERE clause can also reference a subquery, like `WHERE col1 IN (SELECT foo FROM ...)`. Queries like this are executed
as a join on the subquery, described below in the [Query translation](#subqueries) section.

### GROUP BY

The GROUP BY clause refers to columns in the FROM table. Using GROUP BY, DISTINCT, or any aggregation functions will
trigger an aggregation query using one of Druid's [three native aggregation query types](#query-types). GROUP BY
can refer to an expression or a select clause ordinal position (like `GROUP BY 2` to group by the second selected
column).
@ -102,27 +104,63 @@ When using GROUP BY GROUPING SETS, GROUP BY ROLLUP, or GROUP BY CUBE, be aware t
order that you specify your grouping sets in the query. If you need results to be generated in a particular order, use
the ORDER BY clause.
### HAVING
The HAVING clause refers to columns that are present after execution of GROUP BY. It can be used to filter on either
grouping expressions or aggregated values. It can only be used together with GROUP BY.
### ORDER BY
The ORDER BY clause refers to columns that are present after execution of GROUP BY. It can be used to order the results
based on either grouping expressions or aggregated values. ORDER BY can refer to an expression or a select clause
ordinal position (like `ORDER BY 2` to order by the second selected column). For non-aggregation queries, ORDER BY
can only order by the `__time` column. For aggregation queries, ORDER BY can order by any column.
### LIMIT
The LIMIT clause can be used to limit the number of rows returned. It can be used with any query type. It is pushed down
to Data processes for queries that run with the native TopN query type, but not the native GroupBy query type. Future
versions of Druid will support pushing down limits using the native GroupBy query type as well. If you notice that
adding a limit doesn't change performance very much, then it's likely that Druid didn't push down the limit for your
query.
### UNION ALL
The "UNION ALL" operator can be used to fuse multiple queries together. Their results will be concatenated, and each The "UNION ALL" operator can be used to fuse multiple queries together. Their results will be concatenated, and each
query will run separately, back to back (not in parallel). Druid does not currently support "UNION" without "ALL". query will run separately, back to back (not in parallel). Druid does not currently support "UNION" without "ALL".
UNION ALL must appear at the very outer layer of a SQL query (it cannot appear in a subquery or in the FROM clause).
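For example, the following query (hypothetical table names) runs each SELECT separately and concatenates their results:

```sql
SELECT COUNT(*) FROM clicks_us
UNION ALL
SELECT COUNT(*) FROM clicks_eu
```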
Add "EXPLAIN PLAN FOR" to the beginning of any query to see how it would be run as a native Druid query. In this case, Note that despite the similar name, UNION ALL is not the same thing as as [union datasource](datasource.md#union).
the query will not actually be executed. UNION ALL allows unioning the results of queries, whereas union datasources allow unioning tables.
## Data types and casts ### EXPLAIN PLAN
Add "EXPLAIN PLAN FOR" to the beginning of any query to get information about how it will be translated. In this case,
the query will not actually be executed. Refer to the [Query translation](#query-translation) documentation for help
interpreting EXPLAIN PLAN output.
### Identifiers and literals
Identifiers like datasource and column names can optionally be quoted using double quotes. To escape a double quote
inside an identifier, use another double quote, like `"My ""very own"" identifier"`. All identifiers are case-sensitive
and no implicit case conversions are performed.
Literal strings should be quoted with single quotes, like `'foo'`. Literal strings with Unicode escapes can be written
like `U&'fo\00F6'`, where character codes in hex are prefixed by a backslash. Literal numbers can be written in forms
like `100` (denoting an integer), `100.0` (denoting a floating point value), or `1.0e5` (scientific notation). Literal
timestamps can be written like `TIMESTAMP '2000-01-01 00:00:00'`. Literal intervals, used for time arithmetic, can be
written like `INTERVAL '1' HOUR`, `INTERVAL '1 02:03' DAY TO MINUTE`, `INTERVAL '1-2' YEAR TO MONTH`, and so on.
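For example, the following query (hypothetical datasource and column names) combines quoted identifiers, a string literal, a timestamp literal, and an interval literal:

```sql
SELECT "comment", COUNT(*) AS "count"
FROM wikipedia
WHERE channel = '#en.wikipedia'
  AND __time >= TIMESTAMP '2000-01-01 00:00:00'
  AND __time < TIMESTAMP '2000-01-01 00:00:00' + INTERVAL '1' DAY
GROUP BY "comment"
```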
### Dynamic parameters
Druid SQL supports dynamic parameters using question mark (`?`) syntax, where parameters are bound to `?` placeholders
at execution time. To use dynamic parameters, replace any literal in the query with a `?` character and provide a
corresponding parameter value when you execute the query. Parameters are bound to the placeholders in the order in
which they are passed. Parameters are supported in both the [HTTP POST](#http-post) and [JDBC](#jdbc) APIs.
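For example, a parameterized query issued over the HTTP POST API might use a request body like the following sketch (the datasource and parameter values are hypothetical; each parameter is an object with a SQL `type` and a `value`):

```json
{
  "query": "SELECT COUNT(*) FROM wikipedia WHERE channel = ? AND __time >= ?",
  "parameters": [
    { "type": "VARCHAR", "value": "#en.wikipedia" },
    { "type": "TIMESTAMP", "value": "2020-01-01 00:00:00" }
  ]
}
```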
## Data types
### Standard types
Druid natively supports five basic column types: "long" (64 bit signed int), "float" (32 bit float), "double" (64 bit
float), "string" (UTF-8 encoded strings and string arrays), and "complex" (catch-all for more exotic data types like
@ -133,31 +171,7 @@ milliseconds since 1970-01-01 00:00:00 UTC, not counting leap seconds. Therefore
timezone information, but only carry information about the exact moment in time they represent. See the
[Time functions](#time-functions) section for more information about timestamp handling.
The following table describes how Druid maps SQL types onto native types at query runtime. Casts between two SQL types
that have the same Druid runtime type will have no effect, other than exceptions noted in the table. Casts between two
SQL types that have different Druid runtime types will generate a runtime cast in Druid. If a value cannot be properly
cast to another value, as in `CAST('foo' AS BIGINT)`, the runtime will substitute a default value. NULL values cast
@ -167,7 +181,7 @@ converted to zeroes).
|SQL type|Druid runtime type|Default value|Notes|
|--------|------------------|-------------|-----|
|CHAR|STRING|`''`||
|VARCHAR|STRING|`''`|Druid STRING columns are reported as VARCHAR. Can include [multi-value strings](#multi-value-strings) as well.|
|DECIMAL|DOUBLE|`0.0`|DECIMAL uses floating point, not fixed point math|
|FLOAT|FLOAT|`0.0`|Druid FLOAT columns are reported as FLOAT|
|REAL|DOUBLE|`0.0`||
@ -181,9 +195,36 @@ converted to zeroes).
|DATE|LONG|`0`, meaning 1970-01-01|Casting TIMESTAMP to DATE rounds down the timestamp to the nearest day. Casts between string and date types assume standard SQL formatting, e.g. `2000-01-02`. For handling other formats, use one of the [time functions](#time-functions)|
|OTHER|COMPLEX|none|May represent various Druid column types such as hyperUnique, approxHistogram, etc|
### Multi-value strings

Druid's native type system allows strings to potentially have multiple values. These
[multi-value string dimensions](multi-value-dimensions.html) will be reported in SQL as `VARCHAR` typed, and can be
syntactically used like any other VARCHAR. Regular string functions that refer to multi-value string dimensions will be
applied to all values for each row individually. Multi-value string dimensions can also be treated as arrays via special
[multi-value string functions](#multi-value-string-functions), which can perform powerful array-aware operations.
Grouping by a multi-value expression will observe the native Druid multi-value aggregation behavior, which is similar to
the `UNNEST` functionality available in some other SQL dialects. Refer to the documentation on
[multi-value string dimensions](multi-value-dimensions.html) for additional details.
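As a sketch, assuming a hypothetical multi-value `tags` column on an `events` datasource:

```sql
-- Grouping on a multi-value column produces one output row per individual value,
-- similar to UNNEST in other dialects.
SELECT tags, COUNT(*) AS "count"
FROM events
GROUP BY tags

-- MV_TO_STRING treats the multi-value column as an array and joins its values
-- into a single string per row.
SELECT MV_TO_STRING(tags, ',') AS tag_list
FROM events
```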
### NULL values
The `druid.generic.useDefaultValueForNull` [runtime property](../configuration/index.html#sql-compatible-null-handling)
controls Druid's NULL handling mode.
In the default mode (`true`), Druid treats NULLs and empty strings interchangeably, rather than according to the SQL
standard. In this mode Druid SQL only has partial support for NULLs. For example, the expressions `col IS NULL` and
`col = ''` are equivalent, and both will evaluate to true if `col` contains an empty string. Similarly, the expression
`COALESCE(col1, col2)` will return `col2` if `col1` is an empty string. While the `COUNT(*)` aggregator counts all rows,
the `COUNT(expr)` aggregator will count the number of rows where expr is neither null nor the empty string. Numeric
columns in this mode are not nullable; any null or missing values will be treated as zeroes.
In SQL compatible mode (`false`), NULLs are treated more closely to the SQL standard. The property affects both storage
and querying, so for best behavior, it should be set at both ingestion time and query time. There is some overhead
associated with the ability to handle NULLs; see the [segment internals](../design/segments.md#sql-compatible-null-handling)
documentation for more details.
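For example, in the default mode the two filters below match the same rows, and the two counts can differ whenever the (hypothetical) `comment` column contains empty strings:

```sql
-- Equivalent in default mode: both match rows where "comment" is empty or missing.
SELECT COUNT(*) FROM wikipedia WHERE "comment" IS NULL
SELECT COUNT(*) FROM wikipedia WHERE "comment" = ''

-- COUNT(*) counts every row; COUNT("comment") skips rows where "comment" is null or empty.
SELECT COUNT(*) AS all_rows, COUNT("comment") AS non_empty_comments FROM wikipedia
```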
## Aggregation functions
Aggregation functions can appear in the SELECT clause of any query. Any aggregator can be filtered using syntax like
`AGG(expr) FILTER(WHERE whereExpr)`. Filtered aggregators will only aggregate rows that match their filter. It's
@ -224,13 +265,15 @@ Only the COUNT aggregation can accept DISTINCT.
|`ANY_VALUE(expr)`|Returns any value of `expr` including null. `expr` must be numeric. This aggregator can simplify and optimize the performance by returning the first encountered value (including null)|
|`ANY_VALUE(expr, maxBytesPerString)`|Like `ANY_VALUE(expr)`, but for strings. The `maxBytesPerString` parameter determines how much aggregation space to allocate per string. Strings longer than this limit will be truncated. This parameter should be set as low as possible, since high values will lead to wasted memory.|

For advice on choosing approximate aggregation functions, check out our [approximate aggregations documentation](aggregations.html#approx).
## Scalar functions
### Numeric functions

For mathematical operations, Druid SQL will use integer math if all operands involved in an expression are integers.
Otherwise, Druid will switch to floating point math. You can force this to happen by casting one of your operands
to FLOAT. At runtime, Druid will widen 32-bit floats to 64-bit for most expressions.
|Function|Notes|
|--------|-----|
@ -275,7 +318,7 @@ String functions accept strings, and return a type appropriate to the function.
|`CHAR_LENGTH(expr)`|Synonym for `LENGTH`.|
|`CHARACTER_LENGTH(expr)`|Synonym for `LENGTH`.|
|`STRLEN(expr)`|Synonym for `LENGTH`.|
|`LOOKUP(expr, lookupName)`|Look up expr in a registered [query-time lookup table](lookups.html). Note that lookups can also be queried directly using the [`lookup` schema](#from).|
|`LOWER(expr)`|Returns expr in all lowercase.|
|`PARSE_LONG(string[, radix])`|Parses a string into a long (BIGINT) with the given radix, or 10 (decimal) if a radix is not provided.|
|`POSITION(needle IN haystack [FROM fromIndex])`|Returns the index of needle within haystack, with indexes starting from 1. The search will begin at fromIndex, or 1 if fromIndex is not specified. If the needle is not found, returns 0.|
@ -384,13 +427,68 @@ argument should be a string formatted as an IPv4 address subnet in CIDR notation
|`x IS NOT FALSE`|True if x is not false.|
|`x IN (values)`|True if x is one of the listed values.|
|`x NOT IN (values)`|True if x is not one of the listed values.|
|`x IN (subquery)`|True if x is returned by the subquery. This will be translated into a join; see [Query translation](#query-translation) for details.|
|`x NOT IN (subquery)`|True if x is not returned by the subquery. This will be translated into a join; see [Query translation](#query-translation) for details.|
|`x AND y`|Boolean AND.|
|`x OR y`|Boolean OR.|
|`NOT x`|Boolean NOT.|
### Sketch functions
These functions operate on expressions or columns that return sketch objects.
#### HLL sketch functions
The following functions operate on [DataSketches HLL sketches](../development/extensions-core/datasketches-hll.html).
The [DataSketches extension](../development/extensions-core/datasketches-extension.html) must be loaded to use the following functions.
|Function|Notes|
|--------|-----|
|`HLL_SKETCH_ESTIMATE(expr, [round])`|Returns the distinct count estimate from an HLL sketch. `expr` must return an HLL sketch. The optional `round` boolean parameter will round the estimate if set to `true`, with a default of `false`.|
|`HLL_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS(expr, [numStdDev])`|Returns the distinct count estimate and error bounds from an HLL sketch. `expr` must return an HLL sketch. An optional `numStdDev` argument can be provided.|
|`HLL_SKETCH_UNION([lgK, tgtHllType], expr0, expr1, ...)`|Returns a union of HLL sketches, where each input expression must return an HLL sketch. The `lgK` and `tgtHllType` can be optionally specified as the first parameter; if provided, both optional parameters must be specified.|
|`HLL_SKETCH_TO_STRING(expr)`|Returns a human-readable string representation of an HLL sketch for debugging. `expr` must return an HLL sketch.|
#### Theta sketch functions
The following functions operate on [theta sketches](../development/extensions-core/datasketches-theta.html).
The [DataSketches extension](../development/extensions-core/datasketches-extension.html) must be loaded to use the following functions.
|Function|Notes|
|--------|-----|
|`THETA_SKETCH_ESTIMATE(expr)`|Returns the distinct count estimate from a theta sketch. `expr` must return a theta sketch.|
|`THETA_SKETCH_ESTIMATE_WITH_ERROR_BOUNDS(expr, errorBoundsStdDev)`|Returns the distinct count estimate and error bounds from a theta sketch. `expr` must return a theta sketch.|
|`THETA_SKETCH_UNION([size], expr0, expr1, ...)`|Returns a union of theta sketches, where each input expression must return a theta sketch. The `size` can be optionally specified as the first parameter.|
|`THETA_SKETCH_INTERSECT([size], expr0, expr1, ...)`|Returns an intersection of theta sketches, where each input expression must return a theta sketch. The `size` can be optionally specified as the first parameter.|
|`THETA_SKETCH_NOT([size], expr0, expr1, ...)`|Returns a set difference of theta sketches, where each input expression must return a theta sketch. The `size` can be optionally specified as the first parameter.|
#### Quantiles sketch functions
The following functions operate on [quantiles sketches](../development/extensions-core/datasketches-quantiles.html).
The [DataSketches extension](../development/extensions-core/datasketches-extension.html) must be loaded to use the following functions.
|Function|Notes|
|--------|-----|
|`DS_GET_QUANTILE(expr, fraction)`|Returns the quantile estimate corresponding to `fraction` from a quantiles sketch. `expr` must return a quantiles sketch.|
|`DS_GET_QUANTILES(expr, fraction0, fraction1, ...)`|Returns a string representing an array of quantile estimates corresponding to a list of fractions from a quantiles sketch. `expr` must return a quantiles sketch.|
|`DS_HISTOGRAM(expr, splitPoint0, splitPoint1, ...)`|Returns a string representing an approximation to the histogram given a list of split points that define the histogram bins from a quantiles sketch. `expr` must return a quantiles sketch.|
|`DS_CDF(expr, splitPoint0, splitPoint1, ...)`|Returns a string representing approximation to the Cumulative Distribution Function given a list of split points that define the edges of the bins from a quantiles sketch. `expr` must return a quantiles sketch.|
|`DS_RANK(expr, value)`|Returns an approximation to the rank of a given value that is the fraction of the distribution less than that value from a quantiles sketch. `expr` must return a quantiles sketch.|
|`DS_QUANTILE_SUMMARY(expr)`|Returns a string summary of a quantiles sketch, useful for debugging. `expr` must return a quantiles sketch.|
### Other scalar functions
|Function|Notes|
|--------|-----|
|`CAST(value AS TYPE)`|Cast value to another type. See [Data types](#data-types) for details about how Druid SQL handles CAST.|
|`CASE expr WHEN value1 THEN result1 \[ WHEN value2 THEN result2 ... \] \[ ELSE resultN \] END`|Simple CASE.|
|`CASE WHEN boolean_expr1 THEN result1 \[ WHEN boolean_expr2 THEN result2 ... \] \[ ELSE resultN \] END`|Searched CASE.|
|`NULLIF(value1, value2)`|Returns NULL if value1 and value2 match, else returns value1.|
|`COALESCE(value1, value2, ...)`|Returns the first value that is neither NULL nor empty string.|
|`NVL(expr,expr-for-null)`|Returns 'expr-for-null' if 'expr' is null (or empty string for string type).|
|`BLOOM_FILTER_TEST(<expr>, <serialized-filter>)`|Returns true if the value is contained in a Base64-serialized bloom filter. See the [Bloom filter extension](../development/extensions-core/bloom-filter.html) documentation for additional details.|
## Multi-value string functions
All 'array' references in the multi-value string function documentation can refer to multi-value string columns or
`ARRAY` literals.
@ -412,84 +510,114 @@ All 'array' references in the multi-value string function documentation can refe
| `MV_TO_STRING(arr,str)` | joins all elements of arr by the delimiter specified by str |
| `STRING_TO_MV(str1,str2)` | splits str1 into an array on the delimiter specified by str2 |
## Query translation

Druid SQL translates SQL queries to [native queries](querying.md) before running them, and understanding how this
translation works is key to getting good performance.

### Best practices

Consider this (non-exhaustive) list of things to look out for when looking into the performance implications of
how your SQL queries are translated to native queries.

1. If you wrote a filter on the primary time column `__time`, make sure it is being correctly translated to an
`"intervals"` filter, as described in the [Time filters](#time-filters) section below. If not, you may need to change
the way you write the filter.

2. Try to avoid subqueries underneath joins: they affect both performance and scalability. This includes implicit
subqueries generated by conditions on mismatched types, and implicit subqueries generated by conditions that use
expressions to refer to the right-hand side.

3. Read through the [Query execution](query-execution.md) page to understand how various types of native queries
will be executed.

4. Be careful when interpreting EXPLAIN PLAN output, and use request logging if in doubt. Request logs will show the
exact native query that was run. See the [next section](#interpreting-explain-plan-output) for more details.

5. If you encounter a query that could be planned better, feel free to
[raise an issue on GitHub](https://github.com/apache/druid/issues/new/choose). A reproducible test case is always
appreciated.

### Interpreting EXPLAIN PLAN output

The [EXPLAIN PLAN](#explain-plan) functionality can help you understand how a given SQL query will
be translated to native. For simple queries that do not involve subqueries or joins, the output of EXPLAIN PLAN
is easy to interpret. The native query that will run is embedded as JSON inside a "DruidQueryRel" line:

```
> EXPLAIN PLAN FOR SELECT COUNT(*) FROM wikipedia

DruidQueryRel(query=[{"queryType":"timeseries","dataSource":"wikipedia","intervals":"-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z","granularity":"all","aggregations":[{"type":"count","name":"a0"}]}], signature=[{a0:LONG}])
```

For more complex queries that do involve subqueries or joins, EXPLAIN PLAN is somewhat more difficult to interpret.
For example, consider this query:

```
> EXPLAIN PLAN FOR
> SELECT
>   channel,
>   COUNT(*)
> FROM wikipedia
> WHERE channel IN (SELECT page FROM wikipedia GROUP BY page ORDER BY COUNT(*) DESC LIMIT 10)
> GROUP BY channel

DruidJoinQueryRel(condition=[=($1, $3)], joinType=[inner], query=[{"queryType":"groupBy","dataSource":{"type":"table","name":"__join__"},"intervals":{"type":"intervals","intervals":["-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"]},"granularity":"all","dimensions":["channel"],"aggregations":[{"type":"count","name":"a0"}]}], signature=[{d0:STRING, a0:LONG}])
  DruidQueryRel(query=[{"queryType":"scan","dataSource":{"type":"table","name":"wikipedia"},"intervals":{"type":"intervals","intervals":["-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"]},"resultFormat":"compactedList","columns":["__time","channel","page"],"granularity":"all"}], signature=[{__time:LONG, channel:STRING, page:STRING}])
  DruidQueryRel(query=[{"queryType":"topN","dataSource":{"type":"table","name":"wikipedia"},"dimension":"page","metric":{"type":"numeric","metric":"a0"},"threshold":10,"intervals":{"type":"intervals","intervals":["-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"]},"granularity":"all","aggregations":[{"type":"count","name":"a0"}]}], signature=[{d0:STRING}])
```

Here, there is a join with two inputs. The way to read this is to consider each line of the EXPLAIN PLAN output as
something that might become a query, or might just become a simple datasource. The `query` field they all have is
called a "partial query" and represents what query would be run on the datasource represented by that line, if that
line ran by itself. In some cases — like the "scan" query in the second line of this example — the query does not
actually run, and it ends up being translated to a simple table datasource. See the [Join translation](#joins) section
for more details about how this works.

We can see this for ourselves using Druid's [request logging](../configuration/index.md#request-logging) feature. After
enabling logging and running this query, we can see that it actually runs as the following native query.

```json
{
"queryType": "groupBy",
"dataSource": {
"type": "join",
"left": "wikipedia",
"right": {
"type": "query",
"query": {
"queryType": "topN",
"dataSource": "wikipedia",
"dimension": {"type": "default", "dimension": "page", "outputName": "d0"},
"metric": {"type": "numeric", "metric": "a0"},
"threshold": 10,
"intervals": "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z",
"granularity": "all",
"aggregations": [
{ "type": "count", "name": "a0"}
]
}
},
"rightPrefix": "j0.",
"condition": "(\"page\" == \"j0.d0\")",
"joinType": "INNER"
},
"intervals": "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z",
"granularity": "all",
"dimensions": [
{"type": "default", "dimension": "channel", "outputName": "d0"}
],
"aggregations": [
{ "type": "count", "name": "a0"}
]
}
```
### Query types

Druid SQL uses four different native query types.

- [Scan](scan-query.html) is used for queries that do not aggregate (no GROUP BY, no DISTINCT).
- [Timeseries](timeseriesquery.html) is used for queries that GROUP BY `FLOOR(__time TO <unit>)` or `TIME_FLOOR(__time,
period)`, have no other grouping expressions, no HAVING or LIMIT clauses, no nesting, and either no ORDER BY, or an
@ -511,23 +639,50 @@ don't appear in the GROUP BY clause (like aggregation functions) then the Broker
memory, up to a max of your LIMIT, if any. See the GroupBy documentation for details about tuning performance and memory
use.
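
As an illustrative sketch (the `wikipedia` datasource and column names here are hypothetical), a query that groups only
on a time floor is eligible for the Timeseries engine, while adding another grouping expression pushes it to GroupBy.
You can confirm the engine chosen for your own queries with `EXPLAIN PLAN FOR`.

```sql
-- Likely planned as a native Timeseries query: the only grouping expression is FLOOR(__time TO HOUR).
SELECT FLOOR(__time TO HOUR) AS "hour", COUNT(*) AS cnt
FROM wikipedia
GROUP BY FLOOR(__time TO HOUR);

-- Likely planned as a native GroupBy query: it groups on an additional dimension.
SELECT FLOOR(__time TO HOUR) AS "hour", channel, COUNT(*) AS cnt
FROM wikipedia
GROUP BY FLOOR(__time TO HOUR), channel;
```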
### Time filters

For all native query types, filters on the `__time` column will be translated into top-level query "intervals" whenever
possible, which allows Druid to use its global time index to quickly prune the set of data that must be scanned.
Consider this (non-exhaustive) list of time filters that will be recognized and translated to "intervals":

- `__time >= TIMESTAMP '2000-01-01 00:00:00'` (comparison to absolute time)
- `__time >= CURRENT_TIMESTAMP - INTERVAL '8' HOUR` (comparison to relative time)
- `FLOOR(__time TO DAY) = TIMESTAMP '2000-01-01 00:00:00'` (specific day)
Refer to the [Interpreting EXPLAIN PLAN output](#interpreting-explain-plan-output) section for details on confirming
that time filters are being translated as you expect.
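
For example, a filter like the one in this sketch (the datasource and column names are hypothetical) can be converted
into top-level "intervals", so Druid can prune segments before scanning them:

```sql
-- The __time comparison below is recognized and becomes the native query's "intervals".
SELECT channel, COUNT(*) AS edits
FROM wikipedia
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '8' HOUR
GROUP BY channel;
```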
### Joins
SQL join operators are translated to native join datasources as follows:
1. Joins that the native layer can handle directly are translated literally, to a [join datasource](datasource.md#join)
whose `left`, `right`, and `condition` are faithful translations of the original SQL. This includes any SQL join where
the right-hand side is a lookup or subquery, and where the condition is an equality where one side is an expression based
on the left-hand table, the other side is a simple column reference to the right-hand table, and both sides of the
equality are the same data type.
2. If a join cannot be handled directly by a native [join datasource](datasource.md#join) as written, Druid SQL
will insert subqueries to make it runnable. For example, `foo INNER JOIN bar ON foo.abc = LOWER(bar.def)` cannot be
directly translated, because there is an expression on the right-hand side instead of a simple column access. A subquery
will be inserted that effectively transforms this clause to
`foo INNER JOIN (SELECT LOWER(def) AS def FROM bar) t ON foo.abc = t.def`.
3. Druid SQL does not currently reorder joins to optimize queries.
Refer to the [Interpreting EXPLAIN PLAN output](#interpreting-explain-plan-output) section for details on confirming
that joins are being translated as you expect.
Refer to the [Query execution](query-execution.md#join) page for information about how joins are executed.
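
As a sketch of case 1 above, assuming a lookup named `channel_names` exists (the lookup and column names here are
hypothetical), the right-hand side is a lookup and the condition compares a simple column reference, so the join can be
translated directly to a native join datasource:

```sql
-- Lookups expose "k" (key) and "v" (value) columns; the join condition references them directly.
SELECT w.channel, l.v AS channel_description, COUNT(*) AS edits
FROM wikipedia w
INNER JOIN lookup.channel_names l ON w.channel = l.k
GROUP BY w.channel, l.v;
```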
### Subqueries
Subqueries in SQL are generally translated to native query datasources. Refer to the
[Query execution](query-execution.md#query) page for information about how subqueries are executed.
> Note: Subqueries in the WHERE clause, like `WHERE col1 IN (SELECT foo FROM ...)`, are translated to inner joins.
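
For instance, a WHERE-clause subquery like this sketch (names are hypothetical) is planned as an inner join against a
query datasource rather than as a literal filter:

```sql
-- The IN subquery becomes the right-hand side of an inner join on "channel".
SELECT page, COUNT(*) AS edits
FROM wikipedia
WHERE channel IN (SELECT channel FROM wikipedia GROUP BY channel ORDER BY COUNT(*) DESC LIMIT 10)
GROUP BY page;
```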
### Approximations
@ -535,31 +690,51 @@ Druid SQL will use approximate algorithms in some situations:

- The `COUNT(DISTINCT col)` aggregation function by default uses a variant of
[HyperLogLog](http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf), a fast approximate distinct counting
algorithm. Druid SQL will switch to exact distinct counts if you set "useApproximateCountDistinct" to "false", either
through query context or through Broker configuration.
- GROUP BY queries over a single column with ORDER BY and LIMIT may be executed using the TopN engine, which uses an
approximate algorithm. Druid SQL will switch to an exact grouping algorithm if you set "useApproximateTopN" to "false",
either through query context or through Broker configuration.
- Aggregation functions that are labeled as using sketches or approximations, such as APPROX_COUNT_DISTINCT, are always
approximate, regardless of configuration.
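
For example (a sketch; the datasource and column are hypothetical), the first query below is approximate by default but
can be made exact through configuration, while the second is always approximate:

```sql
-- Approximate by default; set "useApproximateCountDistinct" to false (query context or Broker
-- configuration) to get an exact, but slower and more memory-intensive, distinct count.
SELECT COUNT(DISTINCT "user") AS unique_users FROM wikipedia;

-- Always approximate, regardless of configuration.
SELECT APPROX_COUNT_DISTINCT("user") AS unique_users FROM wikipedia;
```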
### Unsupported features
Druid does not support all SQL features. In particular, the following features are not supported.
- JOIN between native datasources (table, lookup, subquery) and system tables.
- JOIN conditions that are not an equality between expressions from the left- and right-hand sides.
- OVER clauses, and analytic functions such as `LAG` and `LEAD`.
- OFFSET clauses.
- DDL and DML.
- Using Druid-specific functions like `TIME_PARSE` and `APPROX_QUANTILE_DS` on [metadata tables](#metadata-tables).
Additionally, some Druid native query features are not supported by the SQL language. Some unsupported Druid features
include:
- [Union datasources](datasource.html#union).
- [Inline datasources](datasource.html#inline).
- [Spatial filters](../development/geo.html).
- [Query cancellation](querying.html#query-cancellation).
## Client APIs

<a name="json-over-http"></a>

### HTTP POST

You can make Druid SQL queries using HTTP via POST to the endpoint `/druid/v2/sql/`. The request should
be a JSON object with a "query" field, like `{"query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar'"}`.

##### Request

|Property|Description|Default|
|--------|-----------|-------|
|`query`|SQL query string.|none (required)|
|`resultFormat`|Format of query results. See [Responses](#responses) for details.|`"object"`|
|`header`|Whether or not to include a header. See [Responses](#responses) for details.|`false`|
|`context`|JSON object containing [connection context parameters](#connection-context).|`{}` (empty)|
|`parameters`|List of query parameters for parameterized queries. Each parameter in the list should be a JSON object like `{"type": "VARCHAR", "value": "foo"}`. The type should be a SQL type; see [Data types](#data-types) for a list of supported SQL types.|`[]` (empty)|
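
For instance, a parameterized query is written with `?` placeholders in the SQL text; each placeholder is bound, in
order, by an entry in the `parameters` list, such as `[{"type": "VARCHAR", "value": "bar"}, {"type": "BIGINT", "value": 100}]`.
(This is a sketch; the datasource and column names are hypothetical.)

```sql
-- Each ? is replaced, in order, by the corresponding entry in "parameters".
SELECT COUNT(*)
FROM data_source
WHERE foo = ? AND cnt > ?
```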
You can use _curl_ to send SQL queries from the command-line:
@ -595,19 +770,12 @@ Parameterized SQL queries are also supported:
}
```

Metadata is available over HTTP POST by querying [metadata tables](#metadata-tables).
#### Responses

Druid SQL's HTTP POST API supports a variety of result formats. You can specify these by adding a "resultFormat"
parameter, like:

```json
{
@ -690,7 +858,7 @@ enabled. The Druid Router process provides connection stickiness when balancing
the necessary stickiness even with a normal non-sticky load balancer. Please see the
[Router](../design/router.md) documentation for more details.

Note that the non-JDBC [JSON over HTTP](#http-post) API is stateless and does not require stickiness.

### Dynamic Parameters
@ -741,6 +909,9 @@ datasource "foo", use the query:
SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_NAME = 'foo'
```
> Note: INFORMATION_SCHEMA tables do not currently support Druid-specific functions like `TIME_PARSE` and
> `APPROX_QUANTILE_DS`. Only standard SQL functions can be used.
#### SCHEMATA table

|Column|Notes|
@ -788,6 +959,9 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
The "sys" schema provides visibility into Druid segments, servers and tasks. The "sys" schema provides visibility into Druid segments, servers and tasks.
> Note: "sys" tables do not currently support Druid-specific functions like `TIME_PARSE` and
> `APPROX_QUANTILE_DS`. Only standard SQL functions can be used.
#### SEGMENTS table

Segments table provides details on all Druid segments, whether they are published yet or not.
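
For example, a query along these lines (a sketch; the datasource name is hypothetical, and only a few of the available
columns are shown) lists segment sizes for a single datasource:

```sql
-- sys.segments includes columns such as "segment_id", "datasource", "size", and "is_published".
SELECT "segment_id", "size", "is_published"
FROM sys.segments
WHERE "datasource" = 'wikipedia';
```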
@ -923,43 +1097,12 @@ For example, to retrieve supervisor tasks information filtered by health status,
SELECT * FROM sys.supervisors WHERE healthy=0;
```

## Server configuration

Druid SQL planning occurs on the Broker and is configured by
[Broker runtime properties](../configuration/index.html#sql).

## Security

Please see [Defining SQL permissions](../development/extensions-core/druid-basic-security.html#sql-permissions) in the
basic security documentation for information on what permissions are needed for making SQL queries.
@ -23,6 +23,9 @@ sidebar_label: "TimeBoundary"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type that is only available in the native language.
Time boundary queries return the earliest and latest data points of a data set. The grammar is:
@ -23,6 +23,10 @@ sidebar_label: "Timeseries"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type in the native language. For information about when Druid SQL will use this query type, refer to the
> [SQL documentation](sql.md#query-types).
These types of queries take a timeseries query object and return an array of JSON objects where each object represents a value asked for by the timeseries query.
@ -1,6 +1,6 @@
---
id: topnmetricspec
title: "Sorting (topN)"
---

<!--
@ -22,6 +22,9 @@ title: "TopNMetricSpec"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about sorting in SQL, refer to the [SQL documentation](sql.md#order-by).
In Apache Druid, the topN metric spec specifies how topN values should be sorted.
@ -23,6 +23,10 @@ sidebar_label: "TopN"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes a query
> type in the native language. For information about when Druid SQL will use this query type, refer to the
> [SQL documentation](sql.md#query-types).
Apache Druid TopN queries return a sorted set of results for the values in a given dimension according to some criteria. Conceptually, they can be thought of as an approximate [GroupByQuery](../querying/groupbyquery.md) over a single dimension with an [Ordering](../querying/limitspec.md) spec. TopNs are much faster and resource efficient than GroupBys for this use case. These types of queries take a topN query object and return an array of JSON objects where each object represents a value asked for by the topN query.
@ -1,6 +1,6 @@
---
id: virtual-columns
title: "Virtual columns"
---

<!--
@ -22,6 +22,10 @@ title: "Virtual Columns"
~ under the License.
-->
> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
> language. For information about functions available in SQL, refer to the
> [SQL documentation](sql.md#scalar-functions).
Virtual columns are queryable column "views" created from a set of columns during a query.
@ -1591,6 +1591,7 @@ _default_tier
addr
affinityConfig
allowAll
ANDed
array_mod
batch_index_task
cgroup
@ -1618,6 +1619,7 @@ druid_supervisors
druid_taskLock
druid_taskLog
druid_tasks
DruidQueryRel
ec2
equalDistribution
extractionFn
@ -1634,6 +1636,7 @@ hdfs
httpRemote
indexTask
info_dir
inlining
java.class.path
java.io.tmpdir
javaOpts
@ -1650,6 +1653,7 @@ middlemanager
minTimeMs
minmax
mins
nullable
orderby
orderbys
org.apache.druid
@ -1658,6 +1662,7 @@ org.apache.hadoop
overlord.html
pendingSegments
pre-flight
preloaded
queryType
remoteTaskRunnerConfig
rendezvousHash
@ -1679,6 +1684,7 @@ tmp
tmpfs
truststore
tuningConfig
unioning
useIndexes
user.timezone
v0.12.0
@ -179,6 +179,12 @@
"development/extensions-core/druid-lookups": { "development/extensions-core/druid-lookups": {
"title": "Cached Lookup Module" "title": "Cached Lookup Module"
}, },
"development/extensions-core/druid-pac4j": {
"title": "Druid pac4j based Security extension"
},
"development/extensions-core/druid-ranger-security": {
"title": "Apache Ranger Security"
},
"development/extensions-core/examples": { "development/extensions-core/examples": {
"title": "Extension Examples" "title": "Extension Examples"
}, },
@ -377,20 +383,23 @@
"sidebar_label": "DatasourceMetadata" "sidebar_label": "DatasourceMetadata"
}, },
"querying/dimensionspecs": { "querying/dimensionspecs": {
"title": "Transforming Dimension Values" "title": "Query dimensions",
"sidebar_label": "Dimensions"
}, },
"querying/filters": { "querying/filters": {
"title": "Query Filters" "title": "Query filters",
"sidebar_label": "Filters"
}, },
"querying/granularities": { "querying/granularities": {
"title": "Aggregation Granularity" "title": "Query granularities",
"sidebar_label": "Granularities"
}, },
"querying/groupbyquery": { "querying/groupbyquery": {
"title": "GroupBy queries", "title": "GroupBy queries",
"sidebar_label": "GroupBy" "sidebar_label": "GroupBy"
}, },
"querying/having": { "querying/having": {
"title": "Filter groupBy query results" "title": "Having filters (groupBy)"
}, },
"querying/hll-old": { "querying/hll-old": {
"title": "Cardinality/HyperUnique aggregators" "title": "Cardinality/HyperUnique aggregators"
@ -399,7 +408,7 @@
"title": "Joins" "title": "Joins"
}, },
"querying/limitspec": { "querying/limitspec": {
"title": "Sort groupBy query results" "title": "Sorting and limiting (groupBy)"
}, },
"querying/lookups": { "querying/lookups": {
"title": "Lookups" "title": "Lookups"
@ -408,17 +417,21 @@
"title": "Multi-value dimensions" "title": "Multi-value dimensions"
}, },
"querying/multitenancy": { "querying/multitenancy": {
"title": "Multitenancy considerations" "title": "Multitenancy considerations",
"sidebar_label": "Multitenancy"
}, },
"querying/post-aggregations": { "querying/post-aggregations": {
"title": "Post-Aggregations" "title": "Postaggregations"
}, },
"querying/query-context": { "querying/query-context": {
"title": "Query context" "title": "Query context",
"sidebar_label": "Context parameters"
},
"querying/query-execution": {
"title": "Query execution"
},
"querying/querying": {
"title": "Native queries"
},
"querying/scan-query": {
"title": "Scan queries",
@ -428,9 +441,6 @@
"title": "Search queries", "title": "Search queries",
"sidebar_label": "Search" "sidebar_label": "Search"
}, },
"querying/searchqueryspec": {
"title": "Refining search queries"
},
"querying/segmentmetadataquery": { "querying/segmentmetadataquery": {
"title": "SegmentMetadata queries", "title": "SegmentMetadata queries",
"sidebar_label": "SegmentMetadata" "sidebar_label": "SegmentMetadata"
@ -440,7 +450,7 @@
"sidebar_label": "Select" "sidebar_label": "Select"
}, },
"querying/sorting-orders": { "querying/sorting-orders": {
"title": "Sorting Orders" "title": "String comparators"
}, },
"querying/sql": { "querying/sql": {
"title": "SQL", "title": "SQL",
@ -455,14 +465,14 @@
"sidebar_label": "Timeseries" "sidebar_label": "Timeseries"
}, },
"querying/topnmetricspec": { "querying/topnmetricspec": {
"title": "TopNMetricSpec" "title": "Sorting (topN)"
}, },
"querying/topnquery": { "querying/topnquery": {
"title": "TopN queries", "title": "TopN queries",
"sidebar_label": "TopN" "sidebar_label": "TopN"
}, },
"querying/virtual-columns": { "querying/virtual-columns": {
"title": "Virtual Columns" "title": "Virtual columns"
}, },
"tutorials/cluster": { "tutorials/cluster": {
"title": "Clustered deployment" "title": "Clustered deployment"
@ -535,7 +545,7 @@
"Getting started": "Getting started", "Getting started": "Getting started",
"Tutorials": "Tutorials", "Tutorials": "Tutorials",
"Design": "Design", "Design": "Design",
"Data ingestion": "Data ingestion", "Ingestion": "Ingestion",
"Querying": "Querying", "Querying": "Querying",
"Configuration": "Configuration", "Configuration": "Configuration",
"Operations": "Operations", "Operations": "Operations",
@ -78,7 +78,7 @@
{"source": "Router.html", "target": "design/router.html"} {"source": "Router.html", "target": "design/router.html"}
{"source": "Rule-Configuration.html", "target": "operations/rule-configuration.html"} {"source": "Rule-Configuration.html", "target": "operations/rule-configuration.html"}
{"source": "SearchQuery.html", "target": "querying/searchquery.html"} {"source": "SearchQuery.html", "target": "querying/searchquery.html"}
{"source": "SearchQuerySpec.html", "target": "querying/searchqueryspec.html"} {"source": "SearchQuerySpec.html", "target": "querying/searchquery.html"}
{"source": "SegmentMetadataQuery.html", "target": "querying/segmentmetadataquery.html"} {"source": "SegmentMetadataQuery.html", "target": "querying/segmentmetadataquery.html"}
{"source": "Segments.html", "target": "design/segments.html"} {"source": "Segments.html", "target": "design/segments.html"}
{"source": "SelectQuery.html", "target": "querying/select-query.html"} {"source": "SelectQuery.html", "target": "querying/select-query.html"}
@ -184,6 +184,7 @@
{"source": "operations/multitenancy.html", "target": "../querying/multitenancy.html"} {"source": "operations/multitenancy.html", "target": "../querying/multitenancy.html"}
{"source": "operations/performance-faq.html", "target": "../operations/basic-cluster-tuning.html"} {"source": "operations/performance-faq.html", "target": "../operations/basic-cluster-tuning.html"}
{"source": "querying/optimizations.html", "target": "multi-value-dimensions.html"} {"source": "querying/optimizations.html", "target": "multi-value-dimensions.html"}
{"source": "querying/searchqueryspec.html", "target": "searchquery.html"}
{"source": "tutorials/booting-a-production-cluster.html", "target": "cluster.html"} {"source": "tutorials/booting-a-production-cluster.html", "target": "cluster.html"}
{"source": "tutorials/examples.html", "target": "index.html"} {"source": "tutorials/examples.html", "target": "index.html"}
{"source": "tutorials/firewall.html", "target": "cluster.html"} {"source": "tutorials/firewall.html", "target": "cluster.html"}
@ -29,7 +29,7 @@
"dependencies/metadata-storage", "dependencies/metadata-storage",
"dependencies/zookeeper" "dependencies/zookeeper"
], ],
"Data ingestion": [ "Ingestion": [
"ingestion/index", "ingestion/index",
"ingestion/data-formats", "ingestion/data-formats",
"ingestion/schema-design", "ingestion/schema-design",
@ -57,28 +57,53 @@
],
"Querying": [
"querying/sql",
"querying/querying",
"querying/query-execution",
{
"type": "subcategory",
"label": "Concepts",
"ids": [
"querying/datasource",
"querying/joins",
"querying/lookups",
"querying/multi-value-dimensions",
"querying/multitenancy",
"querying/caching",
"querying/query-context"
]
},
{
"type": "subcategory",
"label": "Native query types",
"ids": [
"querying/timeseriesquery",
"querying/topnquery",
"querying/groupbyquery",
"querying/scan-query",
"querying/searchquery",
"querying/timeboundaryquery", "querying/timeboundaryquery",
"querying/segmentmetadataquery", "querying/segmentmetadataquery",
"querying/datasourcemetadataquery", "querying/datasourcemetadataquery"
"querying/searchquery",
"querying/select-query"
] ]
}, },
"querying/multi-value-dimensions", {
"querying/lookups", "type": "subcategory",
"querying/joins", "label": "Native query components",
"querying/multitenancy", "ids": [
"querying/caching", "querying/filters",
"development/geo" "querying/granularities",
"querying/dimensionspecs",
"querying/aggregations",
"querying/post-aggregations",
"misc/math-expr",
"querying/having",
"querying/limitspec",
"querying/topnmetricspec",
"querying/sorting-orders",
"querying/virtual-columns",
"development/geo"
]
}
],
"Configuration": [
"configuration/index",
@ -125,7 +150,6 @@
"development/experimental" "development/experimental"
], ],
"Misc": [ "Misc": [
"misc/math-expr",
"misc/papers-and-talks" "misc/papers-and-talks"
], ],
"Hidden": [ "Hidden": [
@ -175,20 +199,8 @@
"development/extensions-contrib/cloudfiles", "development/extensions-contrib/cloudfiles",
"development/extensions-contrib/distinctcount", "development/extensions-contrib/distinctcount",
"development/extensions-contrib/graphite", "development/extensions-contrib/graphite",
"querying/aggregations",
"querying/datasource",
"querying/dimensionspecs",
"querying/filters",
"querying/granularities",
"querying/having",
"querying/hll-old", "querying/hll-old",
"querying/limitspec", "querying/select-query",
"querying/post-aggregations",
"querying/query-context",
"querying/searchqueryspec",
"querying/sorting-orders",
"querying/topnmetricspec",
"querying/virtual-columns",
"development/extensions-contrib/influx", "development/extensions-contrib/influx",
"development/extensions-contrib/influxdb-emitter", "development/extensions-contrib/influxdb-emitter",
"development/extensions-contrib/kafka-emitter", "development/extensions-contrib/kafka-emitter",