{
"cells": [
{
"cell_type": "markdown",
"id": "ad4e60b6",
"metadata": {
"tags": []
},
"source": [
"# Learn the basics of Druid SQL\n",
"\n",
"Apache Druid supports two query languages: Druid SQL and native queries.\n",
"Druid SQL is a Structured Query Language (SQL) dialect that enables you to query datasources in Apache Druid using SQL statements.\n",
"Druid SQL and standard SQL use similar syntax, with some notable differences.\n",
"Not all SQL functions are supported in Druid SQL. In their place, Druid includes Druid-specific SQL functions for optimized query performance.\n",
"\n",
"This interactive tutorial introduces you to the unique aspects of Druid SQL, with a primary focus on the SELECT statement.\n",
"To learn about native queries, see [Native queries](https://druid.apache.org/docs/latest/querying/querying.html)."
]
},
{
"cell_type": "markdown",
"id": "8d6bbbcb",
"metadata": {
"tags": []
},
"source": [
"## Prerequisites\n",
"\n",
"This tutorial works with Druid 25.0.0 or later.\n",
"\n",
"Launch this tutorial and all prerequisites using the `druid-jupyter` or `all-services` profiles of the Docker Compose file for Jupyter-based Druid tutorials. For more information, see [Docker for Jupyter Notebook tutorials](https://druid.apache.org/docs/latest/tutorials/tutorial-jupyter-docker.html).\n",
"\n",
"If you do not use the Docker Compose environment, you need the following:\n",
"\n",
"* A running Druid instance.\n",
" Update the `druid_host` variable to point to your Router endpoint. For example:\n",
" ```\n",
" druid_host = \"http://localhost:8888\"\n",
" ```\n",
"* The [Druid Python API](https://github.com/apache/druid/blob/master/examples/quickstart/jupyter-notebooks/) to simplify access to Druid.\n",
"\n",
"It will also help to have a working knowledge of SQL.\n",
"\n",
"To start the tutorial, run the following cell. It imports the required Python packages and defines one variable for the Druid client and another for the SQL client used to run SQL commands."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7f08a52",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import druidapi\n",
"\n",
"# druid_host is the hostname and port for your Druid deployment. \n",
"# In the Docker Compose tutorial environment, this is the Router\n",
"# service running at \"http://router:8888\".\n",
"# If you are not using the Docker Compose environment, edit the value of `druid_host`.\n",
"\n",
"druid_host = \"http://router:8888\"\n",
"\n",
"druid = druidapi.jupyter_client(druid_host)\n",
"display = druid.display\n",
"sql_client = druid.sql"
]
},
{
"cell_type": "markdown",
"id": "e893ef7d-7136-442f-8bd9-31b5a5276518",
"metadata": {},
"source": [
"## Druid SQL statements\n",
"\n",
"The following are the main Druid SQL statements:\n",
"\n",
"* SELECT: extract data from a datasource\n",
"* INSERT INTO: create a new datasource or append to an existing datasource\n",
"* REPLACE INTO: create a new datasource or overwrite data in an existing datasource\n",
"\n",
"Druid SQL does not support CREATE TABLE, DELETE, or DROP TABLE statements.\n",
"\n",
"## Ingest data\n",
"\n",
"You can use either INSERT INTO or REPLACE INTO to create a datasource and ingest data.\n",
"INSERT INTO and REPLACE INTO statements both require the PARTITIONED BY clause, which defines the granularity of time-based partitioning. For more information, see [Partitioning by time](https://druid.apache.org/docs/latest/multi-stage-query/concepts.html#partitioning-by-time).\n",
"\n",
"Run the following cell to ingest data from an external source into a table called `wikipedia-sql-tutorial`. \n",
"If you already have a table with the same name, use REPLACE INTO instead of INSERT INTO.\n",
"\n",
"Note the following about the query to ingest data:\n",
"- The query uses the TIME_PARSE function to parse ISO 8601 time strings into timestamps. See the section on [timestamp values](#timestamp-values) for more information.\n",
"- The asterisk ( * ) tells Druid to ingest all the columns.\n",
"- The EXTERN statement lets you define the data source type and the input schema. See [Read external data with EXTERN](https://druid.apache.org/docs/latest/multi-stage-query/concepts.html#read-external-data-with-extern) for more information.\n",
"\n",
"The following cell defines the query, uses the multi-stage query (MSQ) task engine to ingest the data, and waits for the ingestion task to complete. You will see an asterisk `[*]` in the left margin while the task runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "045f782c-74d8-4447-9487-529071812b51",
"metadata": {},
"outputs": [],
"source": [
"sql = '''\n",
"INSERT INTO \"wikipedia-sql-tutorial\" \n",
"SELECT TIME_PARSE(\"timestamp\") AS __time, * \n",
"FROM TABLE (EXTERN(\n",
" '{\"type\": \"http\", \"uris\": [\"https://druid.apache.org/data/wikipedia.json.gz\"]}',\n",
" '{\"type\": \"json\"}', \n",
" '[{\"name\": \"added\", \"type\": \"long\"}, {\"name\": \"channel\", \"type\": \"string\"}, {\"name\": \"cityName\", \"type\": \"string\"}, {\"name\": \"comment\", \"type\": \"string\"}, {\"name\": \"commentLength\", \"type\": \"long\"}, {\"name\": \"countryIsoCode\", \"type\": \"string\"}, {\"name\": \"countryName\", \"type\": \"string\"}, {\"name\": \"deleted\", \"type\": \"long\"}, {\"name\": \"delta\", \"type\": \"long\"}, {\"name\": \"deltaBucket\", \"type\": \"string\"}, {\"name\": \"diffUrl\", \"type\": \"string\"}, {\"name\": \"flags\", \"type\": \"string\"}, {\"name\": \"isAnonymous\", \"type\": \"string\"}, {\"name\": \"isMinor\", \"type\": \"string\"}, {\"name\": \"isNew\", \"type\": \"string\"}, {\"name\": \"isRobot\", \"type\": \"string\"}, {\"name\": \"isUnpatrolled\", \"type\": \"string\"}, {\"name\": \"metroCode\", \"type\": \"string\"}, {\"name\": \"namespace\", \"type\": \"string\"}, {\"name\": \"page\", \"type\": \"string\"}, {\"name\": \"regionIsoCode\", \"type\": \"string\"}, {\"name\": \"regionName\", \"type\": \"string\"}, {\"name\": \"timestamp\", \"type\": \"string\"}, {\"name\": \"user\", \"type\": \"string\"}]'\n",
" ))\n",
"PARTITIONED BY DAY\n",
"'''\n",
"sql_client.run_task(sql)"
]
},
{
"cell_type": "markdown",
"id": "a141e962",
"metadata": {},
"source": [
"MSQ reports task completion as soon as ingestion is done. However, it takes a while for Druid to load the resulting segments. Wait for the table to become ready."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cca15307",
"metadata": {},
"outputs": [],
"source": [
"sql_client.wait_until_ready('wikipedia-sql-tutorial')"
]
},
{
"cell_type": "markdown",
"id": "240b0ad5-48f2-4737-b12b-5fd5f98da300",
"metadata": {},
"source": [
"## Datasources\n",
"\n",
"Druid supports a variety of datasources, with the table datasource being the most common. In Druid documentation, the word \"datasource\" often implicitly refers to the table datasource.\n",
"The [Datasources](https://druid.apache.org/docs/latest/querying/datasource.html) topic provides a comprehensive overview of datasources supported by Druid SQL.\n",
"\n",
"In Druid SQL, table datasources reside in the `druid` schema. This is the default schema, so table datasources can be referenced as either `druid.DATASOURCE_NAME` or `DATASOURCE_NAME`.\n",
"\n",
"For example, run the next cell to return the rows of the column named `channel` from the `wikipedia-sql-tutorial` table. Because this tutorial is running in Jupyter, the cells use the LIMIT clause to limit the size of the query results for display purposes. The cell uses the built-in table formatting feature of the Python API. You can also retrieve the values as a Python object if you wish to perform additional processing."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e5d8de0",
"metadata": {},
"outputs": [],
"source": [
"sql = '''\n",
"SELECT \"channel\" FROM \"wikipedia-sql-tutorial\" LIMIT 7\n",
"'''\n",
"display.sql(sql)"
]
},
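{
"cell_type": "markdown",
"id": "3c1f29a4",
"metadata": {},
"source": [
"Because `druid` is the default schema, you can also qualify the table name explicitly. Run the next cell to verify that the schema-qualified form returns the same rows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e7d0b8c",
"metadata": {},
"outputs": [],
"source": [
"sql = '''\n",
"SELECT \"channel\" FROM \"druid\".\"wikipedia-sql-tutorial\" LIMIT 7\n",
"'''\n",
"display.sql(sql)"
]
},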
{
"cell_type": "markdown",
"id": "cbeb5a63",
"metadata": {
"tags": []
},
"source": [
"## Data types\n",
"\n",
"Druid maps SQL data types onto native types at query runtime.\n",
"The following native types are supported for Druid columns:\n",
"\n",
"* SQL `VARCHAR` (Druid `STRING`): UTF-8 encoded strings and string arrays\n",
"* SQL `BIGINT` (Druid `LONG`): 64-bit signed integer\n",
"* SQL and Druid `FLOAT`: 32-bit float\n",
"* SQL and Druid `DOUBLE`: 64-bit float\n",
"* Druid `COMPLEX`: represents non-standard data types, such as nested JSON, hyperUnique and approxHistogram aggregators, and DataSketches aggregators\n",
"\n",
"For reference on how SQL data types map onto Druid native types, see [Standard types](https://druid.apache.org/docs/latest/querying/sql-data-types.html#standard-types).\n",
"\n",
"Druid exposes table and column metadata through [INFORMATION_SCHEMA](https://druid.apache.org/docs/latest/querying/sql-metadata-tables.html#information-schema) tables. Run the following query to retrieve metadata for the `wikipedia-sql-tutorial` datasource. Each row in the result corresponds to a column in the table.\n",
"Check the `DATA_TYPE` values for the SQL data types. You should see the TIMESTAMP, BIGINT, and VARCHAR SQL data types."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7a86e2e",
"metadata": {},
"outputs": [],
"source": [
"sql = '''\n",
"SELECT COLUMN_NAME, DATA_TYPE \n",
"FROM INFORMATION_SCHEMA.COLUMNS \n",
"WHERE \"TABLE_SCHEMA\" = 'druid' AND \"TABLE_NAME\" = 'wikipedia-sql-tutorial' \n",
"LIMIT 7\n",
"'''\n",
"display.sql(sql)"
]
},
{
"cell_type": "markdown",
"id": "59f41229",
"metadata": {},
"source": [
"This is such a common query that the SQL client has it built in:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ac6c410",
"metadata": {},
"outputs": [],
"source": [
"display.table('wikipedia-sql-tutorial')"
]
},
{
"cell_type": "markdown",
"id": "c59ca797-dd91-442b-8d02-67b711b3fcc6",
"metadata": {},
"source": [
"### Timestamp values\n",
"\n",
"Druid stores timestamp values as the number of milliseconds since the Unix epoch.\n",
"Primary timestamps are stored in a column named `__time`.\n",
"If a dataset doesn't have a timestamp, Druid uses the default value of `1970-01-01 00:00:00`.\n",
"\n",
"Druid time functions perform best when used with the `__time` column.\n",
"By default, time functions use the UTC time zone.\n",
"For more information about timestamp handling, see [Date and time functions](https://druid.apache.org/docs/latest/querying/sql-scalar.html#date-and-time-functions).\n",
"\n",
"Run the following cell to see a time function at work. This example uses the TIME_IN_INTERVAL function to query the `channel` and `page` columns of the `wikipedia-sql-tutorial` table for rows whose timestamp falls within the specified interval. The cell groups the results by both columns."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16c1a31a",
"metadata": {},
"outputs": [],
"source": [
"sql = '''\n",
"SELECT channel, page\n",
"FROM \"wikipedia-sql-tutorial\" \n",
"WHERE TIME_IN_INTERVAL(__time, '2016-06-27T00:05:54.56/2016-06-27T00:06:53')\n",
"GROUP BY channel, page\n",
"LIMIT 7\n",
"'''\n",
"display.sql(sql)"
]
},
{
"cell_type": "markdown",
"id": "f7cfdfae-ccba-49ba-a70f-63d0bd3527b2",
"metadata": {},
"source": [
"### NULL values\n",
"\n",
"Druid supports SQL-compatible NULL handling, allowing string columns to distinguish empty strings from NULL and numeric columns to contain NULL rows. To store and query data in SQL-compatible mode, explicitly set the `useDefaultValueForNull` property to `false` in `_common/common.runtime.properties`. See [Configuration reference](https://druid.apache.org/docs/latest/configuration/index.html) for common configuration properties.\n",
"\n",
"When `useDefaultValueForNull` is set to `true` (the default), Druid stores NULL values as `0` for numeric columns and as `''` for string columns."
]
},
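{
"cell_type": "markdown",
"id": "7f3b9c1a",
"metadata": {},
"source": [
"To observe the effect of the NULL-handling mode on this dataset, run the following cell. It compares the counts of NULL and empty-string values in the `cityName` column using FILTER clauses on the aggregations; which of the two counts dominates depends on the `useDefaultValueForNull` setting of your cluster."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2d8e4f6b",
"metadata": {},
"outputs": [],
"source": [
"# Compare NULL rows to empty-string rows; the result depends on\n",
"# the useDefaultValueForNull setting of your Druid deployment.\n",
"sql = '''\n",
"SELECT\n",
"  COUNT(*) FILTER (WHERE \"cityName\" IS NULL) AS \"null_rows\",\n",
"  COUNT(*) FILTER (WHERE \"cityName\" = '') AS \"empty_rows\"\n",
"FROM \"wikipedia-sql-tutorial\"\n",
"'''\n",
"display.sql(sql)"
]
},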
{
"cell_type": "markdown",
"id": "29c24856",
"metadata": {
"tags": []
},
"source": [
"## SELECT statement syntax\n",
"\n",
"Druid SQL supports SELECT statements with the following structure:\n",
"\n",
"```sql\n",
"[ EXPLAIN PLAN FOR ]\n",
"[ WITH tableName [ ( column1, column2, ... ) ] AS ( query ) ]\n",
"SELECT [ ALL | DISTINCT ] { * | exprs }\n",
"FROM {