---
id: tutorial-msq-convert-spec
title: "Tutorial: Convert an ingestion spec for SQL-based ingestion"
sidebar_label: Convert ingestion spec to SQL
description: How to convert an ingestion spec to a query for SQL-based ingestion in the web console.
---
This page describes SQL-based batch ingestion using the `druid-multi-stage-query` extension, new in Druid 24.0. Refer to the ingestion methods table to determine which ingestion method is right for you.
If you're already ingesting data with native batch ingestion, you can use the web console to convert the ingestion spec to a SQL query that the multi-stage query task engine can use to ingest data. This tutorial demonstrates how to perform that conversion in the web console.
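Generated queries follow the same general shape: a `REPLACE` (or `INSERT`) statement that reads external data through the `EXTERN` table function and declares segment granularity with `PARTITIONED BY`. The following minimal sketch illustrates that shape; the table name, input URI, and columns are placeholders, not part of the tutorial's sample data:

```sql
-- Minimal sketch of a SQL-based ingestion query. "my_table", the input URI,
-- and the column names below are illustrative placeholders.
REPLACE INTO "my_table" OVERWRITE ALL
SELECT
  TIME_PARSE("timestamp") AS __time,  -- Druid tables require a __time column
  "some_dimension"
FROM TABLE(
  EXTERN(
    '{"type":"http","uris":["https://example.com/data.json.gz"]}', -- input source
    '{"type":"json"}',                                             -- input format
    '[{"name":"timestamp","type":"string"},{"name":"some_dimension","type":"string"}]' -- row signature
  )
)
PARTITIONED BY DAY
```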
To convert the ingestion spec to a SQL query, do the following:
1. In the **Query** view of the web console, navigate to the menu bar that includes **Run**.
2. Click the ellipsis icon and select **Convert ingestion spec to SQL**.
3. In the **Ingestion spec to convert** window, insert your ingestion spec. You can use your own spec or the sample ingestion spec provided in the tutorial. The sample spec uses data hosted at `https://druid.apache.org/data/wikipedia.json.gz` and loads it into a table named `wikipedia`:

   <details><summary>Show the spec</summary>

   ```json
   {
     "type": "index_parallel",
     "spec": {
       "ioConfig": {
         "type": "index_parallel",
         "inputSource": {
           "type": "http",
           "uris": ["https://druid.apache.org/data/wikipedia.json.gz"]
         },
         "inputFormat": {
           "type": "json"
         }
       },
       "tuningConfig": {
         "type": "index_parallel",
         "partitionsSpec": {
           "type": "dynamic"
         }
       },
       "dataSchema": {
         "dataSource": "wikipedia",
         "timestampSpec": {
           "column": "timestamp",
           "format": "iso"
         },
         "dimensionsSpec": {
           "dimensions": [
             "isRobot",
             "channel",
             "flags",
             "isUnpatrolled",
             "page",
             "diffUrl",
             { "type": "long", "name": "added" },
             "comment",
             { "type": "long", "name": "commentLength" },
             "isNew",
             "isMinor",
             { "type": "long", "name": "delta" },
             "isAnonymous",
             "user",
             { "type": "long", "name": "deltaBucket" },
             { "type": "long", "name": "deleted" },
             "namespace",
             "cityName",
             "countryName",
             "regionIsoCode",
             "metroCode",
             "countryIsoCode",
             "regionName"
           ]
         },
         "granularitySpec": {
           "queryGranularity": "none",
           "rollup": false,
           "segmentGranularity": "day"
         }
       }
     }
   }
   ```

   </details>
4. Click **Submit** to submit the spec. The web console uses the JSON-based ingestion spec to generate a SQL query that you can use instead. This is what the query looks like for the sample ingestion spec:

   <details><summary>Show the query</summary>

   ```sql
   -- This SQL query was auto generated from an ingestion spec
   REPLACE INTO wikipedia OVERWRITE ALL
   WITH source AS (SELECT * FROM TABLE(
     EXTERN(
       '{"type":"http","uris":["https://druid.apache.org/data/wikipedia.json.gz"]}',
       '{"type":"json"}',
       '[{"name":"timestamp","type":"string"},{"name":"isRobot","type":"string"},{"name":"channel","type":"string"},{"name":"flags","type":"string"},{"name":"isUnpatrolled","type":"string"},{"name":"page","type":"string"},{"name":"diffUrl","type":"string"},{"name":"added","type":"long"},{"name":"comment","type":"string"},{"name":"commentLength","type":"long"},{"name":"isNew","type":"string"},{"name":"isMinor","type":"string"},{"name":"delta","type":"long"},{"name":"isAnonymous","type":"string"},{"name":"user","type":"string"},{"name":"deltaBucket","type":"long"},{"name":"deleted","type":"long"},{"name":"namespace","type":"string"},{"name":"cityName","type":"string"},{"name":"countryName","type":"string"},{"name":"regionIsoCode","type":"string"},{"name":"metroCode","type":"string"},{"name":"countryIsoCode","type":"string"},{"name":"regionName","type":"string"}]'
     )
   ))
   SELECT
     TIME_PARSE("timestamp") AS __time,
     "isRobot",
     "channel",
     "flags",
     "isUnpatrolled",
     "page",
     "diffUrl",
     "added",
     "comment",
     "commentLength",
     "isNew",
     "isMinor",
     "delta",
     "isAnonymous",
     "user",
     "deltaBucket",
     "deleted",
     "namespace",
     "cityName",
     "countryName",
     "regionIsoCode",
     "metroCode",
     "countryIsoCode",
     "regionName"
   FROM source
   PARTITIONED BY DAY
   ```

   </details>
5. Review the generated SQL query to make sure it matches your requirements and does what you expect.
6. Click **Run** to start the ingestion.
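Once the ingestion task succeeds, the new table is queryable from the same **Query** view. As a quick sanity check, you can run a query such as the following; this assumes you ingested the tutorial's sample `wikipedia` table, so adjust the table and column names for your own data:

```sql
-- Count rows per channel in the newly ingested table.
SELECT
  "channel",
  COUNT(*) AS "rows"
FROM "wikipedia"
GROUP BY "channel"
ORDER BY "rows" DESC
LIMIT 10
```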