---
id: tutorial-kafka
title: "Tutorial: Load streaming data from Apache Kafka"
sidebar_label: "Load from Apache Kafka"
---
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
## Getting started

This tutorial demonstrates how to load data into Apache Druid from a Kafka stream, using Druid's Kafka indexing service.

For this tutorial, we'll assume you've already downloaded Druid as described in
the [quickstart](index.md) using the `micro-quickstart` single-machine configuration and have it
running on your local machine. You don't need to have loaded any data yet.
## Download and start Kafka

[Apache Kafka](http://kafka.apache.org/) is a high-throughput message bus that works well with
Druid. For this tutorial, we will use Kafka 2.7.0. To download Kafka, issue the following
commands in your terminal:

```bash
curl -O https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0.tgz
tar -xzf kafka_2.13-2.7.0.tgz
cd kafka_2.13-2.7.0
```
Start ZooKeeper first with the following command:

```bash
./bin/zookeeper-server-start.sh config/zookeeper.properties
```

Start a Kafka broker by running the following command in a new terminal:

```bash
./bin/kafka-server-start.sh config/server.properties
```
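Before continuing, you can optionally confirm that the broker is up and reachable. This is a sketch of one way to check, assuming the default listener on `localhost:9092`:

```bash
# Optional sanity check: ask the broker for its supported API versions.
# If the broker is not up yet, this command will fail to connect.
./bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092
```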
Run this command to create a Kafka topic called *wikipedia*, to which we'll send data:

```bash
./bin/kafka-topics.sh --create --topic wikipedia --bootstrap-server localhost:9092
```
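To confirm the topic was created, you can list the topics known to the broker. This optional check assumes the broker is still running on `localhost:9092`:

```bash
# List topics on the local broker; "wikipedia" should appear in the output.
./bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```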
## Load data into Kafka

Let's launch a producer for our topic and send some data!

In your Druid directory, run the following command:

```bash
cd quickstart/tutorial
gunzip -c wikiticker-2015-09-12-sampled.json.gz > wikiticker-2015-09-12-sampled.json
```

In your Kafka directory, run the following command, where {PATH_TO_DRUID} is replaced by the path to the Druid directory:

```bash
export KAFKA_OPTS="-Dfile.encoding=UTF-8"
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic wikipedia < {PATH_TO_DRUID}/quickstart/tutorial/wikiticker-2015-09-12-sampled.json
```

The previous command posted sample events to the *wikipedia* Kafka topic.
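If you'd like to confirm the events actually landed in the topic before moving on, you can read a few of them back with the console consumer (run from the Kafka directory). This is an optional check, not part of the original tutorial flow:

```bash
# Read the first three events back from the "wikipedia" topic, then exit.
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic wikipedia --from-beginning --max-messages 3
```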
Now we will use Druid's Kafka indexing service to ingest messages from our newly created topic.
## Loading data with the data loader

Navigate to [localhost:8888](http://localhost:8888) and click `Load data` in the console header.

![Data loader init](../assets/tutorial-kafka-data-loader-01.png "Data loader init")

Select `Apache Kafka` and click `Connect data`.

![Data loader sample](../assets/tutorial-kafka-data-loader-02.png "Data loader sample")

Enter `localhost:9092` as the bootstrap server and `wikipedia` as the topic.

Click `Apply` and make sure that the data you are seeing is correct.

Once the data is located, you can click "Next: Parse data" to go to the next step.

![Data loader parse data](../assets/tutorial-kafka-data-loader-03.png "Data loader parse data")

The data loader will try to automatically determine the correct parser for the data.
In this case it will successfully determine `json`.
Feel free to play around with different parser options to get a preview of how Druid will parse your data.

With the `json` parser selected, click `Next: Parse time` to get to the step centered around determining your primary timestamp column.

![Data loader parse time](../assets/tutorial-kafka-data-loader-04.png "Data loader parse time")

Druid's architecture requires a primary timestamp column (internally stored in a column called `__time`).
If you do not have a timestamp in your data, select `Constant value`.
In our example, the data loader will determine that the `time` column in our raw data is the only candidate that can be used as the primary time column.

Click `Next: ...` twice to go past the `Transform` and `Filter` steps.
You do not need to enter anything in these steps, since applying ingestion-time transforms and filters is out of scope for this tutorial.

![Data loader schema](../assets/tutorial-kafka-data-loader-05.png "Data loader schema")

In the `Configure schema` step, you can configure which [dimensions](../ingestion/data-model.md#dimensions) and [metrics](../ingestion/data-model.md#metrics) will be ingested into Druid.
This is exactly how the data will appear in Druid once it is ingested.
Since our dataset is very small, go ahead and turn off [`Rollup`](../ingestion/rollup.md) by clicking on the switch and confirming the change.

Once you are satisfied with the schema, click `Next` to go to the `Partition` step, where you can fine-tune how the data will be partitioned into segments.

![Data loader partition](../assets/tutorial-kafka-data-loader-06.png "Data loader partition")

Here, you can adjust how the data will be split up into segments in Druid.
Since this is a small dataset, no adjustments need to be made in this step.

Click `Next: Tune` to go to the tuning step.

![Data loader tune](../assets/tutorial-kafka-data-loader-07.png "Data loader tune")

In the `Tune` step, it is *very important* to set `Use earliest offset` to `True`, since we want to consume the data from the start of the stream.
There are no other changes that need to be made here, so click `Next: Publish` to go to the `Publish` step.

![Data loader publish](../assets/tutorial-kafka-data-loader-08.png "Data loader publish")

Let's name this datasource `wikipedia-kafka`.

Finally, click `Next` to review your spec.

![Data loader spec](../assets/tutorial-kafka-data-loader-09.png "Data loader spec")

This is the spec you have constructed.
Feel free to go back and make changes in previous steps to see how changes will update the spec.
Similarly, you can also edit the spec directly and see it reflected in the previous steps.

Once you are satisfied with the spec, click `Submit` and an ingestion task will be created.

![Tasks view](../assets/tutorial-kafka-data-loader-10.png "Tasks view")

You will be taken to the task view with the focus on the newly created supervisor.

The task view is set to auto-refresh; wait until your supervisor launches a task.

When a task starts running, it will also start serving the data that it is ingesting.

Navigate to the `Datasources` view from the header.

![Datasource view](../assets/tutorial-kafka-data-loader-11.png "Datasource view")

When the `wikipedia-kafka` datasource appears here, it can be queried.

*Note:* if the datasource does not appear after a minute, you might not have set the supervisor to read from the start of the stream (in the `Tune` step).

At this point, you can go to the `Query` view to run SQL queries against the datasource.

Since this is a small dataset, you can simply run a `SELECT * FROM "wikipedia-kafka"` query to see your results.

![Query view](../assets/tutorial-kafka-data-loader-12.png "Query view")

Check out the [query tutorial](../tutorials/tutorial-query.md) to run some example queries on the newly loaded data.
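You can also run the same query outside the console through Druid's SQL HTTP API. The sketch below assumes the Router is listening on `localhost:8888`, as in the quickstart:

```bash
# Run a SQL query against the Router's SQL endpoint and print the JSON results.
curl -X POST -H 'Content-Type: application/json' \
  -d '{"query": "SELECT * FROM \"wikipedia-kafka\" LIMIT 5"}' \
  http://localhost:8888/druid/v2/sql
```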
### Submit a supervisor via the console

In the console, click `Submit supervisor` to open the submit supervisor dialog.

![Submit supervisor](../assets/tutorial-kafka-submit-supervisor-01.png "Submit supervisor")

Paste in this spec and click `Submit`.
```json
{
  "type": "kafka",
  "spec" : {
    "dataSchema": {
      "dataSource": "wikipedia",
      "timestampSpec": {
        "column": "time",
        "format": "auto"
      },
      "dimensionsSpec": {
        "dimensions": [
          "channel",
          "cityName",
          "comment",
          "countryIsoCode",
          "countryName",
          "isAnonymous",
          "isMinor",
          "isNew",
          "isRobot",
          "isUnpatrolled",
          "metroCode",
          "namespace",
          "page",
          "regionIsoCode",
          "regionName",
          "user",
          { "name": "added", "type": "long" },
          { "name": "deleted", "type": "long" },
          { "name": "delta", "type": "long" }
        ]
      },
      "metricsSpec" : [],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "NONE",
        "rollup": false
      }
    },
    "tuningConfig": {
      "type": "kafka",
      "reportParseExceptions": false
    },
    "ioConfig": {
      "topic": "wikipedia",
      "inputFormat": {
        "type": "json"
      },
      "replicas": 2,
      "taskDuration": "PT10M",
      "completionTimeout": "PT20M",
      "consumerProperties": {
        "bootstrap.servers": "localhost:9092"
      }
    }
  }
}
```
This will start the supervisor, which will in turn spawn some tasks that start listening for incoming data.
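To check on the supervisor after submitting it, you can also query the Overlord's supervisor API directly. This optional sketch assumes the Overlord is reachable on `localhost:8081`, as in the micro-quickstart:

```bash
# Ask the Overlord for the status of the "wikipedia" supervisor.
curl http://localhost:8081/druid/indexer/v1/supervisor/wikipedia/status
```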
### Submit a supervisor directly

To start the service directly, we will need to submit a supervisor spec to the Druid overlord by running the following from the Druid package root:

```bash
curl -XPOST -H'Content-Type: application/json' -d @quickstart/tutorial/wikipedia-kafka-supervisor.json http://localhost:8081/druid/indexer/v1/supervisor
```

If the supervisor was successfully created, you will get a response containing the ID of the supervisor; in our case we should see `{"id":"wikipedia"}`.
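As an optional check, you can also list the supervisors currently registered with the Overlord; the new ID should appear in the response:

```bash
# List the IDs of all supervisors currently known to the Overlord.
curl http://localhost:8081/druid/indexer/v1/supervisor
```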
For more details about what's going on here, check out the
[Druid Kafka indexing service documentation](../development/extensions-core/kafka-ingestion.md).

You can view the current supervisors and tasks in the Druid console: [http://localhost:8888/unified-console.html#tasks](http://localhost:8888/unified-console.html#tasks).
## Querying your data

After data is sent to the Kafka stream, it is immediately available for querying.

Please follow the [query tutorial](../tutorials/tutorial-query.md) to run some example queries on the newly loaded data.
## Cleanup

To go through any of the other ingestion tutorials, you will need to shut down the cluster and reset the cluster state by removing the contents of the `var` directory in the Druid home, as the other tutorials will write to the same "wikipedia" datasource.

You should additionally clear out any Kafka state. Do so by shutting down the Kafka broker with CTRL-C before stopping ZooKeeper and the Druid services, and then deleting the Kafka log directory at `/tmp/kafka-logs`:

```bash
rm -rf /tmp/kafka-logs
```
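Putting the cleanup steps together, a full reset might look like the sketch below. It assumes ZooKeeper used the default data directory from Kafka's `config/zookeeper.properties` (`/tmp/zookeeper`), that all services have already been stopped, and that you run it from the Druid home directory:

```bash
# Remove Kafka and ZooKeeper state from their default data directories.
rm -rf /tmp/kafka-logs /tmp/zookeeper
# Reset Druid's local state (segments, metadata, logs) in the Druid home.
rm -rf var
```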
## Further reading

For more information on loading data from Kafka streams, please see the [Druid Kafka indexing service documentation](../development/extensions-core/kafka-ingestion.md).