mirror of https://github.com/apache/druid.git
[Docs] updating transformation during ingestion tutorial (#16845)
* first major revision of tutorial
* more edits
* re-ID the file to reflect new content + redirect
* renaming file
* Apply suggestions from code review
Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
* addressing suggestions
* adding column names
* Update docs/tutorials/tutorial-transform.md
* Update docs/tutorials/tutorial-transform.md
* Addressing suggestions
* Apply suggestions from code review
Co-authored-by: Katya Macedo <38017980+ektravel@users.noreply.github.com>
* adding trademark logo and moving paragraph
* decided to shorten final paragraph
---------
Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
Co-authored-by: Benedict Jin <asdf2014@apache.org>
Co-authored-by: Katya Macedo <38017980+ektravel@users.noreply.github.com>
parent 4283b270e3
commit c968e73171

@@ -1,156 +0,0 @@
---
id: tutorial-transform-spec
title: Transform input data
sidebar_label: Transform input data
---

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->

This tutorial will demonstrate how to use transform specs to filter and transform input data during ingestion.

For this tutorial, we'll assume you've already downloaded Apache Druid as described in
the [single-machine quickstart](index.md) and have it running on your local machine.

It will also be helpful to have finished the [Load a file](../tutorials/tutorial-batch.md) and [Query data](../tutorials/tutorial-query.md) tutorials.

## Sample data

We've included sample data for this tutorial at `quickstart/tutorial/transform-data.json`, reproduced here for convenience:

```json
{"timestamp":"2018-01-01T07:01:35Z", "animal":"octopus", "location":1, "number":100}
{"timestamp":"2018-01-01T05:01:35Z", "animal":"mongoose", "location":2, "number":200}
{"timestamp":"2018-01-01T06:01:35Z", "animal":"snake", "location":3, "number":300}
{"timestamp":"2018-01-01T01:01:35Z", "animal":"lion", "location":4, "number":300}
```

## Load data with transform specs

We will ingest the sample data using the following spec, which demonstrates the use of transform specs:

```json
{
  "type" : "index_parallel",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "transform-tutorial",
      "timestampSpec": {
        "column": "timestamp",
        "format": "iso"
      },
      "dimensionsSpec" : {
        "dimensions" : [
          "animal",
          { "name": "location", "type": "long" }
        ]
      },
      "metricsSpec" : [
        { "type" : "count", "name" : "count" },
        { "type" : "longSum", "name" : "number", "fieldName" : "number" },
        { "type" : "longSum", "name" : "triple-number", "fieldName" : "triple-number" }
      ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "week",
        "queryGranularity" : "minute",
        "intervals" : ["2018-01-01/2018-01-03"],
        "rollup" : true
      },
      "transformSpec": {
        "transforms": [
          {
            "type": "expression",
            "name": "animal",
            "expression": "concat('super-', animal)"
          },
          {
            "type": "expression",
            "name": "triple-number",
            "expression": "number * 3"
          }
        ],
        "filter": {
          "type": "or",
          "fields": [
            { "type": "selector", "dimension": "animal", "value": "super-mongoose" },
            { "type": "selector", "dimension": "triple-number", "value": "300" },
            { "type": "selector", "dimension": "location", "value": "3" }
          ]
        }
      }
    },
    "ioConfig" : {
      "type" : "index_parallel",
      "inputSource" : {
        "type" : "local",
        "baseDir" : "quickstart/tutorial",
        "filter" : "transform-data.json"
      },
      "inputFormat" : {
        "type" : "json"
      },
      "appendToExisting" : false
    },
    "tuningConfig" : {
      "type" : "index_parallel",
      "partitionsSpec": {
        "type": "dynamic"
      },
      "maxRowsInMemory" : 25000
    }
  }
}
```

In the transform spec, we have two expression transforms:
* `animal`: prepends "super-" to the values in the `animal` column. Because the transform's name matches an existing column, it overrides that column with the transformed values; a variant that keeps both columns is sketched after this list.
* `triple-number`: multiplies the `number` column by 3, creating a new `triple-number` column. Note that we ingest both the original and the transformed column.
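
Because the first transform writes its output back to `animal`, the original values are not retained. If you wanted to keep both, you could give the transform a different output name and add that name to `dimensionsSpec` — a minimal sketch, with `super-animal` as a purely illustrative name:

```json
{ "type": "expression", "name": "super-animal", "expression": "concat('super-', animal)" }
```

Since `super-animal` doesn't collide with an input column, Druid would ingest the original `animal` column alongside the new one.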

Additionally, we have an OR filter with three clauses:
* `animal` values (after transformation) that match "super-mongoose"
* `triple-number` values that match 300
* `location` values that match 3

This filter selects the first three rows and excludes the final "lion" row of the input data. Note that the filter is applied after the transformations, so it can match on transformed values.

Let's submit this task now, which is included at `quickstart/tutorial/transform-index.json`:

```bash
bin/post-index-task --file quickstart/tutorial/transform-index.json --url http://localhost:8081
```

## Query the transformed data

Let's run `bin/dsql` and issue a `select * from "transform-tutorial";` query to see what was ingested:

```bash
dsql> select * from "transform-tutorial";
┌──────────────────────────┬────────────────┬───────┬──────────┬────────┬───────────────┐
│ __time                   │ animal         │ count │ location │ number │ triple-number │
├──────────────────────────┼────────────────┼───────┼──────────┼────────┼───────────────┤
│ 2018-01-01T05:01:00.000Z │ super-mongoose │     1 │        2 │    200 │           600 │
│ 2018-01-01T06:01:00.000Z │ super-snake    │     1 │        3 │    300 │           900 │
│ 2018-01-01T07:01:00.000Z │ super-octopus  │     1 │        1 │    100 │           300 │
└──────────────────────────┴────────────────┴───────┴──────────┴────────┴───────────────┘
Retrieved 3 rows in 0.03s.
```

The "lion" row has been discarded, the `animal` column has been transformed, and we have both the original `number` column and the derived `triple-number` column.
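
Since transformed columns are stored like ordinary columns, you can also filter on them at query time — for example, a quick check in `dsql` (output omitted):

```bash
dsql> select animal, "triple-number" from "transform-tutorial" where "triple-number" >= 600;
```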

@@ -0,0 +1,103 @@
---
id: tutorial-transform
title: Transform input data
sidebar_label: Transform input data
---

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->

This tutorial demonstrates how to transform input data during ingestion.

## Prerequisites

Before proceeding, download Apache Druid® as described in the [Quickstart (local)](index.md) and have it running on your local machine. You don't need to load any data into the Druid cluster.

You should be familiar with data querying in Druid. If you haven't already, go through the [Query data](../tutorials/tutorial-query.md) tutorial first.

## Sample data

For this tutorial, you use the following sample data:

```json
{"timestamp":"2018-01-01T07:01:35Z", "animal":"octopus", "location":1, "number":100}
{"timestamp":"2018-01-01T05:01:35Z", "animal":"mongoose", "location":2, "number":200}
{"timestamp":"2018-01-01T06:01:35Z", "animal":"snake", "location":3, "number":300}
{"timestamp":"2018-01-01T01:01:35Z", "animal":"lion", "location":4, "number":300}
```

## Transform data during ingestion

Load the sample dataset using the [`INSERT INTO`](../multi-stage-query/reference.md#insert) statement and the [`EXTERN`](../multi-stage-query/reference.md#extern-function) function to ingest the data inline. In the [Druid web console](../operations/web-console.md), go to the **Query** view and run the following query:

```sql
INSERT INTO "transform_tutorial"
WITH "ext" AS (
  SELECT *
  FROM TABLE(EXTERN('{"type":"inline","data":"{\"timestamp\":\"2018-01-01T07:01:35Z\",\"animal\":\"octopus\", \"location\":1, \"number\":100}\n{\"timestamp\":\"2018-01-01T05:01:35Z\",\"animal\":\"mongoose\", \"location\":2,\"number\":200}\n{\"timestamp\":\"2018-01-01T06:01:35Z\",\"animal\":\"snake\", \"location\":3, \"number\":300}\n{\"timestamp\":\"2018-01-01T01:01:35Z\",\"animal\":\"lion\", \"location\":4, \"number\":300}"}', '{"type":"json"}')) EXTEND ("timestamp" VARCHAR, "animal" VARCHAR, "location" BIGINT, "number" BIGINT)
)
SELECT
  TIME_PARSE("timestamp") AS "__time",
  TEXTCAT('super-', "animal") AS "animal",
  "location",
  "number",
  "number" * 3 AS "triple-number"
FROM "ext"
WHERE (TEXTCAT('super-', "animal") = 'super-mongoose' OR "location" = 3 OR "number" = 100)
PARTITIONED BY DAY
```

In the `SELECT` clause, you specify the following transformations:
* `animal`: prepends "super-" to the values in the `animal` column using the [`TEXTCAT`](../querying/sql-functions.md#textcat) function. Only the transformed values are ingested, since the output replaces the original `animal` column. An equivalent spelling with `CONCAT` is sketched after this list.
* `triple-number`: multiplies the `number` column by three and stores the result in a new column named `triple-number`. The query ingests both the original `number` column and the transformed `triple-number` column.
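
`TEXTCAT` concatenates exactly two strings; Druid SQL also provides the more general `CONCAT`, which accepts two or more arguments, so the same transform could be written either way. A quick way to sanity-check a string expression is a FROM-less query in the **Query** view — for example:

```sql
-- Both expressions return 'super-mongoose'
SELECT
  TEXTCAT('super-', 'mongoose') AS "textcat_result",
  CONCAT('super-', 'mongoose') AS "concat_result"
```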

Additionally, the `WHERE` clause combines three conditions with OR, so the query ingests only the rows for which at least one of the following is true:

* `TEXTCAT('super-', "animal")` matches "super-mongoose"
* `location` matches 3
* `number` matches 100

Once a row passes the filter, the ingestion job applies the transformations. In this example, the filter selects the first three rows because each meets at least one of the OR conditions. For those rows, the job ingests the transformed `animal` column, the `location` column, and both the original `number` and the derived `triple-number` columns. The "lion" row doesn't meet any of the conditions, so it isn't ingested or transformed.
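
Before ingesting, you can preview the filter and transformations by running only the `SELECT` portion of the query above (omitting `INSERT INTO` and `PARTITIONED BY`); because it reads external data, the console runs it as a task. The sketch below also filters on the raw `animal` column directly, which is equivalent to re-applying `TEXTCAT` in the `WHERE` clause:

```sql
-- Preview only: same source and transformations, no INSERT INTO / PARTITIONED BY
WITH "ext" AS (
  SELECT *
  FROM TABLE(EXTERN('{"type":"inline","data":"{\"timestamp\":\"2018-01-01T07:01:35Z\",\"animal\":\"octopus\", \"location\":1, \"number\":100}\n{\"timestamp\":\"2018-01-01T05:01:35Z\",\"animal\":\"mongoose\", \"location\":2,\"number\":200}\n{\"timestamp\":\"2018-01-01T06:01:35Z\",\"animal\":\"snake\", \"location\":3, \"number\":300}\n{\"timestamp\":\"2018-01-01T01:01:35Z\",\"animal\":\"lion\", \"location\":4, \"number\":300}"}', '{"type":"json"}')) EXTEND ("timestamp" VARCHAR, "animal" VARCHAR, "location" BIGINT, "number" BIGINT)
)
SELECT
  TIME_PARSE("timestamp") AS "__time",
  TEXTCAT('super-', "animal") AS "animal",
  "location",
  "number",
  "number" * 3 AS "triple-number"
FROM "ext"
-- Filtering on the raw column is equivalent to TEXTCAT('super-', "animal") = 'super-mongoose'
WHERE "animal" = 'mongoose' OR "location" = 3 OR "number" = 100
```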

## Query the transformed data

In the web console, open a new tab in the **Query** view. Run the following query to view the ingested data:

```sql
SELECT * FROM "transform_tutorial"
```

The query returns the following results:

| `__time` | `animal` | `location` | `number` | `triple-number` |
| -- | -- | -- | -- | -- |
| `2018-01-01T05:01:35.000Z` | `super-mongoose` | `2` | `200` | `600` |
| `2018-01-01T06:01:35.000Z` | `super-snake` | `3` | `300` | `900` |
| `2018-01-01T07:01:35.000Z` | `super-octopus` | `1` | `100` | `300` |

Notice that the "lion" row is missing and that the three ingested rows have the transformations applied to them.
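
The transformed columns behave like any other columns at query time, so you can filter on them directly — for example, to return only the rows whose derived value exceeds 500:

```sql
SELECT "animal", "number", "triple-number"
FROM "transform_tutorial"
WHERE "triple-number" > 500
```

Based on the results above, this returns the "super-mongoose" and "super-snake" rows.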

## Learn more

See the following topics for more information:

* [All functions](../querying/sql-functions.md) for a list of functions you can use to transform data.
* [Transform spec reference](../ingestion/ingestion-spec.md#transformspec) to learn more about transforms in JSON-based batch ingestion.
* [WHERE clause](../querying/sql.md#where) to learn how to specify filters in Druid SQL.

@@ -20,6 +20,10 @@

const Redirects=[
  {
    "from": "/docs/latest/tutorials/tutorial-transform-spec",
    "to": "/docs/latest/tutorials/tutorial-transform"
  },
  {
    "from": "/docs/latest/development/extensions-core/kafka-extraction-namespace/",
    "to": "/docs/latest/querying/kafka-extraction-namespace"

@@ -18,7 +18,7 @@
"tutorials/tutorial-kafka",
|
||||
"tutorials/tutorial-rollup",
|
||||
"tutorials/tutorial-ingestion-spec",
|
||||
"tutorials/tutorial-transform-spec",
|
||||
"tutorials/tutorial-transform",
|
||||
"tutorials/tutorial-msq-convert-spec"
|
||||
]},
|
||||
{"type": "category",
|
||||
|
|