mirror of https://github.com/apache/druid.git
Merge pull request #380 from agile/data_format_docs
Link Data Formats document and format spec references
commit 2d7af555d0
@@ -1,8 +1,9 @@
 ---
 layout: doc_page
 ---
-Data Formats for Ingestion
-==========================
+# Data Formats for Ingestion
 
 Druid can ingest data in JSON, CSV, or TSV. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest CSV or TSV data.
 
+## Formatting the Data
+
@@ -10,7 +11,7 @@ The following are three samples of the data used in the [Wikipedia example](Tuto
 
 _JSON_
 
-```
+```json
 {"timestamp": "2013-08-31T01:02:33Z", "page": "Gypsy Danger", "language" : "en", "user" : "nuclear", "unpatrolled" : "true", "newPage" : "true", "robot": "false", "anonymous": "false", "namespace":"article", "continent":"North America", "country":"United States", "region":"Bay Area", "city":"San Francisco", "added": 57, "deleted": 200, "delta": -143}
 {"timestamp": "2013-08-31T03:32:45Z", "page": "Striker Eureka", "language" : "en", "user" : "speed", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Australia", "country":"Australia", "region":"Cantebury", "city":"Syndey", "added": 459, "deleted": 129, "delta": 330}
 {"timestamp": "2013-08-31T07:11:21Z", "page": "Cherno Alpha", "language" : "ru", "user" : "masterYi", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"article", "continent":"Asia", "country":"Russia", "region":"Oblast", "city":"Moscow", "added": 123, "deleted": 12, "delta": 111}
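The CSV and TSV renderings of these same events live in a part of the file this diff does not touch, so they do not appear here. As a sketch of how one of the JSON records above flattens into a delimited row once the field names are dropped — the column order below is an assumption chosen for illustration; the real sample files fix their own:

```python
import csv
import io
import json

# The first sample record from the hunk above (trimmed to a few fields).
record = json.loads(
    '{"timestamp": "2013-08-31T01:02:33Z", "page": "Gypsy Danger", '
    '"language": "en", "user": "nuclear", "added": 57, "deleted": 200, '
    '"delta": -143}'
)

# Assumed column order for illustration only.
columns = ["timestamp", "page", "language", "user", "added", "deleted", "delta"]

for delim in (",", "\t"):  # CSV row first, then the TSV equivalent
    buf = io.StringIO()
    csv.writer(buf, delimiter=delim).writerow([record[c] for c in columns])
    print(buf.getvalue(), end="")
```

Once the names are gone, only position identifies a field — which is why the note below on the missing column heads matters.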
@@ -43,7 +44,7 @@ Note that the CSV and TSV data do not contain column heads. This becomes importa
 ## Configuring Ingestion For the Indexing Service
 
 If you use the [indexing service](Indexing-Service.html) for ingesting the data, a [task](Tasks.html) must be configured and submitted. Tasks are configured with a JSON object which, among other things, specifies the data source and type. In the Wikipedia example, JSON data was read from a local file. The task spec contains a firehose element to specify this:
 
-...
+```json
 "firehose" : {
     "type" : "local",
     "baseDir" : "examples/indexing",
@@ -58,13 +59,13 @@ If you use the [indexing service](Indexing-Service.html) for ingesting the data,
         }
     }
 }
-...
+```
 
 Specified here are the location of the datafile, the timestamp column, the format of the data, and the columns that will become dimensions in Druid.
 
 Since the CSV data does not contain the column names, they will have to be added before that data can be processed:
 
-...
+```json
 "firehose" : {
     "type" : "local",
     "baseDir" : "examples/indexing/",
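The column declaration this CSV spec adds matters because CSV and TSV fields are purely positional: the declared list is what reattaches names to the values before Druid can pick out the timestamp and dimensions. A minimal sketch of that positional mapping, using a shortened, hypothetical column list (the real spec in this file declares the full set, in file order):

```python
import csv

# Hypothetical, shortened column list; the actual spec must name every
# column in exactly the order the fields appear in the data file.
columns = ["timestamp", "page", "language", "user", "added", "deleted", "delta"]

with open("examples/indexing/wikipedia_data.csv", newline="") as f:
    for values in csv.reader(f):
        row = dict(zip(columns, values))  # names applied by position
        print(row["timestamp"], row["page"], row["delta"])
```

A misordered column list silently mislabels every field, so the list must match the file, not merely contain the same names.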
@@ -80,7 +81,7 @@ Since the CSV data does not contain the column names, they will have to be added
         }
     }
 }
-...
+```
 
 Note also that the filename extension and the data type were changed to "csv". For the TSV data, the same changes are made but with "tsv" for the filename extension and the data type.
 
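For the "configured and submitted" step these firehose snippets belong to: once the task JSON is assembled, it is POSTed to the indexing service's task endpoint. A minimal sketch, assuming an overlord reachable at localhost:8090 — the host and port are assumptions, and everything in the task body other than the firehose element shown in the diff above is a hypothetical skeleton:

```python
import json
import urllib.request

# Skeleton index task; only the firehose element mirrors the doc's example.
task = {
    "type": "index",
    "dataSource": "wikipedia",
    "firehose": {
        "type": "local",
        "baseDir": "examples/indexing",
        "filter": "wikipedia_data.json",
    },
}

req = urllib.request.Request(
    "http://localhost:8090/druid/indexer/v1/task",  # assumed overlord address
    data=json.dumps(task).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # the service replies with the task id
```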
@@ -28,6 +28,7 @@ h2. Data Ingestion
 * "Batch":./Batch-ingestion.html
 * "Indexing Service":./Indexing-Service.html
 ** "Tasks":./Tasks.html
+* "Data Formats":./Data_formats.html
 * "Ingestion FAQ":./Ingestion-FAQ.html
 
 h2. Querying