---
layout: doc_page
---
# Tasks

Tasks are run on middle managers and always operate on a single data source. Tasks are submitted using [POST requests](Indexing-Service.html).

There are several different types of tasks.
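As a minimal sketch of submission (the overlord host, port, and file name here are illustrative assumptions; check your deployment's configuration), a task spec like any of those below can be POSTed as follows:

```shell
# Write a minimal noop task spec (illustrative; any task spec on this page works).
cat > /tmp/noop_task.json <<'EOF'
{
  "type" : "noop"
}
EOF

# Sanity-check that the spec is valid JSON before submitting.
python3 -m json.tool /tmp/noop_task.json

# Submit the task to the overlord (uncomment to actually run;
# localhost:8090 is an assumed overlord address):
# curl -X POST -H 'Content-Type: application/json' \
#   -d @/tmp/noop_task.json http://localhost:8090/druid/indexer/v1/task
```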
Segment Creation Tasks
----------------------

### Index Task

The Index Task is a simpler variation of the Index Hadoop task that is designed to be used for smaller data sets. The task executes within the indexing service and does not require an external Hadoop setup to use. The grammar of the index task is as follows:
```json
{
  "type" : "index",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "wikipedia",
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "format" : "json",
          "timestampSpec" : {
            "column" : "timestamp",
            "format" : "auto"
          },
          "dimensionsSpec" : {
            "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
            "dimensionExclusions" : [],
            "spatialDimensions" : []
          }
        }
      },
      "metricsSpec" : [
        {
          "type" : "count",
          "name" : "count"
        },
        {
          "type" : "doubleSum",
          "name" : "added",
          "fieldName" : "added"
        },
        {
          "type" : "doubleSum",
          "name" : "deleted",
          "fieldName" : "deleted"
        },
        {
          "type" : "doubleSum",
          "name" : "delta",
          "fieldName" : "delta"
        }
      ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "DAY",
        "queryGranularity" : "NONE",
        "intervals" : [ "2013-08-31/2013-09-01" ]
      }
    },
    "ioConfig" : {
      "type" : "index",
      "firehose" : {
        "type" : "local",
        "baseDir" : "examples/indexing/",
        "filter" : "wikipedia_data.json"
      }
    },
    "tuningConfig" : {
      "type" : "index",
      "targetPartitionSize" : -1,
      "rowFlushBoundary" : 0,
      "numShards" : 1
    }
  }
}
```
#### Task Properties

|property|description|required?|
|--------|-----------|---------|
|type|The task type, this should always be "index".|yes|
|id|The task ID. If this is not explicitly specified, Druid generates the task ID using the name of the task file and a date-time stamp.|no|
|spec|The ingestion spec. See below for more details.|yes|

#### DataSchema

This field is required. See [Ingestion](Ingestion.html).
#### IOConfig

This field is required. You can specify a type of [Firehose](Firehose.html) here.

#### TuningConfig

The tuningConfig is optional and default parameters will be used if no tuningConfig is specified. See below for more details.

|property|description|default|required?|
|--------|-----------|-------|---------|
|type|The task type, this should always be "index".|None.|yes|
|targetPartitionSize|Used in sharding. Determines how many rows are in each segment. Set this to -1 to use numShards instead for sharding.|5000000|no|
|rowFlushBoundary|Used in determining when intermediate persists to disk should occur.|500000|no|
|numShards|Directly specify the number of shards to create. You can skip the intermediate persist step if you specify the number of shards you want and set targetPartitionSize=-1.|null|no|
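For example, a tuningConfig that lets Druid determine the shard count from a target partition size (the values shown are simply the defaults from the table above) might look like:

```json
"tuningConfig" : {
  "type" : "index",
  "targetPartitionSize" : 5000000,
  "rowFlushBoundary" : 500000
}
```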

### Index Hadoop Task

The Hadoop Index Task is used to index larger data sets that require the parallelization and processing power of a Hadoop cluster.

```json
{
  "type" : "index_hadoop",
  "spec" : <Hadoop index spec>
}
```
|property|description|required?|
|--------|-----------|---------|
|type|The task type, this should always be "index_hadoop".|yes|
|spec|A Hadoop Index Spec. See [Batch Ingestion](Batch-ingestion.html).|yes|
|hadoopCoordinates|The Maven \<groupId\>:\<artifactId\>:\<version\> of Hadoop to use. The default is "org.apache.hadoop:hadoop-client:2.3.0".|no|

The Hadoop Index Config submitted as part of a Hadoop Index Task is identical to the Hadoop Index Config used by the `HadoopBatchIndexer`, except that three fields must be omitted: `segmentOutputPath`, `workingPath`, and `updaterJobSpec`. The Indexing Service sets these fields internally.
#### Using your own Hadoop distribution
Druid is compiled against Apache hadoop-client 2.3.0. However, if you happen to use a different flavor of Hadoop that is API-compatible with hadoop-client 2.3.0, you should only have to change the hadoopCoordinates property to point to the Maven artifact used by your distribution. For non-API-compatible versions, please see [here](Other-Hadoop.html).
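For instance, switching to a hypothetical API-compatible 2.4.0 client artifact (the version here is illustrative) requires only changing the coordinates:

```json
{
  "type" : "index_hadoop",
  "hadoopCoordinates" : "org.apache.hadoop:hadoop-client:2.4.0",
  "spec" : <Hadoop index spec>
}
```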
#### Resolving dependency conflicts running HadoopIndexTask
Currently, the HadoopIndexTask creates a single classpath to run the HadoopDruidIndexerJob, which can lead to version conflicts between various dependencies of Druid, extension modules, and Hadoop's own dependencies.
The Hadoop index task will put Druid's dependencies first on the classpath, followed by any extensions dependencies, and any Hadoop dependencies last.
If an extension is misbehaving inside the HadoopIndexTask, it may be because Druid, or one of its dependencies, depends on a different version of a library than the one your extension uses, and Druid's version wins on the classpath. In that case you probably want to build your own Druid version and override the offending library by adding an explicit dependency to the pom.xml of each Druid sub-module that depends on it.
### Realtime Index Task

The indexing service can also run real-time tasks. These tasks effectively transform a middle manager into a real-time node. We introduced real-time tasks as a way to programmatically add new real-time data sources without needing to manually add nodes. We recommend you use the library [tranquility](https://github.com/metamx/tranquility) to programmatically manage generating real-time index tasks. The grammar for the real-time task is as follows:
```json
{
  "type": "index_realtime",
  "id": "example",
  "resource": {
    "availabilityGroup": "someGroup",
    "requiredCapacity": 1
  },
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "json",
          "timestampSpec": {
            "column": "timestamp",
            "format": "iso"
          },
          "dimensionsSpec": {
            "dimensions": [
              "page",
              "language",
              "user",
              "unpatrolled",
              "newPage",
              "robot",
              "anonymous",
              "namespace",
              "continent",
              "country",
              "region",
              "city"
            ],
            "dimensionExclusions": [],
            "spatialDimensions": []
          }
        }
      },
      "metricsSpec": [
        {
          "type": "count",
          "name": "count"
        },
        {
          "type": "doubleSum",
          "name": "added",
          "fieldName": "added"
        },
        {
          "type": "doubleSum",
          "name": "deleted",
          "fieldName": "deleted"
        },
        {
          "type": "doubleSum",
          "name": "delta",
          "fieldName": "delta"
        }
      ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "NONE"
      }
    },
    "ioConfig": {
      "type": "realtime",
      "firehose": {
        "type": "kafka-0.8",
        "consumerProps": {
          "zookeeper.connect": "zk_connect_string",
          "zookeeper.connection.timeout.ms": "15000",
          "zookeeper.session.timeout.ms": "15000",
          "zookeeper.sync.time.ms": "5000",
          "group.id": "consumer-group",
          "fetch.message.max.bytes": "1048586",
          "auto.offset.reset": "largest",
          "auto.commit.enable": "false"
        },
        "feed": "your_kafka_topic"
      }
    },
    "tuningConfig": {
      "type": "realtime",
      "maxRowsInMemory": 500000,
      "intermediatePersistPeriod": "PT10m",
      "windowPeriod": "PT10m",
      "basePersistDirectory": "/tmp/realtime/basePersist",
      "rejectionPolicy": {
        "type": "serverTime"
      }
    }
  }
}
```
|Field|Type|Description|Required|
|-----|----|-----------|--------|
|id|String|The ID of the task.|No|
|resource|JSON object|Used for high availability purposes.|No|
|availabilityGroup|String|A uniqueness identifier for the task. Tasks with the same availability group will always run on different middle managers. Used mainly for replication.|Yes|
|requiredCapacity|Integer|How much middle manager capacity this task will take.|Yes|

For schema, windowPeriod, segmentGranularity, and other configuration information, see [Realtime Ingestion](Realtime-ingestion.html). For firehose configuration, see [Firehose](Firehose.html).

Segment Merging Tasks
---------------------

### Append Task

Append tasks append a list of segments together into a single segment (one after the other). The grammar is:

```json
{
  "type": "append",
  "id": <task_id>,
  "dataSource": <task_datasource>,
  "segments": <JSON list of DataSegment objects to append>
}
```

### Merge Task

Merge tasks merge a list of segments together. Any common timestamps are merged. The grammar is:

```json
{
  "type": "merge",
  "id": <task_id>,
  "dataSource": <task_datasource>,
  "segments": <JSON list of DataSegment objects to merge>
}
```

Segment Destroying Tasks
------------------------

### Delete Task

Delete tasks create empty segments with no data. The grammar is:

```json
{
  "type": "delete",
  "id": <task_id>,
  "dataSource": <task_datasource>,
  "segments": <JSON list of DataSegment objects to delete>
}
```

### Kill Task

Kill tasks delete all information about a segment and remove it from deep storage. Killable segments must be disabled (used==0) in the Druid segment table. The available grammar is:

```json
{
  "type": "kill",
  "id": <task_id>,
  "dataSource": <task_datasource>,
  "interval": <all_segments_in_this_interval_will_die!>
}
```
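A filled-in kill task might look like the following (the ID, data source, and interval here are illustrative; every disabled segment in the interval is removed):

```json
{
  "type": "kill",
  "id": "kill_wikipedia_2013-08-31",
  "dataSource": "wikipedia",
  "interval": "2013-08-31/2013-09-01"
}
```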

Misc. Tasks
-----------

### Version Converter Task

These tasks convert segments from an existing older index version to the latest index version. The available grammar is:

```json
{
  "type": "version_converter",
  "id": <task_id>,
  "groupId": <task_group_id>,
  "dataSource": <task_datasource>,
  "interval": <segment_interval>,
  "segment": <JSON DataSegment object to convert>
}
```

### Noop Task

These tasks start, sleep for a configurable amount of time, and do nothing else; they are used only for testing. The available grammar is:

```json
{
  "type": "noop",
  "id": <optional_task_id>,
  "interval": <optional_segment_interval>,
  "runTime": <optional_millis_to_sleep>,
  "firehose": <optional_firehose_to_test_connect>
}
```

Locking
-------

Once an overlord node accepts a task, a lock is created for the data source and interval specified in the task. Tasks do not need to explicitly release locks; they are released upon task completion. Tasks may potentially release locks early if they desire. Task IDs are made unique by naming them using UUIDs or the timestamp at which the task was created. Tasks are also part of a "task group", which is a set of tasks that can share interval locks.