From ae1f104c10cecfc45b9f1a6ad6ba6c7f40a1fc1e Mon Sep 17 00:00:00 2001
From: Bingkun
Date: Wed, 26 Aug 2015 15:16:21 -0500
Subject: [PATCH] Fix batch ingestion doc

---
 docs/content/ingestion/batch-ingestion.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/content/ingestion/batch-ingestion.md b/docs/content/ingestion/batch-ingestion.md
index 4af56f2f21d..e0aa698779b 100644
--- a/docs/content/ingestion/batch-ingestion.md
+++ b/docs/content/ingestion/batch-ingestion.md
@@ -176,7 +176,9 @@ It is a type of inputSpec that reads data already stored inside druid. It is use
 |maxSplitSize|Number|Enables combining multiple segments into single Hadoop InputSplit according to size of segments. Default is none. |no|
 
 Here is what goes inside "ingestionSpec"
+
 |Field|Type|Description|Required|
+|-----|----|-----------|--------|
 |dataSource|String|Druid dataSource name from which you are loading the data.|yes|
 |interval|String|A string representing ISO-8601 Intervals.|yes|
 |granularity|String|Defines the granularity of the query while loading data. Default value is "none".See [Granularities](../querying/granularities.html).|no|
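
For context on the table this patch fixes: it documents the `dataSource` inputSpec, which re-reads data already stored in Druid. Below is a minimal sketch of how the fields from both tables (`ingestionSpec`, `maxSplitSize`, and the nested `dataSource`/`interval`/`granularity`) might fit together inside a Hadoop batch ioConfig. The datasource name, interval, and split size are hypothetical values for illustration, not part of this patch:

```json
{
  "type": "hadoop",
  "inputSpec": {
    "type": "dataSource",
    "maxSplitSize": 1073741824,
    "ingestionSpec": {
      "dataSource": "wikipedia",
      "interval": "2014-10-20T00:00:00Z/2014-10-27T00:00:00Z",
      "granularity": "none"
    }
  }
}
```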