mirror of https://github.com/apache/druid.git
Docs: Clarify the meaning of maxSplitSize. (#5803)
This commit is contained in:
parent 33b4eb624d
commit bc0ff251a3
@@ -43,7 +43,7 @@ This is a type of `inputSpec` that reads data already stored inside Druid. This

|Field|Type|Description|Required|
|-----|----|-----------|--------|
|type|String.|This should always be 'dataSource'.|yes|
|ingestionSpec|JSON object.|Specification of Druid segments to be loaded. See below.|yes|
|maxSplitSize|Number|Enables combining multiple segments into a single Hadoop InputSplit according to the size of the segments. With -1, Druid calculates the maximum split size based on the user-specified number of map tasks (`mapred.map.tasks` or `mapreduce.job.maps`). By default, one split is made per segment. maxSplitSize is specified in bytes.|no|
|useNewAggs|Boolean|If "false", the list of aggregators in the "metricsSpec" of the Hadoop indexing task must be the same as the one used in the original indexing task that ingested the raw data. The default value is "false". This field can be set to "true" when the "inputSpec" type is "dataSource" (not "multi") to enable arbitrary aggregators while reindexing. See below for "multi" type support for delta-ingestion.|no|

Here is what goes inside `ingestionSpec`:
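For illustration, the fields documented above can be combined into a `dataSource` inputSpec like the following sketch. The datasource name, interval, and the 512 MB split size are hypothetical example values, not taken from this commit:

```json
"inputSpec": {
  "type": "dataSource",
  "ingestionSpec": {
    "dataSource": "wikipedia",
    "intervals": ["2018-01-01/2018-02-01"]
  },
  "maxSplitSize": 536870912
}
```

Here `maxSplitSize` is 536870912 bytes (512 MB), so multiple small segments would be packed into a single Hadoop InputSplit up to roughly that size; omitting the field keeps the default behavior of one split per segment.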