diff --git a/docs/content/Batch-ingestion.md b/docs/content/Batch-ingestion.md
index f1c5a814987..2f53eb48b4e 100644
--- a/docs/content/Batch-ingestion.md
+++ b/docs/content/Batch-ingestion.md
@@ -272,7 +272,7 @@ The schema of the Hadoop Index Task contains a task "type" and a Hadoop Index Co
 |config|A Hadoop Index Config (see above).|yes|
 |hadoopCoordinates|The Maven `<groupId>:<artifactId>:<version>` of Hadoop to use. The default is "org.apache.hadoop:hadoop-core:1.0.3".|no|
-The Hadoop Index Config submitted as part of an Hadoop Index Task is identical to the Hadoop Index Config used by the `HadoopBatchIndexer` except that three fields must be omitted: `segmentOutputPath`, `workingPath`, `metadataUpdateSpec`. The Indexing Service takes care of setting these fields internally.
+The Hadoop Index Config submitted as part of a Hadoop Index Task is identical to the Hadoop Index Config used by the `HadoopBatchIndexer` except that three fields must be omitted: `segmentOutputPath`, `workingPath`, `updaterJobSpec`. The Indexing Service takes care of setting these fields internally.
 To run the task:
diff --git a/docs/content/Tasks.md b/docs/content/Tasks.md
index 868e75efe88..6c5df35d25f 100644
--- a/docs/content/Tasks.md
+++ b/docs/content/Tasks.md
@@ -77,7 +77,7 @@ The Hadoop Index Task is used to index larger data sets that require the paralle
 |hadoopCoordinates|The Maven \<groupId\>:\<artifactId\>:\<version\> of Hadoop to use. The default is "org.apache.hadoop:hadoop-client:2.3.0".|no|
-The Hadoop Index Config submitted as part of an Hadoop Index Task is identical to the Hadoop Index Config used by the `HadoopBatchIndexer` except that three fields must be omitted: `segmentOutputPath`, `workingPath`, `metadataUpdateSpec`. The Indexing Service takes care of setting these fields internally.
+The Hadoop Index Config submitted as part of a Hadoop Index Task is identical to the Hadoop Index Config used by the `HadoopBatchIndexer` except that three fields must be omitted: `segmentOutputPath`, `workingPath`, `updaterJobSpec`. The Indexing Service takes care of setting these fields internally.
 #### Using your own Hadoop distribution
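
For illustration only, a Hadoop Index Task payload submitted to the Indexing Service might look roughly like the sketch below. The task type string `index_hadoop` and the field names inside `config` are assumptions patterned on the Hadoop Index Config described in these docs and may differ across Druid versions; the data source name, dimensions, interval, and input path are placeholders. The point of the sketch is what is absent: `segmentOutputPath`, `workingPath`, and `updaterJobSpec` are left out because the Indexing Service sets them internally.

```json
{
  "type" : "index_hadoop",
  "hadoopCoordinates" : "org.apache.hadoop:hadoop-client:2.3.0",
  "config" : {
    "dataSource" : "example_datasource",
    "timestampSpec" : { "column" : "timestamp", "format" : "auto" },
    "dataSpec" : { "format" : "json", "dimensions" : ["dim1", "dim2"] },
    "granularitySpec" : {
      "type" : "uniform",
      "gran" : "DAY",
      "intervals" : ["2014-01-01/2014-01-02"]
    },
    "pathSpec" : { "type" : "static", "paths" : "/placeholder/path/to/data.json" },
    "rollupSpec" : {
      "aggs" : [{ "type" : "count", "name" : "count" }],
      "rollupGranularity" : "none"
    }
  }
}
```

JSON does not allow comments, so the omission is called out here instead: a submitted `config` should leave `segmentOutputPath`, `workingPath`, and `updaterJobSpec` unset and let the Indexing Service fill them in.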