fix docs error: google to azure and hdfs to http (#9881)

Jianhuan Liu 2020-05-21 01:17:39 +08:00 committed by GitHub
parent 427239f451
commit 2050f2b00a
1 changed file with 3 additions and 3 deletions


@@ -1006,7 +1006,7 @@ Sample specs:
|property|description|default|required?|
|--------|-----------|-------|---------|
-|type|This should be `google`.|None|yes|
+|type|This should be `azure`.|None|yes|
|uris|JSON array of URIs where Azure Blob objects to be ingested are located. Should be in form "azure://\<container>/\<path-to-file\>"|None|`uris` or `prefixes` or `objects` must be set|
|prefixes|JSON array of URI prefixes for the locations of Azure Blob objects to be ingested. Should be in the form "azure://\<container>/\<prefix\>". Empty objects starting with one of the given prefixes will be skipped.|None|`uris` or `prefixes` or `objects` must be set|
|objects|JSON array of Azure Blob objects to be ingested.|None|`uris` or `prefixes` or `objects` must be set|
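For context (not part of this commit), a minimal sketch of how the corrected `azure` input source type could appear in an ingestion spec's `ioConfig`; the container and file names are placeholders:

```json
"ioConfig": {
  "type": "index_parallel",
  "inputSource": {
    "type": "azure",
    "uris": ["azure://container/prefix1/file.json"]
  },
  "inputFormat": { "type": "json" }
}
```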
@@ -1106,9 +1106,9 @@ the [S3 input source](#s3-input-source) or the [Google Cloud Storage input sourc
### HTTP Input Source
-The HDFS input source is to support reading files directly
+The HTTP input source is to support reading files directly
from remote sites via HTTP.
-The HDFS input source is _splittable_ and can be used by the [Parallel task](#parallel-task),
+The HTTP input source is _splittable_ and can be used by the [Parallel task](#parallel-task),
where each worker task of `index_parallel` will read only one file.
Sample specs:
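
The sample specs themselves fall outside this hunk. As a hedged illustration only, an `http` input source block written in the same style as the other input sources in this document might look like the following; the URL is a placeholder:

```json
"ioConfig": {
  "type": "index_parallel",
  "inputSource": {
    "type": "http",
    "uris": ["http://example.com/sample.json"]
  },
  "inputFormat": { "type": "json" }
}
```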