mirror of https://github.com/apache/druid.git
6ddb828c7a
In a heterogeneous environment, you sometimes have no control over the input folder: upstream systems can drop files into any layout they choose. In that situation S3InputSource.java is unusable as-is. Most people, myself included, worked around it by using Airflow to fetch the full list of parquet files and pass it to Druid, but doing so explodes the JSON spec. We hit a case where a single JSON spec was 16 MB, which is simply too much for the Overlord. This patch lets users pass {"filter": "*.parquet"} and have Druid perform the filtering of the input files. I use glob notation to stay consistent with the LocalFirehose syntax.
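A minimal sketch of how the new field might appear in an S3 input source spec, based on the {"filter": "*.parquet"} example above; the bucket name and prefix are illustrative placeholders, not values from the patch:

```json
{
  "type": "s3",
  "prefixes": ["s3://example-bucket/upstream-drop/"],
  "filter": "*.parquet"
}
```

With this in place, only objects whose names match the glob are ingested, so the spec no longer needs to enumerate every file explicitly.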
docs/ingestion:
automatic-compaction.md
compaction.md
data-formats.md
data-management.md
data-model.md
faq.md
hadoop.md
index.md
ingestion-spec.md
native-batch-firehose.md
native-batch-input-source.md
native-batch-simple-task.md
native-batch.md
partitioning.md
rollup.md
schema-design.md
standalone-realtime.md
tasks.md
tranquility.md