diff --git a/docs/content/design/realtime.md b/docs/content/design/realtime.md
index f1bac49b820..d2cb39c3f53 100644
--- a/docs/content/design/realtime.md
+++ b/docs/content/design/realtime.md
@@ -7,7 +7,7 @@ Real-time Node
 
 For Real-time Node Configuration, see [Realtime Configuration](../configuration/realtime.html).
 
-For Real-time Ingestion, see [Realtime Ingestion](../ingestion/realtime-ingestion.html).
+For Real-time Ingestion, see [Realtime Ingestion](../ingestion/stream-ingestion.html).
 
 Realtime nodes provide a realtime index. Data indexed via these nodes is immediately available for querying. Realtime nodes will periodically build segments representing the data they’ve collected over some span of time and transfer these segments off to [Historical](../design/historical.html) nodes. They use ZooKeeper to monitor the transfer and the metadata storage to store metadata about the transferred segment. Once transfered, segments are forgotten by the Realtime nodes.
diff --git a/docs/content/development/kafka-simple-consumer-firehose.md b/docs/content/development/kafka-simple-consumer-firehose.md
index 06fcae95b5c..39971bc7abd 100644
--- a/docs/content/development/kafka-simple-consumer-firehose.md
+++ b/docs/content/development/kafka-simple-consumer-firehose.md
@@ -3,7 +3,7 @@ layout: doc_page
 ---
 # KafkaSimpleConsumerFirehose
 This is an experimental firehose to ingest data from kafka using kafka simple consumer api. Currently, this firehose would only work inside standalone realtime nodes.
-The configuration for KafkaSimpleConsumerFirehose is similar to the KafkaFirehose [Kafka firehose example](realtime-ingestion.html#realtime-specfile), except `firehose` should be replaced with `firehoseV2` like this:
+The configuration for KafkaSimpleConsumerFirehose is similar to the KafkaFirehose [Kafka firehose example](../ingestion/stream-pull.html#realtime-specfile), except `firehose` should be replaced with `firehoseV2` like this:
 
 ```json
 "firehoseV2": {
@@ -28,4 +28,3 @@ The configuration for KafkaSimpleConsumerFirehos
 |feed|kafka topic|yes|
 
 For using this firehose at scale and possibly in production, it is recommended to set replication factor to at least three, which means at least three Kafka brokers in the `brokerList`. For a 1*10^4 events per second kafka topic, keeping one partition can work properly, but more partitions could be added if higher throughput is required.
-
diff --git a/docs/content/ingestion/firehose.md b/docs/content/ingestion/firehose.md
index 075c5da963d..0d95034e5d4 100644
--- a/docs/content/ingestion/firehose.md
+++ b/docs/content/ingestion/firehose.md
@@ -9,7 +9,7 @@ Firehoses describe the data stream source. They are pluggable and thus the confi
 |-------|------|-------------|----------|
 | type | String | Specifies the type of firehose. Each value will have its own configuration schema, firehoses packaged with Druid are described below. | yes |
 
-We describe the configuration of the [Kafka firehose example](realtime-ingestion.html#realtime-specfile), but there are other types available in Druid (see below).
+We describe the configuration of the [Kafka firehose example](../ingestion/stream-pull.html#realtime-specfile), but there are other types available in Druid (see below).
 
 - `consumerProps` is a map of properties for the Kafka consumer. The JSON object is converted into a Properties object and passed along to the Kafka consumer.
 - `feed` is the feed that the Kafka consumer should read from.
diff --git a/docs/content/ingestion/index.md b/docs/content/ingestion/index.md
index df1c8786616..986ef75b263 100644
--- a/docs/content/ingestion/index.md
+++ b/docs/content/ingestion/index.md
@@ -278,12 +278,14 @@ This spec is used to generate segments with arbitrary intervals (it tries to cre
 
 # IO Config
 
-Real-time Ingestion: See [Real-time ingestion](../ingestion/realtime-ingestion.html).
+Stream Push Ingestion: Stream push ingestion with Tranquility does not require an IO Config.
+Stream Pull Ingestion: See [Stream pull ingestion](../ingestion/stream-pull.html).
 Batch Ingestion: See [Batch ingestion](../ingestion/batch-ingestion.html)
 
-# Ingestion Spec
+# Tuning Config
 
-Real-time Ingestion: See [Real-time ingestion](../ingestion/realtime-ingestion.html).
+Stream Push Ingestion: See [Stream push ingestion](../ingestion/stream-push.html).
+Stream Pull Ingestion: See [Stream pull ingestion](../ingestion/stream-pull.html).
 Batch Ingestion: See [Batch ingestion](../ingestion/batch-ingestion.html)
 
 # Evaluating Timestamp, Dimensions and Metrics
diff --git a/docs/content/ingestion/stream-push.md b/docs/content/ingestion/stream-push.md
index 327b120b13a..1ea8f9f7a63 100644
--- a/docs/content/ingestion/stream-push.md
+++ b/docs/content/ingestion/stream-push.md
@@ -134,3 +134,13 @@ at-least-once design and can lead to duplicated events.
 Under normal operation, these risks are minimal. But if you need absolute 100%
 fidelity for historical data, we recommend a [hybrid batch/streaming](../tutorials/ingestion.html#hybrid-batch-streaming)
 architecture.
+
+## Documentation
+
+Tranquility documentation can be found [here](https://github.com/druid-io/tranquility/blob/master/README.md).
+
+## Configuration
+
+Tranquility configuration can be found [here](https://github.com/druid-io/tranquility/blob/master/docs/configuration.md).
+
+Tranquility's tuningConfig can be found [here](http://static.druid.io/tranquility/api/latest/#com.metamx.tranquility.druid.DruidTuning).
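
---

For context on the `firehoseV2` swap described in `kafka-simple-consumer-firehose.md`, a minimal spec fragment might look like the sketch below. This is illustrative only: `feed` and `brokerList` are the parameters named in that doc (with three brokers, per its replication recommendation), while the `type` value and broker host names are assumptions not confirmed by the patch.

```json
"firehoseV2": {
  "type": "kafka-0.8-v2",
  "brokerList": ["broker1:9092", "broker2:9092", "broker3:9092"],
  "feed": "your_kafka_topic"
}
```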