Merge pull request #2416 from druid-io/fix-docs

fix docs
David Lim 2016-02-08 15:09:20 -07:00
commit 57bf61029b
5 changed files with 18 additions and 7 deletions

View File

@ -7,7 +7,7 @@ Real-time Node
For Real-time Node Configuration, see [Realtime Configuration](../configuration/realtime.html).
For Real-time Ingestion, see [Realtime Ingestion](../ingestion/realtime-ingestion.html).
For Real-time Ingestion, see [Realtime Ingestion](../ingestion/stream-ingestion.html).
Realtime nodes provide a realtime index. Data indexed via these nodes is immediately available for querying. Realtime nodes will periodically build segments representing the data they've collected over some span of time and transfer these segments off to [Historical](../design/historical.html) nodes. They use ZooKeeper to monitor the transfer and the metadata storage to store metadata about the transferred segment. Once transferred, segments are forgotten by the Realtime nodes.
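For illustration, a hedged sketch of how that span of time is commonly expressed in a realtime spec, assuming the usual `segmentGranularity` and `windowPeriod` settings (the field names and values below are illustrative examples, not taken from this page):

```json
{
  "dataSchema": {
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "HOUR",
      "queryGranularity": "NONE"
    }
  },
  "tuningConfig": {
    "type": "realtime",
    "windowPeriod": "PT10M"
  }
}
```

With settings like these, a node builds hourly segments and waits roughly ten minutes for late-arriving data before handing each segment off.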

View File

@ -3,7 +3,7 @@ layout: doc_page
---
# KafkaSimpleConsumerFirehose
This is an experimental firehose to ingest data from Kafka using the Kafka simple consumer API. Currently, this firehose only works inside standalone realtime nodes.
The configuration for KafkaSimpleConsumerFirehose is similar to the KafkaFirehose [Kafka firehose example](realtime-ingestion.html#realtime-specfile), except `firehose` should be replaced with `firehoseV2` like this:
The configuration for KafkaSimpleConsumerFirehose is similar to the KafkaFirehose [Kafka firehose example](../ingestion/stream-pull.html#realtime-specfile), except `firehose` should be replaced with `firehoseV2` like this:
```json
"firehoseV2": {
```
@ -28,4 +28,3 @@ The configuration for KafkaSimpleConsumerFirehose is similar to the KafkaFirehos
|feed|kafka topic|yes|
To use this firehose at scale and possibly in production, it is recommended to set the replication factor to at least three, which means having at least three Kafka brokers in the `brokerList`. For a Kafka topic of roughly 10,000 (1*10^4) events per second, a single partition can work properly, but more partitions can be added if higher throughput is required.
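Putting the fields above together, a minimal hedged sketch of a complete `firehoseV2` block (the `kafka-0.8-v2` type name and the sample broker addresses are assumptions for illustration; `feed` and `brokerList` are the fields documented above):

```json
"firehoseV2": {
  "type": "kafka-0.8-v2",
  "brokerList": ["kafkaBroker1:9092", "kafkaBroker2:9092", "kafkaBroker3:9092"],
  "feed": "wikipedia"
}
```

Listing three brokers in `brokerList` lines up with the replication-factor recommendation above.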

View File

@ -9,7 +9,7 @@ Firehoses describe the data stream source. They are pluggable and thus the confi
|-------|------|-------------|----------|
| type | String | Specifies the type of firehose. Each value will have its own configuration schema, firehoses packaged with Druid are described below. | yes |
We describe the configuration of the [Kafka firehose example](realtime-ingestion.html#realtime-specfile), but there are other types available in Druid (see below).
We describe the configuration of the [Kafka firehose example](../ingestion/stream-pull.html#realtime-specfile), but there are other types available in Druid (see below).
- `consumerProps` is a map of properties for the Kafka consumer. The JSON object is converted into a Properties object and passed along to the Kafka consumer.
- `feed` is the feed that the Kafka consumer should read from.
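As a concrete illustration of the two properties above, a hedged sketch of a Kafka firehose block (the `kafka-0.8` type name and the particular consumer properties are assumptions for illustration; only `consumerProps` and `feed` come from the description above):

```json
"firehose": {
  "type": "kafka-0.8",
  "consumerProps": {
    "zookeeper.connect": "localhost:2181",
    "group.id": "druid-example",
    "fetch.message.max.bytes": "1048586"
  },
  "feed": "wikipedia"
}
```

Everything under `consumerProps` is handed to the Kafka consumer as-is, so any consumer property can be set there.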

View File

@ -278,12 +278,14 @@ This spec is used to generate segments with arbitrary intervals (it tries to cre
# IO Config
Real-time Ingestion: See [Real-time ingestion](../ingestion/realtime-ingestion.html).
Stream Push Ingestion: Stream push ingestion with Tranquility does not require an IO Config.
Stream Pull Ingestion: See [Stream pull ingestion](../ingestion/stream-pull.html).
Batch Ingestion: See [Batch ingestion](../ingestion/batch-ingestion.html).
# Ingestion Spec
# Tuning Config
Real-time Ingestion: See [Real-time ingestion](../ingestion/realtime-ingestion.html).
Stream Push Ingestion: See [Stream push ingestion](../ingestion/stream-push.html).
Stream Pull Ingestion: See [Stream pull ingestion](../ingestion/stream-pull.html).
Batch Ingestion: See [Batch ingestion](../ingestion/batch-ingestion.html).
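To make the overall layout concrete, a hedged skeleton showing where these blocks sit in an ingestion spec (the `realtime` type values and the data source name are assumptions for illustration; each block's contents are covered in the pages linked above):

```json
{
  "dataSchema": {
    "dataSource": "wikipedia"
  },
  "ioConfig": {
    "type": "realtime"
  },
  "tuningConfig": {
    "type": "realtime"
  }
}
```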
# Evaluating Timestamp, Dimensions and Metrics

View File

@ -134,3 +134,13 @@ at-least-once design and can lead to duplicated events.
Under normal operation, these risks are minimal. But if you need absolute 100% fidelity for
historical data, we recommend a [hybrid batch/streaming](../tutorials/ingestion.html#hybrid-batch-streaming)
architecture.
## Documentation
Tranquility documentation can be found [here](https://github.com/druid-io/tranquility/blob/master/README.md).
## Configuration
Tranquility configuration can be found [here](https://github.com/druid-io/tranquility/blob/master/docs/configuration.md).
Tranquility's tuningConfig can be found [here](http://static.druid.io/tranquility/api/latest/#com.metamx.tranquility.druid.DruidTuning).