shorten links and file names

* remove redundant parts in file names
* delete unsupported "Druid-Personal-Demo-Cluster"
Xavier Léauté 2015-05-28 17:10:34 -07:00 committed by Himanshu Gupta
parent 8edc2aaca3
commit d2346b6834
57 changed files with 64 additions and 149 deletions

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: development/about-experimental-features.html
+redirect_to: development/experimental.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: development/approxhisto.html
+redirect_to: development/approximate-histograms.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/broker-config.html
+redirect_to: configuration/broker.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: development/build-from-source.html
+redirect_to: development/build.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/configuration.html
+redirect_to: configuration/
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/coordinator-config.html
+redirect_to: configuration/coordinator.html
 ---

@@ -1,5 +0,0 @@
----
-title: this page has moved.
-layout: simple_page
-redirect_to: misc/druid-personal-demo-cluster.html
----

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: development/geographicqueries.html
+redirect_to: development/geo.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/hadoop-configuration.html
+redirect_to: configuration/hadoop.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/historical-config.html
+redirect_to: configuration/historical.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/indexing-service-config.html
+redirect_to: configuration/indexing-service.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: ingestion/ingestion-faq.html
+redirect_to: ingestion/faq.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: ingestion/ingestion-overview.html
+redirect_to: ingestion/overview.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: ingestion/ingestion.html
+redirect_to: ingestion/
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/production-cluster-configuration.html
+redirect_to: configuration/production-cluster.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/realtime-config.html
+redirect_to: configuration/realtime.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: development/selectquery.html
+redirect_to: development/select-query.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: configuration/simple-cluster-configuration.html
+redirect_to: configuration/simple-cluster.html
 ---

@@ -1,5 +1,5 @@
 ---
 title: this page has moved.
 layout: simple_page
-redirect_to: tutorials/tutorials.html
+redirect_to: tutorials/
 ---

@@ -8,7 +8,7 @@ For general Broker Node information, see [here](../design/broker.html).
 Runtime Configuration
 ---------------------
-The broker node uses several of the global configs in [Configuration](../configuration/configuration.html) and has the following set of configurations as well:
+The broker node uses several of the global configs in [Configuration](../configuration/index.html) and has the following set of configurations as well:
 ### Node Configs

@@ -8,7 +8,7 @@ For general Coordinator Node information, see [here](../design/coordinator.html)
 Runtime Configuration
 ---------------------
-The coordinator node uses several of the global configs in [Configuration](../configuration/configuration.html) and has the following set of configurations as well:
+The coordinator node uses several of the global configs in [Configuration](../configuration/index.html) and has the following set of configurations as well:
 ### Node Config

@@ -8,7 +8,7 @@ For general Historical Node information, see [here](../design/historical.html).
 Runtime Configuration
 ---------------------
-The historical node uses several of the global configs in [Configuration](../configuration/configuration.html) and has the following set of configurations as well:
+The historical node uses several of the global configs in [Configuration](../configuration/index.html) and has the following set of configurations as well:
 ### Node Configs

@@ -5,7 +5,7 @@ For general Indexing Service information, see [here](../design/indexing-service.
 ## Runtime Configuration
-The indexing service uses several of the global configs in [Configuration](../configuration/configuration.html) and has the following set of configurations as well:
+The indexing service uses several of the global configs in [Configuration](../configuration/index.html) and has the following set of configurations as well:
 ### Must be set on Overlord and Middle Manager

@@ -4,7 +4,7 @@ layout: doc_page
 Logging
 ==========================
-Druid nodes will emit logs that are useful for debugging to the console. Druid nodes also emit periodic metrics about their state. For more about metrics, see [Configuration](../configuration/configuration.html). Metric logs are printed to the console by default, and can be disabled with `-Ddruid.emitter.logging.logLevel=debug`.
+Druid nodes will emit logs that are useful for debugging to the console. Druid nodes also emit periodic metrics about their state. For more about metrics, see [Configuration](../configuration/index.html). Metric logs are printed to the console by default, and can be disabled with `-Ddruid.emitter.logging.logLevel=debug`.
 Druid uses [log4j2](http://logging.apache.org/log4j/2.x/) for logging. Logging can be configured with a log4j2.xml file. Add the path to the directory containing the log4j2.xml file (eg a config dir) to your classpath if you want to override default Druid log configuration. Note that this directory should be earlier in the classpath than the druid jars. The easiest way to do this is to prefix the classpath with the config dir. For example, if the log4j2.xml file is in config/_common:

@@ -18,7 +18,7 @@ We'll use r3.8xlarge nodes for query facing nodes and m1.xlarge nodes for coordi
 For general purposes of high availability, there should be at least 2 of every node type.
-To setup a local Druid cluster, see [Simple Cluster Configuration](../configuration/simple-cluster-configuration.html).
+To setup a local Druid cluster, see [Simple Cluster Configuration](../configuration/simple-cluster.html).
 ### Common Configuration (common.runtime.properties)

@@ -8,7 +8,7 @@ For general Realtime Node information, see [here](../design/realtime.html).
 Runtime Configuration
 ---------------------
-The realtime node uses several of the global configs in [Configuration](../configuration/configuration.html) and has the following set of configurations as well:
+The realtime node uses several of the global configs in [Configuration](../configuration/index.html) and has the following set of configurations as well:
 ### Node Config

@@ -4,7 +4,7 @@ layout: doc_page
 Simple Cluster Configuration
 ===============================
-This simple Druid cluster configuration can be used for initially experimenting with Druid on your local machine. For a more realistic production Druid cluster, see [Production Cluster Configuration](../configuration/production-cluster-configuration.html).
+This simple Druid cluster configuration can be used for initially experimenting with Druid on your local machine. For a more realistic production Druid cluster, see [Production Cluster Configuration](../configuration/production-cluster.html).
 ### Common Configuration (common.runtime.properties)

@@ -3,7 +3,7 @@ layout: doc_page
 ---
 Broker
 ======
-For Broker Node Configuration, see [Broker Configuration](../configuration/broker-config.html).
+For Broker Node Configuration, see [Broker Configuration](../configuration/broker.html).
 The Broker is the node to route queries to if you want to run a distributed cluster. It understands the metadata published to ZooKeeper about what segments exist on what nodes and routes queries such that they hit the right nodes. This node also merges the result sets from all of the individual nodes together.
 On start up, Realtime nodes announce themselves and the segments they are serving in Zookeeper.

@@ -3,7 +3,7 @@ layout: doc_page
 ---
 Coordinator Node
 ================
-For Coordinator Node Configuration, see [Coordinator Configuration](../configuration/coordinator-config.html).
+For Coordinator Node Configuration, see [Coordinator Configuration](../configuration/coordinator.html).
 The Druid coordinator node is primarily responsible for segment management and distribution. More specifically, the Druid coordinator node communicates to historical nodes to load or drop segments based on configurations. The Druid coordinator is responsible for loading new segments, dropping outdated segments, managing segment replication, and balancing segment load.

@@ -3,7 +3,7 @@ layout: doc_page
 ---
 Historical Node
 ===============
-For Historical Node Configuration, see [Historial Configuration](../configuration/historical-config.html).
+For Historical Node Configuration, see [Historial Configuration](../configuration/historical.html).
 Historical nodes load up historical segments and expose them for querying.

@@ -3,7 +3,7 @@ layout: doc_page
 ---
 Indexing Service
 ================
-For Indexing Service Configuration, see [Indexing Service Configuration](../configuration/indexing-service-config.html).
+For Indexing Service Configuration, see [Indexing Service Configuration](../configuration/indexing-service.html).
 The indexing service is a highly-available, distributed service that runs indexing related tasks. Indexing service [tasks](../misc/tasks.html) create (and sometimes destroy) Druid [segments](../design/segments.html). The indexing service has a master/slave like architecture.

@@ -5,7 +5,7 @@ layout: doc_page
 Middle Manager Node
 ------------------
-For Middlemanager Node Configuration, see [Indexing Service Configuration](../configuration/indexing-service-config.html).
+For Middlemanager Node Configuration, see [Indexing Service Configuration](../configuration/indexing-service.html).
 The middle manager node is a worker node that executes submitted tasks. Middle Managers forward tasks to peons that run in separate JVMs.
 The reason we have separate JVMs for tasks is for resource and log isolation. Each [Peon](../design/peons.html) is capable of running only one task at a time, however, a middle manager may have multiple peons.

@@ -5,7 +5,7 @@ layout: doc_page
 Peons
 -----
-For Peon Configuration, see [Peon Configuration](../configuration/indexing-service-config.html).
+For Peon Configuration, see [Peon Configuration](../configuration/indexing-service.html).
 Peons run a single task in a single JVM. MiddleManager is responsible for creating Peons for running tasks.
 Peons should rarely (if ever for testing purposes) be run on their own.

@@ -3,7 +3,7 @@ layout: doc_page
 ---
 Real-time Node
 ==============
-For Real-time Node Configuration, see [Realtime Configuration](../configuration/realtime-config.html).
+For Real-time Node Configuration, see [Realtime Configuration](../configuration/realtime.html).
 For Real-time Ingestion, see [Realtime Ingestion](../ingestion/realtime-ingestion.html).

@@ -65,7 +65,7 @@ druid.server.http.numThreads=100
 Runtime Configuration
 ---------------------
-The router module uses several of the default modules in [Configuration](../configuration/configuration.html) and has the following set of configurations as well:
+The router module uses several of the default modules in [Configuration](../configuration/index.html) and has the following set of configurations as well:
 |Property|Possible Values|Description|Default|
 |--------|---------------|-----------|-------|

@@ -117,7 +117,7 @@ The spec\_file is a path to a file that contains JSON and an example looks like:
 This field is required.
-See [Ingestion](../ingestion/ingestion.html)
+See [Ingestion](../ingestion/index.html)
 ### IOConfig
@@ -337,7 +337,7 @@ The schema of the Hadoop Index Task contains a task "type" and a Hadoop Index Co
 This field is required.
-See [Ingestion](../ingestion/ingestion.html)
+See [Ingestion](../ingestion/index.html)
 ### IOConfig

@@ -148,7 +148,7 @@ This is a special variation of the JSON ParseSpec that lower cases all the colum
 |-------|------|-------------|----------|
 | dimensions | JSON String array | The names of the dimensions. | yes |
 | dimensionExclusions | JSON String array | The names of dimensions to exclude from ingestion. | no (default == [] |
-| spatialDimensions | JSON Object array | An array of [spatial dimensions](../development/geographicqueries.html) | no (default == [] |
+| spatialDimensions | JSON Object array | An array of [spatial dimensions](../development/geo.html) | no (default == [] |
 ## GranularitySpec

@@ -6,7 +6,7 @@ Realtime Data Ingestion
 =======================
 For general Real-time Node information, see [here](../design/realtime.html).
-For Real-time Node Configuration, see [Realtime Configuration](../configuration/realtime-config.html).
+For Real-time Node Configuration, see [Realtime Configuration](../configuration/realtime.html).
 For writing your own plugins to the real-time node, see [Firehose](../ingestion/firehose.html).
@@ -105,7 +105,7 @@ There are three parts to a realtime stream specification, `dataSchema`, `IOConfi
 This field is required.
-See [Ingestion](../ingestion/ingestion.html)
+See [Ingestion](../ingestion/index.html)
 ### IOConfig

@@ -77,7 +77,7 @@ Local disk ("ephemeral" on AWS EC2) for caching is recommended over network moun
 Setup
 -----
-Setting up a cluster is essentially just firing up all of the nodes you want with the proper [configuration](../configuration/configuration.html). One thing to be aware of is that there are a few properties in the configuration that potentially need to be set individually for each process:
+Setting up a cluster is essentially just firing up all of the nodes you want with the proper [configuration](../configuration/index.html). One thing to be aware of is that there are a few properties in the configuration that potentially need to be set individually for each process:
 ```
 druid.server.type=historical|realtime

@@ -1,80 +0,0 @@
----
-layout: doc_page
----
-# Druid Personal Demo Cluster (DPDC)
-Note, there are currently some issues with the CloudFormation. We are working through them and will update the documentation here when things work properly. In the meantime, the simplest way to get your feet wet with a cluster setup is to run through the instructions at [housejester/druid-test-harness](https://github.com/housejester/druid-test-harness), though it is based on an older version. If you just want to get a feel for the types of data and queries that you can issue, check out [Realtime Examples](realtime-examples.html)
-## Introduction
-To make it easy for you to get started with Druid, we created an AWS (Amazon Web Services) [CloudFormation](http://aws.amazon.com/cloudformation/) Template that allows you to create a small pre-configured Druid cluster using your own AWS account. The cluster contains a pre-loaded sample workload, the Wikipedia edit stream, and a basic query interface that gets you familiar with Druid capabilities like drill-downs and filters.
-This guide walks you through the steps to create the cluster and then how to create basic queries. (The cluster setup should take you about 15-20 minutes depending on AWS response times).
-## What's in this Druid Demo Cluster?
-1. A single "Coordinator" node. This node co-locates the [Coordinator](../design/coordinator.html) process, the [Broker](../design/broker.html) process, Zookeeper, and the metadata storage instance. You can read more about Druid architecture [Design](../design/design.html).
-1. Three historical nodes; these historical nodes, have been pre-configured to work with the Coordinator node and should automatically load up the Wikipedia edit stream data (no specific setup is required).
-## Setup Instructions
-1. Log in to your AWS account: Start by logging into the [Console page](https://console.aws.amazon.com) of your AWS account; if you don't have one, follow this link to sign up for one [http://aws.amazon.com/](http://aws.amazon.com/).
-![AWS Console Page](images/demo/setup-01-console.png)
-1. If you have a [Key Pair](http://docs.aws.amazon.com/gettingstarted/latest/wah/getting-started-create-key-pair.html) already created you may skip this step. Note: this is required to create the demo cluster and is generally not used unless instances need to be accessed directly (e.g. via SSH).
-1. Click **EC2** to go to the EC2 Dashboard. From there, click **Key Pairs** under Network & Security.
-![EC2 Dashboard](images/demo/setup-02a-keypair.png)
-1. Click on the button **Create Key Pair**. A dialog box will appear prompting you to enter a Key Pair name (as long as you remember it, the name is arbitrary, for this example we entered `Druid`). Click **Create**. You will be prompted to download a .pam; store this file in a safe place.
-![Create Key Pair](images/demo/setup-02b-keypair.png)
-1. Unless you're there already, go back to the Console page, or follow this link: https://console.aws.amazon.com. Click **CloudFormation** under Deployment & Management.
-![CloudFormation](images/demo/setup-03-ec2.png)
-1. Click **Create New Stack**, which will bring up the **Create Stack** dialog.
-![Create New Stack](images/demo/setup-04-newstack.png)
-1. Enter a **Stack Name** (it's arbitrary, we chose, `DruidStack`). Click **Provide a Template URL** type in the following template URL: _**https://s3.amazonaws.com/cf-templates-jm2ikmzj3y6x-us-east-1/2013081cA9-Druid04012013.template**_. Press **Continue**, this will take you to the Create Stack dialog.
-![Stack Name & URL](images/demo/setup-05-createstack.png)
-1. Enter `Druid` (or the Key Pair name you created in Step 2) in the **KeyPairName** field; click **Continue**. This should bring up another dialog prompting you to enter a **Key** and **Value**.
-![Stack Parameters](images/demo/setup-06-parameters.png)
-1. While the inputs are arbitrary, it's important to remember this information; we chose to enter `version` for **Key** and `1` for **Value**. Press **Continue** to bring up a confirmation dialog.
-![Add Tags](images/demo/setup-07a-tags.png)
-1. Click **Continue** to start creating your Druid Demo environment (this will bring up another dialog box indicating your environment is being created; click **Close** to take you to a more detailed view of the Stack creation process). Note: depending on AWS, this step could take over 15 minutes, and initialization continues even after the instances are created. (So yes, now would be a good time to grab that cup of coffee).
-![Review](images/demo/setup-07b-review.png)
-![Create Stack Complete](images/demo/setup-07c-complete.png)
-1. Click and expand the **Events** tab in the CloudFormation Stacks window to get a more detailed view of the Druid Demo Cluster setup.
-![CloudFormations](images/demo/setup-09-events.png)
-1. Get the IP address of your Druid Coordinator Node:
-1. Go to the following URL: [https://console.aws.amazon.com/ec2](https://console.aws.amazon.com/ec2)
-1. Click **Instances** in the left pane, and you should see something similar to the following figure.
-1. Select the **DruidCoordinator** instance
-1. Your IP address is right under the heading: **EC2 Instance: DruidCoordinator**. Select and copy that entire line, which ends with `amazonaws.com`.
-![EC2 Instances](images/demo/setup-10-ip.png)
-## Querying Data
-1. Use the following URL to bring up the Druid Demo Cluster query interface (replace **IPAddressDruidCoordinator** with the actual druid coordinator IP Address):
-**`http://IPAddressDruidCoordinator:8081/druid/v3/demoServlet`**
-As you can see from the image below, there are default values in the Dimensions and Granularity fields. Clicking **Execute** will produce a basic query result.
-![Demo Query Interface](images/demo/query-1.png)
-1. Note: when the Query is in running the **Execute** button will be disabled and read: **Fetching…**
-![Demo Query](images/demo/query-2.png)
-1. You can add multiple Aggregation values, adjust Granularity, and Dimensions; query results will appear at the bottom of the window.
-Enjoy! And for sure, please send along your comments and feedback or, aspirations on expanding and developing this demo. https://groups.google.com/d/forum/druid-development. Attention R users: we just open-sourced our R Druid connector: https://github.com/metamx/RDruid.

@@ -24,7 +24,7 @@ coordination nodes (coordinators and overlords). Coordination nodes should requi
 ## Selecting Hardware
 Druid is designed to run on commodity hardware and we've tried to provide some general guidelines on [how things should be tuned]() for various deployments. We've also provided
-some [example specs](../configuration/production-cluster-configuration.html) for hardware for a production cluster.
+some [example specs](../configuration/production-cluster.html) for hardware for a production cluster.
 ## Benchmarking Druid

@@ -92,7 +92,7 @@ The Index Task is a simpler variation of the Index Hadoop task that is designed
 This field is required.
-See [Ingestion](../ingestion/ingestion.html)
+See [Ingestion](../ingestion/index.html)
 #### IOConfig

@@ -6,13 +6,13 @@
 h2. Getting Started
 * "Concepts":./
 * "Hello, Druid":../tutorials/tutorial-a-first-look-at-druid.html
-* "Tutorials":../tutorials/tutorials.html
+* "Tutorials":../tutorials/index.html
 * "Evaluate Druid":../misc/evaluate.html
 h2. Data Ingestion
 * "Overview":../ingestion/ingestion-overview.html
 * "Data Formats":../ingestion/data-formats.html
-* "Data Schema":../ingestion/ingestion.html
+* "Data Schema":../ingestion/index.html
 * "Realtime Ingestion":../ingestion/realtime-ingestion.html
 * "Batch Ingestion":../ingestion/batch-ingestion.html
 * "FAQ":../ingestion/ingestion-faq.html
@@ -59,28 +59,28 @@ h2. Operations
 * "Performance FAQ":../operations/performance-faq.html
 h2. Configuration
-* "Common Configuration":../configuration/configuration.html
-* "Indexing Service":../configuration/indexing-service-config.html
-* "Coordinator":../configuration/coordinator-config.html
-* "Historical":../configuration/historical-config.html
-* "Broker":../configuration/broker-config.html
-* "Realtime":../configuration/realtime-config.html
+* "Common Configuration":../configuration/index.html
+* "Indexing Service":../configuration/indexing-service.html
+* "Coordinator":../configuration/coordinator.html
+* "Historical":../configuration/historical.html
+* "Broker":../configuration/broker.html
+* "Realtime":../configuration/realtime.html
 * "Configuring Logging":../configuration/logging.html
-* "Simple Cluster Configuration":../configuration/simple-cluster-configuration.html
-* "Production Cluster Configuration":../configuration/production-cluster-configuration.html
-* "Production Hadoop Configuration":../configuration/hadoop-configuration.html
+* "Simple Cluster Configuration":../configuration/simple-cluster.html
+* "Production Cluster Configuration":../configuration/production-cluster.html
+* "Production Hadoop Configuration":../configuration/hadoop.html
 h2. Development
 * "Libraries":../development/libraries.html
 * "Extending Druid":../development/modules.html
-* "Build From Source":../development/build-from-source.html
+* "Build From Source":../development/build.html
 * "Versioning":../development/versioning.html
 "Integration":../development/integrating-druid-with-other-technologies.html
 * Experimental Features
-** "Overview":../development/about-experimental-features.html
-** "Geographic Queries":../development/geographicqueries.html
-** "Select Query":../development/selectquery.html
-** "Approximate Histograms and Quantiles":../development/approxhisto.html
+** "Overview":../development/experimental.html
+** "Geographic Queries":../development/geo.html
+** "Select Query":../development/select-query.html
+** "Approximate Histograms and Quantiles":../development/approximate-histograms.html
 ** "Router node":../development/router.html
 h2. Misc

@@ -51,7 +51,7 @@ The following AWS information must be set in `druid.properties`, as environment
 How to get the IDENTITY and CREDENTIAL keys is discussed above.
-In order to configure each node, you can edit `services/druid/src/main/resources/functions/start_druid.sh` for JVM configuration and `services/druid/src/main/resources/functions/configure_[NODE_NAME].sh` for specific node configuration. For more information on configuration, see the [Druid configuration documentation](../configuration/configuration.html).
+In order to configure each node, you can edit `services/druid/src/main/resources/functions/start_druid.sh` for JVM configuration and `services/druid/src/main/resources/functions/configure_[NODE_NAME].sh` for specific node configuration. For more information on configuration, see the [Druid configuration documentation](../configuration/index.html).
 ### Start a Test Cluster With Whirr
 Run the following command:

@@ -43,7 +43,7 @@ Metrics (things to aggregate over):
 Setting Up
 ----------
-To start, we need to get our hands on a Druid build. There are two ways to get Druid: download a tarball, or [Build From Source](../development/build-from-source.html). You only need to do one of these.
+To start, we need to get our hands on a Druid build. There are two ways to get Druid: download a tarball, or [Build From Source](../development/build.html). You only need to do one of these.
 ### Download a Tarball
@@ -51,7 +51,7 @@ We've built a tarball that contains everything you'll need. You'll find it [here
 ### Build From Source
-Follow the [Build From Source](../development/build-from-source.html) guide to build from source. Then grab the tarball from services/target/druid-<version>-bin.tar.gz.
+Follow the [Build From Source](../development/build.html) guide to build from source. Then grab the tarball from services/target/druid-<version>-bin.tar.gz.
 ### Unpack the Tarball

@@ -239,7 +239,7 @@ On this console, you can look at statuses and logs of recently submitted and com
 If you decide to reuse the local firehose to ingest your own data and if you run into problems, you can use the console to read the individual task logs.
-Task logs can be stored locally or uploaded to [Deep Storage](../dependencies/deep-storage.html). More information about how to configure this is [here](../configuration/configuration.html).
+Task logs can be stored locally or uploaded to [Deep Storage](../dependencies/deep-storage.html). More information about how to configure this is [here](../configuration/index.html).
 Most common data ingestion problems are around timestamp formats and other malformed data issues.

@@ -13,7 +13,7 @@ In this tutorial, we will set up other types of Druid nodes and external depende
 If you followed the first tutorial, you should already have Druid downloaded. If not, let's go back and do that first.
-You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-0.7.1-bin.tar.gz). You can also [Build From Source](../development/build-from-source.html) and grab the tarball from services/target/druid-0.7.1-bin.tar.gz.
+You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-0.7.1-bin.tar.gz). You can also [Build From Source](../development/build.html) and grab the tarball from services/target/druid-0.7.1-bin.tar.gz.
 Either way, once you have the tarball, untar the contents within by issuing:
@@ -95,7 +95,7 @@ Before we get started, let's make sure we have configs in the config directory f
 ls config
 ```
-If you are interested in learning more about Druid configuration files, check out this [link](../configuration/configuration.html). Many aspects of Druid are customizable. For the purposes of this tutorial, we are going to use default values for most things.
+If you are interested in learning more about Druid configuration files, check out this [link](../configuration/index.html). Many aspects of Druid are customizable. For the purposes of this tutorial, we are going to use default values for most things.
 #### Common Configuration