Merge pull request #1276 from metamx/fix-broken-links

[docs] fix broken links
Fangjin Yang 2015-04-09 18:19:50 -07:00
commit 9f7ab92d81
5 changed files with 6 additions and 6 deletions

@@ -3,7 +3,7 @@ layout: doc_page
 ---
 # Booting a Druid Cluster
-[Loading Your Data](Tutorial%3A-Loading-Your-Data-Part-2.html) and [All About Queries](Tutorial%3A-All-About-Queries.html) contain recipes to boot a small druid cluster on localhost. However, when it's time to run a more realistic setup—for production or just for testing production—you'll want to find a way to start the cluster on multiple hosts. This document describes two different ways to do this: manually, or as a cloud service via Apache Whirr.
+[Loading Your Data](Tutorial%3A-Loading-Batch-Data.html) and [All About Queries](Tutorial%3A-All-About-Queries.html) contain recipes to boot a small druid cluster on localhost. However, when it's time to run a more realistic setup—for production or just for testing production—you'll want to find a way to start the cluster on multiple hosts. This document describes two different ways to do this: manually, or as a cloud service via Apache Whirr.
 ## Manually Booting a Druid Cluster
 You can provision individual servers, loading Druid onto each machine (or building it) and setting the required configuration for each type of node. You'll also have to set up required external dependencies. Then you'll have to start each node. This process is outlined in [Tutorial: The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html).

@@ -7,7 +7,7 @@ Data Formats for Ingestion
 Druid can ingest data in JSON, CSV, or custom delimited data such as TSV. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest CSV or other delimited data.
 ## Formatting the Data
-The following are three samples of the data used in the [Wikipedia example](Tutorial:-Loading-Streaming-Data.html).
+The following are three samples of the data used in the [Wikipedia example](Tutorial%3A-Loading-Streaming-Data.html).
 _JSON_
@@ -133,4 +133,4 @@ The `parser` entry for the `dataSchema` should be changed to describe the tsv format
 Be sure to change the `delimiter` to the appropriate delimiter for your data. Like CSV, you must specify the columns and which subset of the columns you want indexed.
 ### Multi-value dimensions
 Dimensions can have multiple values for TSV and CSV data. To specify the delimiter for a multi-value dimension, set the `listDelimiter`
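For orientation beyond what the hunk shows, a `parser` entry describing TSV data with a multi-value dimension might look roughly like the sketch below. The column names and the `|` list delimiter are illustrative assumptions, not values taken from the diffed file.

```json
{
  "type": "string",
  "parseSpec": {
    "format": "tsv",
    "delimiter": "\t",
    "listDelimiter": "|",
    "timestampSpec": { "column": "timestamp", "format": "auto" },
    "columns": ["timestamp", "page", "language", "user", "added"],
    "dimensionsSpec": { "dimensions": ["page", "language", "user"] }
  }
}
```

In a sketch like this, a column such as `language` could hold several values per row, separated by `|` in the raw TSV.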

@@ -37,7 +37,7 @@ java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath config/_commo
 Querying Your Data
 ------------------
-Make sure you've completed [Loading Your Data](Loading-Your-Data-Part-1.html) so we have some data to query. Having done that, it's time to query our data! For a complete specification of queries, see [Querying](Querying.html).
+Make sure you've completed [Loading Your Data](Tutorial%3A-Loading-Streaming-Data.html) so we have some data to query. Having done that, it's time to query our data! For a complete specification of queries, see [Querying](Querying.html).
 #### Construct a Query
 ```json
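The query body itself is elided from this hunk. As a rough illustration of the kind of JSON query the tutorial constructs, a minimal timeseries query might look like the sketch below; the dataSource name and interval are assumptions, not the tutorial's exact query.

```json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "granularity": "all",
  "aggregations": [
    { "type": "count", "name": "rows" }
  ],
  "intervals": ["2013-01-01/2015-01-01"]
}
```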

@@ -39,7 +39,7 @@ Metrics (things to aggregate over):
 Batch Ingestion
 ---------------
-For the purposes of this tutorial, we are going to use our very small and simple Wikipedia data set. This data can directly be ingested via other means as shown in the previous [tutorial](Tutorial%3A-Loading-Your-Data-Part-1.html).
+For the purposes of this tutorial, we are going to use our very small and simple Wikipedia data set. This data can directly be ingested via other means as shown in the previous [tutorial](Tutorial%3A-Loading-Streaming-Data.html).
 Our data is located at:
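For context, a batch load of a local file is submitted as an `index` task. The sketch below is a plausible shape for such a task, assuming a local firehose and a JSON-formatted sample file; the paths, schema fields, and interval are illustrative, since the hunk cuts off before the actual data location.

```json
{
  "type": "index",
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "json",
          "timestampSpec": { "column": "timestamp", "format": "auto" },
          "dimensionsSpec": { "dimensions": ["page", "language", "user"] }
        }
      },
      "metricsSpec": [{ "type": "count", "name": "count" }],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "NONE",
        "intervals": ["2013-08-31/2013-09-01"]
      }
    },
    "ioConfig": {
      "type": "index",
      "firehose": {
        "type": "local",
        "baseDir": "examples/indexing/",
        "filter": "wikipedia_data.json"
      }
    }
  }
}
```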

@@ -12,7 +12,7 @@ first two tutorials.
 ## About the Data
 We will be working with the same Wikipedia edits data schema [from out previous
-tutorials](http://localhost:4000/content/Tutorial:-A-First-Look-at-Druid.html#about-the-data).
+tutorials](Tutorial%3A-A-First-Look-at-Druid.html#about-the-data).
 ## Set Up