diff --git a/docs/content/Booting-a-production-cluster.md b/docs/content/Booting-a-production-cluster.md
index 33b5898fe15..bc15c4f5e04 100644
--- a/docs/content/Booting-a-production-cluster.md
+++ b/docs/content/Booting-a-production-cluster.md
@@ -3,7 +3,7 @@ layout: doc_page
 ---
 # Booting a Druid Cluster
-[Loading Your Data](Tutorial%3A-Loading-Your-Data-Part-2.html) and [All About Queries](Tutorial%3A-All-About-Queries.html) contain recipes to boot a small druid cluster on localhost. However, when it's time to run a more realistic setup—for production or just for testing production—you'll want to find a way to start the cluster on multiple hosts. This document describes two different ways to do this: manually, or as a cloud service via Apache Whirr.
+[Loading Your Data](Tutorial%3A-Loading-Batch-Data.html) and [All About Queries](Tutorial%3A-All-About-Queries.html) contain recipes to boot a small druid cluster on localhost. However, when it's time to run a more realistic setup—for production or just for testing production—you'll want to find a way to start the cluster on multiple hosts. This document describes two different ways to do this: manually, or as a cloud service via Apache Whirr.
 
 ## Manually Booting a Druid Cluster
 You can provision individual servers, loading Druid onto each machine (or building it) and setting the required configuration for each type of node. You'll also have to set up required external dependencies. Then you'll have to start each node. This process is outlined in [Tutorial: The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html).
diff --git a/docs/content/Data_formats.md b/docs/content/Data_formats.md
index 520c314e726..c5ec5e321c5 100644
--- a/docs/content/Data_formats.md
+++ b/docs/content/Data_formats.md
@@ -7,7 +7,7 @@ Data Formats for Ingestion
 Druid can ingest data in JSON, CSV, or custom delimited data such as TSV.
 While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest CSV or other delimited data.
 ## Formatting the Data
-The following are three samples of the data used in the [Wikipedia example](Tutorial:-Loading-Streaming-Data.html).
+The following are three samples of the data used in the [Wikipedia example](Tutorial%3A-Loading-Streaming-Data.html).
 
 _JSON_
 
@@ -133,4 +133,4 @@ The `parser` entry for the `dataSchema` should be changed to describe the tsv fo
 Be sure to change the `delimiter` to the appropriate delimiter for your data. Like CSV, you must specify the columns and which subset of the columns you want indexed.
 
 ### Multi-value dimensions
-Dimensions can have multiple values for TSV and CSV data. To specify the delimiter for a multi-value dimension, set the `listDelimiter`
\ No newline at end of file
+Dimensions can have multiple values for TSV and CSV data. To specify the delimiter for a multi-value dimension, set the `listDelimiter`
diff --git a/docs/content/Tutorial:-All-About-Queries.md b/docs/content/Tutorial:-All-About-Queries.md
index 80a0786fdd8..1eb2fe26487 100644
--- a/docs/content/Tutorial:-All-About-Queries.md
+++ b/docs/content/Tutorial:-All-About-Queries.md
@@ -37,7 +37,7 @@ java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath config/_commo
 Querying Your Data
 ------------------
 
-Make sure you've completed [Loading Your Data](Loading-Your-Data-Part-1.html) so we have some data to query. Having done that, it's time to query our data! For a complete specification of queries, see [Querying](Querying.html).
+Make sure you've completed [Loading Your Data](Tutorial%3A-Loading-Streaming-Data.html) so we have some data to query. Having done that, it's time to query our data! For a complete specification of queries, see [Querying](Querying.html).
 
 #### Construct a Query
 
 ```json
diff --git a/docs/content/Tutorial:-Loading-Batch-Data.md b/docs/content/Tutorial:-Loading-Batch-Data.md
index 0819f245bf5..cc82b4cece7 100644
--- a/docs/content/Tutorial:-Loading-Batch-Data.md
+++ b/docs/content/Tutorial:-Loading-Batch-Data.md
@@ -39,7 +39,7 @@ Metrics (things to aggregate over):
 
 Batch Ingestion
 ---------------
-For the purposes of this tutorial, we are going to use our very small and simple Wikipedia data set. This data can directly be ingested via other means as shown in the previous [tutorial](Tutorial%3A-Loading-Your-Data-Part-1.html).
+For the purposes of this tutorial, we are going to use our very small and simple Wikipedia data set. This data can directly be ingested via other means as shown in the previous [tutorial](Tutorial%3A-Loading-Streaming-Data.html).
 
 Our data is located at:
diff --git a/docs/content/Tutorial:-Loading-Streaming-Data.md b/docs/content/Tutorial:-Loading-Streaming-Data.md
index 33376677715..560279ad833 100644
--- a/docs/content/Tutorial:-Loading-Streaming-Data.md
+++ b/docs/content/Tutorial:-Loading-Streaming-Data.md
@@ -12,7 +12,7 @@ first two tutorials.
 
 ## About the Data
 We will be working with the same Wikipedia edits data schema [from out previous
-tutorials](http://localhost:4000/content/Tutorial:-A-First-Look-at-Druid.html#about-the-data).
+tutorials](Tutorial%3A-A-First-Look-at-Druid.html#about-the-data).
 
 ## Set Up