mirror of https://github.com/apache/druid.git

[docs] fix broken links

parent 700dc6fbc0
commit 66a4daa363
@@ -3,7 +3,7 @@ layout: doc_page
 ---
 
 # Booting a Druid Cluster
-[Loading Your Data](Tutorial%3A-Loading-Your-Data-Part-2.html) and [All About Queries](Tutorial%3A-All-About-Queries.html) contain recipes to boot a small druid cluster on localhost. However, when it's time to run a more realistic setup—for production or just for testing production—you'll want to find a way to start the cluster on multiple hosts. This document describes two different ways to do this: manually, or as a cloud service via Apache Whirr.
+[Loading Your Data](Tutorial%3A-Loading-Batch-Data.html) and [All About Queries](Tutorial%3A-All-About-Queries.html) contain recipes to boot a small druid cluster on localhost. However, when it's time to run a more realistic setup—for production or just for testing production—you'll want to find a way to start the cluster on multiple hosts. This document describes two different ways to do this: manually, or as a cloud service via Apache Whirr.
 
 ## Manually Booting a Druid Cluster
 You can provision individual servers, loading Druid onto each machine (or building it) and setting the required configuration for each type of node. You'll also have to set up required external dependencies. Then you'll have to start each node. This process is outlined in [Tutorial: The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html).
@@ -7,7 +7,7 @@ Data Formats for Ingestion
 Druid can ingest data in JSON, CSV, or custom delimited data such as TSV. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest CSV or other delimited data.
 
 ## Formatting the Data
-The following are three samples of the data used in the [Wikipedia example](Tutorial:-Loading-Streaming-Data.html).
+The following are three samples of the data used in the [Wikipedia example](Tutorial%3A-Loading-Streaming-Data.html).
 
 _JSON_
 
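For context on the hunk above: the docs page it touches goes on to show the Wikipedia edits data in three formats. A JSON record in that shared Wikipedia edits schema looks roughly like the sketch below; this is a sketch, not the page's actual sample, so treat the values (and any field not named in these tutorials) as illustrative.

```json
{
  "timestamp": "2013-08-31T01:02:33Z",
  "page": "Gypsy Danger",
  "language": "en",
  "user": "nuclear",
  "namespace": "article",
  "city": "San Francisco",
  "added": 57,
  "deleted": 200,
  "delta": -143
}
```

The CSV and TSV samples the page refers to carry the same fields, flattened into one delimited row per edit.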
@@ -37,7 +37,7 @@ java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath config/_commo
 Querying Your Data
 ------------------
 
-Make sure you've completed [Loading Your Data](Loading-Your-Data-Part-1.html) so we have some data to query. Having done that, it's time to query our data! For a complete specification of queries, see [Querying](Querying.html).
+Make sure you've completed [Loading Your Data](Tutorial%3A-Loading-Streaming-Data.html) so we have some data to query. Having done that, it's time to query our data! For a complete specification of queries, see [Querying](Querying.html).
 
 #### Construct a Query
 ```json
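The `#### Construct a Query` heading at the end of this hunk opens a JSON query body. A minimal sketch of such a body, assuming a timeseries query over a `wikipedia` datasource (the datasource name, interval, and aggregators are illustrative assumptions, not the tutorial's exact query):

```json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "granularity": "all",
  "intervals": ["2013-08-31/2013-09-01"],
  "aggregations": [
    { "type": "count", "name": "rows" },
    { "type": "longSum", "fieldName": "added", "name": "total_added" }
  ]
}
```

Posted to a broker node, a body like this returns one result row per granularity bucket, here a single bucket covering the whole interval.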
@@ -39,7 +39,7 @@ Metrics (things to aggregate over):
 Batch Ingestion
 ---------------
 
-For the purposes of this tutorial, we are going to use our very small and simple Wikipedia data set. This data can directly be ingested via other means as shown in the previous [tutorial](Tutorial%3A-Loading-Your-Data-Part-1.html).
+For the purposes of this tutorial, we are going to use our very small and simple Wikipedia data set. This data can directly be ingested via other means as shown in the previous [tutorial](Tutorial%3A-Loading-Streaming-Data.html).
 
 Our data is located at:
 
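For orientation on the batch path this hunk touches: Druid of this era accepted batch loads as a JSON task spec submitted to the indexing service. The sketch below shows the general shape of an `index` task under stated assumptions; the datasource, interval, aggregators, and local-firehose fields are illustrative stand-ins rather than the tutorial's actual spec.

```json
{
  "type": "index",
  "dataSource": "wikipedia",
  "granularitySpec": {
    "type": "uniform",
    "gran": "DAY",
    "intervals": ["2013-08-31/2013-09-01"]
  },
  "aggregators": [
    { "type": "count", "name": "count" },
    { "type": "longSum", "fieldName": "added", "name": "added" }
  ],
  "firehose": {
    "type": "local",
    "baseDir": "examples/indexing",
    "filter": "wikipedia_data.json"
  }
}
```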
@@ -12,7 +12,7 @@ first two tutorials.
 ## About the Data
 
 We will be working with the same Wikipedia edits data schema [from out previous
-tutorials](http://localhost:4000/content/Tutorial:-A-First-Look-at-Druid.html#about-the-data).
+tutorials](Tutorial%3A-A-First-Look-at-Druid.html#about-the-data).
 
 ## Set Up
 