added advice about restarting zk and mysql if not already running

Igal Levy 2014-02-19 15:28:48 -08:00
parent c57cf6ab4d
commit 75ae4d434d
3 changed files with 6 additions and 0 deletions


@@ -14,6 +14,8 @@ Before we start digging into how to query Druid, make sure you've gone through t
Let's start up a simple Druid cluster so we can query all the things.
Note: If Zookeeper and MySQL aren't running, you'll have to start them again as described in [The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html).
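The restart commands from that tutorial look roughly like the following. This is a sketch, not the canonical instructions: the Zookeeper directory name and the MySQL startup command depend on which version you unpacked and how MySQL was installed on your platform.

```shell
# From wherever you unpacked Zookeeper (directory name is an assumption):
cd zookeeper-3.4.5
./bin/zkServer.sh start

# Start MySQL; the right command varies by platform, e.g. on Linux:
sudo service mysqld start
# or on OS X with a Homebrew-installed MySQL:
mysql.server start
```

Both services need to be up before any Druid nodes are started, since the nodes register with Zookeeper and the coordinator reads segment metadata from MySQL.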
To start a Coordinator node:
```bash


@@ -66,6 +66,8 @@ There are five data points spread across the day of 2013-08-31. Talk about big d
In order to ingest and query this data, we are going to need to run a historical node, a coordinator node, and an indexing service to run the batch ingestion.
Note: If Zookeeper and MySQL aren't running, you'll have to start them again as described in [The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html).
#### Starting a Local Indexing Service
The simplest way to start an indexing service is to run an [overlord](Indexing-Service.html) node in local mode. You can do so by issuing:
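The invocation follows the same pattern as the other node-startup commands in these tutorials: run `io.druid.cli.Main` with the node-type config directory on the classpath. A sketch, assuming the standard layout of the Druid tarball with `lib/` and `config/overlord` directories, run from the Druid root:

```shell
# Start an overlord node in local mode; heap size and config paths
# are assumptions based on the other tutorial startup commands.
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
  -classpath lib/*:config/overlord \
  io.druid.cli.Main server overlord
```

In local mode the overlord runs indexing tasks itself rather than farming them out to middle managers, which is enough for the small batch-ingestion jobs in this tutorial.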


@@ -231,6 +231,8 @@ The following events should exist in the file:
To index the data, we are going to need an indexing service, a historical node, and a coordinator node.
Note: If Zookeeper and MySQL aren't running, you'll have to start them again as described in [The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html).
To start the Indexing Service:
```bash