fix mysql references in tutorial docs

Author: Himanshu Gupta, 2015-07-30 22:05:05 -05:00
Parent: 99c240d1eb
Commit: 7ee509bcd0
4 changed files with 6 additions and 6 deletions


@@ -118,7 +118,7 @@ Druid has a couple of external dependencies for cluster operations.
 * **Metadata Storage** Druid relies on a metadata storage to store metadata about segments and configuration. Services that create segments write new entries to the metadata store
 and the coordinator nodes monitor the metadata store to know when new data needs to be loaded or old data needs to be dropped. The metadata store is not
-involved in the query path. MySQL and PostgreSQL are popular metadata stores.
+involved in the query path. MySQL and PostgreSQL are popular metadata stores for production, but Derby can be used for experimentation when you are running all druid nodes on a single machine.
 * **Deep Storage** Deep storage acts as a permanent backup of segments. Services that create segments upload segments to deep storage and historical nodes download
 segments from deep storage. Deep storage is not involved in the query path. S3 and HDFS are popular deep storages.
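
For readers wiring this up, the metadata store and deep storage described in the hunk above are selected through Druid's common runtime properties. The sketch below is illustrative rather than part of this commit: the druid.metadata.storage.* and druid.storage.* property names follow the conventions of the Druid docs of this era, and the host, database name, and credentials simply mirror the example spec later in this diff.

    # Metadata storage: "mysql" (or "postgresql") for production,
    # "derby" for single-machine experimentation.
    druid.metadata.storage.type=mysql
    druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
    druid.metadata.storage.connector.user=druid
    druid.metadata.storage.connector.password=diurd

    # Deep storage: S3 and HDFS are the popular production choices;
    # "local" is handy for testing on one machine.
    druid.storage.type=local
    druid.storage.storageDirectory=/tmp/druid/localStorage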


@@ -79,7 +79,7 @@ The spec\_file is a path to a file that contains JSON and an example looks like:
 },
 "metadataUpdateSpec" : {
 "type":"mysql",
-"connectURI" : "jdbc:metadata storage://localhost:3306/druid",
+"connectURI" : "jdbc:mysql://localhost:3306/druid",
 "password" : "diurd",
 "segmentTable" : "druid_segments",
 "user" : "druid"


@@ -59,9 +59,9 @@ The following events should exist in the file:
 #### Set Up a Druid Cluster
-To index the data, we are going to need an indexing service, a historical node, and a coordinator node.
-Note: If Zookeeper and MySQL aren't running, you'll have to start them again as described in [The Druid Cluster](../tutorials/tutorial-the-druid-cluster.html).
+To index the data, we are going to need the overlord, a historical node, and a coordinator node.
+Note: If Zookeeper isn't running, you'll have to start it again as described in [The Druid Cluster](../tutorials/tutorial-the-druid-cluster.html).
 To start the Indexing Service:
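
The context line that closes the hunk ("To start the Indexing Service:") is followed in the tutorial by a launch command that this diff does not show. For orientation only, a typical standalone-tutorial invocation of the overlord from this era looks like the sketch below; the JVM flags and the config/ classpath layout are assumptions, not part of this commit.

    # Start the overlord (indexing service) from the Druid distribution directory.
    java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
      -classpath config/_common:config/overlord:lib/* \
      io.druid.cli.Main server overlord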


@@ -18,8 +18,8 @@ tutorials](tutorial-a-first-look-at-druid.html#about-the-data).
 At this point, you should already have Druid downloaded and be comfortable
 running a Druid cluster locally. If not, [have a look at our second
-tutorial](../tutorials/tutorial-the-druid-cluster.html). If Zookeeper and MySQL are not
-running, you will have to start them as described in [The Druid
+tutorial](../tutorials/tutorial-the-druid-cluster.html). If Zookeeper is not
+running, you will have to start it as described in [The Druid
 Cluster](../tutorials/tutorial-the-druid-cluster.html).
 With real-world data, we recommend having a message bus such as [Apache