---
layout: doc_page
---
# Setting Up a Druid Cluster
A Druid cluster consists of various node types that need to be set up depending on your use case. See our [Design](Design.html) docs for a description of the different node types.
Minimum Physical Layout: Absolute Minimum
-----------------------------------------
As a special case, the absolute minimum setup is one of the standalone real-time ingestion and querying examples (see [Examples](Examples.html)), which can easily run on one machine with one core and 1GB of RAM. This layout can be set up to try some basic queries with Druid.
Minimum Physical Layout: Experimental Testing with 4GB of RAM
-------------------------------------------------------------
This layout can be used to load some data from deep storage onto a Druid Historical node for the first time. A minimal physical layout for a 1- or 2-core machine with 4GB of RAM is:
1. node1: [Coordinator](Coordinator.html) + metadata service + zookeeper + [Historical](Historical.html)
2. transient nodes: [Indexing Service](Indexing-Service.html)
This setup is only reasonable to prove that a configuration works. It would not be worthwhile to use this layout for performance measurement.
Comfortable Physical Layout: Pilot Project with Multiple Machines
-----------------------------------------------------------------
The machine size "flavors" use AWS/EC2 terminology for descriptive purposes only; they are not meant to imply that AWS/EC2 is required or recommended. Another cloud provider or your own hardware can also work.
A minimal physical layout, not constrained by cores, that demonstrates parallel querying and real-time ingestion, using AWS EC2 "small"/m1.small instances (one core, 1.7GB of RAM) or larger, is:
1. node1: [Coordinator](Coordinator.html) (m1.small)
2. node2: metadata service (m1.small)
3. node3: zookeeper (m1.small)
4. node4: [Broker](Broker.html) (m1.small or m1.medium or m1.large)
5. node5: [Historical](Historical.html) (m1.small or m1.medium or m1.large)
6. node6: [Historical](Historical.html) (m1.small or m1.medium or m1.large)
7. node7: [Realtime](Realtime.html) (m1.small or m1.medium or m1.large)
8. transient nodes: [Indexing Service](Indexing-Service.html)
This layout naturally lends itself to adding more RAM and cores to Historical nodes, and to adding many more Historical nodes. Depending on the actual load, the Coordinator, metadata service, and Zookeeper might need to use larger machines.
High Availability Physical Layout
---------------------------------
The machine size "flavors" use AWS/EC2 terminology for descriptive purposes only; they are not meant to imply that AWS/EC2 is required or recommended. Another cloud provider or your own hardware can also work.
An HA layout allows full rolling restarts and heavy volume:
1. node1: [Coordinator](Coordinator.html) (m1.small or m1.medium or m1.large)
2. node2: [Coordinator](Coordinator.html) (m1.small or m1.medium or m1.large) (backup)
3. node3: metadata service (c1.medium or m1.large)
4. node4: metadata service (c1.medium or m1.large) (backup)
5. node5: zookeeper (c1.medium)
6. node6: zookeeper (c1.medium)
7. node7: zookeeper (c1.medium)
8. node8: [Broker](Broker.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge)
9. node9: [Broker](Broker.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge) (backup)
10. node10: [Historical](Historical.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge)
11. node11: [Historical](Historical.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge)
12. node12: [Realtime](Realtime.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge)
13. transient nodes: [Indexing Service](Indexing-Service.html)
Sizing for Cores and RAM
------------------------
The Historical and Broker nodes will use as many cores as are available, depending on usage, so it is best to keep these on dedicated machines. The upper limit of effectively utilized cores is not well characterized yet and would depend on the types of queries, query load, and the schema. Historical daemons should have a heap size of at least 1GB per core for normal usage, but could be squeezed into a smaller heap for testing. Since in-memory caching is essential for good performance, even more RAM is better. Broker nodes will use RAM for caching, so they do more than just route queries.
The effective utilization of cores by Zookeeper, MySQL, and Coordinator nodes is likely to be between 1 and 2 for each process/daemon, so these could potentially share a machine with lots of cores. These daemons work with a heap size between 500MB and 1GB.
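
As a rough illustration of the 1GB-of-heap-per-core guideline above, a Historical node on an 8-core machine might be launched along the following lines; the heap sizes and jar path here are assumptions to adapt to your own hardware and build layout:

```
# Sketch only: ~1GB of heap per core on an 8-core Historical machine.
# Adjust -Xms/-Xmx and the jar path to match your hardware and build.
java -server -Xms8g -Xmx8g \
  -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
  -cp services/target/druid-services-*-selfcontained.jar \
  io.druid.cli.Main server historical
```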
Storage
-------
Indexed segments should be kept in a permanent store accessible by all nodes, such as AWS S3, HDFS, or equivalent. Refer to [Deep-Storage](deep-storage.html) for more details on supported storage types.
Local disk ("ephemeral" on AWS EC2) for caching is recommended over network mounted storage (example of mounted: AWS EBS, Elastic Block Store) in order to avoid network delays during times of heavy usage. If your data center is suitably provisioned for networked storage, perhaps with separate LAN/NICs just for storage, then mounted might work fine.
Setup
-----
Setting up a cluster is essentially just firing up all of the nodes you want with the proper [configuration](Configuration.html). One thing to be aware of is that there are a few properties in the configuration that potentially need to be set individually for each process:
```
druid.server.type=historical|realtime
druid.host=someHostOrIPaddrWithPort
druid.port=8080
```
`druid.server.type` should be set to "historical" for your Historical nodes and "realtime" for your Realtime nodes. The Coordinator will only assign segments to a "historical" node, and the Broker has some intelligence around its ability to cache results when talking to a realtime node. This does not need to be set for the Coordinator or the Broker.
`druid.host` should be set to the hostname that can be used to talk to the given server process. Basically, someone should be able to send a request to `http://${druid.host}:${druid.port}/` and actually talk to the process.
`druid.port` should be set to the port that the server should listen on.
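
Putting these together, the per-process settings for a single Historical node might look like the following sketch; the hostname and port are placeholders for your environment:

```
# Example per-process settings for one Historical node.
# Hostname and port are placeholders; set them per machine.
druid.server.type=historical
druid.host=hist1.example.com
druid.port=8080
```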
Build/Run
---------
The simplest way to build and run from the repository is to run `mvn package` from the base directory, then take `druid-services/target/druid-services-*-selfcontained.jar` and push that around to your machines. The jar does not need to be expanded, and since it contains the main() methods for each kind of service, it is *not* invoked with `java -jar`. It can be run from a normal java command line by including it on the classpath and then giving it the main class that you want to run. For example, one instance of the Historical node/service can be started like this:
```
java -Duser.timezone=UTC -Dfile.encoding=UTF-8 -cp services/target/druid-services-*-selfcontained.jar io.druid.cli.Main server historical
```
All Druid server nodes can be started with:
```
io.druid.cli.Main server <node_type>
```
The table below shows the program arguments for the different node types.
|service|program arguments|
|-------|----------------|
|Realtime|realtime|
|Coordinator|coordinator|
|Broker|broker|
|Historical|historical|
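
For example, a Broker can be started from the same self-contained jar by passing its program argument from the table above:

```
# Start a Broker node using the same self-contained jar and classpath as above.
java -Duser.timezone=UTC -Dfile.encoding=UTF-8 -cp services/target/druid-services-*-selfcontained.jar io.druid.cli.Main server broker
```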