Merge pull request #413 from metamx/igalDruid

updated info re deep storage
fjy 2014-03-04 15:41:50 -07:00
commit 3691d40240
2 changed files with 2 additions and 2 deletions


@@ -70,7 +70,7 @@ The effective utilization of cores by Zookeeper, MySQL, and Coordinator nodes is
Storage
-------
-Indexed segments should be kept in a permanent store accessible by all nodes like AWS S3 or HDFS or equivalent. Refer [Deep-Storage](deep-storage.html) for more details on supported storage types.
+Indexed segments should be kept in a permanent store accessible by all nodes like AWS S3 or HDFS or equivalent. Refer to [Deep-Storage](deep-storage.html) for more details on supported storage types.
Local disk ("ephemeral" on AWS EC2) is recommended for caching over network-mounted storage (e.g., AWS EBS, Elastic Block Store) in order to avoid network delays during times of heavy usage. If your data center is suitably provisioned for networked storage, perhaps with separate LAN/NICs dedicated to storage, then network-mounted storage might work fine.
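
As a rough illustration of the setup this paragraph describes, the sketch below points a node at S3 deep storage and a local-disk segment cache. The property names follow the Druid runtime.properties documented around this era and may differ between versions; the bucket name, key prefix, path, and size are placeholders.

```properties
# S3 deep storage: the shared, permanent segment store all nodes can reach
druid.storage.type=s3
druid.s3.accessKey=<your access key>
druid.s3.secretKey=<your secret key>
# Placeholder bucket name and key prefix
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments

# Local-disk segment cache (prefer local/"ephemeral" disk over network-mounted volumes)
druid.segmentCache.locations=[{"path": "/mnt/persistent/druid/segment-cache", "maxSize": 10000000000}]
```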


@@ -4,7 +4,7 @@ layout: doc_page
# Deep Storage
Deep storage is where segments are stored. It is a storage mechanism that Druid itself does not provide. This deep storage infrastructure defines the level of durability of your data; as long as Druid nodes can see this storage infrastructure and can get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose. If segments disappear from this storage layer, then you will lose whatever data those segments represented.
-The currently supported types of deep storage follow.
+The currently supported types of deep storage follow. Other deep-storage options, such as [Cassandra](http://planetcassandra.org/blog/post/cassandra-as-a-deep-storage-mechanism-for-druid-real-time-analytics-engine/), have been developed by members of the community.
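
For orientation, a deployment selects among these options through its runtime properties, roughly as sketched below. Property names follow the Druid docs of this era and may vary by version; directories and bucket names are placeholders.

```properties
# Local filesystem (single-node or proof-of-concept setups only)
druid.storage.type=local
druid.storage.storageDirectory=/tmp/druid/localStorage

# HDFS
# druid.storage.type=hdfs
# druid.storage.storageDirectory=hdfs://namenode:9000/druid/segments

# S3-compatible
# druid.storage.type=s3
# druid.storage.bucket=my-druid-bucket
# druid.storage.baseKey=druid/segments
```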
## S3-compatible