mirror of https://github.com/apache/druid.git
add more docs based on proposed wishlist
This commit is contained in:
parent b9eebfd73b
commit 42ac41d55e
@ -31,3 +31,20 @@ some [example specs](../configuration/production-cluster.html) for hardware for

The best resource to benchmark Druid is to follow the steps outlined in our [blog post](http://druid.io/blog/2014/03/17/benchmarking-druid.html) about the topic.
The code to reproduce the results in the blog post is open source. The blog post covers Druid queries on TPC-H data, but you should be able to adapt the configuration parameters to your own data set. The blog post uses an older version of Druid, but it is still largely representative of Druid's performance.
## Colocating Druid Processes for a POC
Not all Druid processes need to run on separate machines. You can set up a small cluster with colocated processes to load several gigabytes of data.
It is recommended you follow the [example production configuration](../configuration/production-cluster.html) for an actual production setup.
1. node1: [Coordinator](../design/coordinator.html) + metadata store + zookeeper
2. node2: [Broker](../design/broker.html) + [Historical](../design/historical.html)
3. node3: [Overlord](../design/indexing-service.html)
The coordination pieces (Coordinator, metadata store, and ZooKeeper) can be colocated on the same node. These processes do not require many resources, even for reasonably large clusters.
The query pieces (broker + historical) can be colocated. You can add more of these nodes if your data doesn't fit on a single machine. Make sure to allocate enough heap/off-heap size to both processes.
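As a rough sketch, the two colocated query JVMs might be launched with explicit heap and direct-memory limits. The CLI class matches Druid of this era, but the sizes, classpath placeholder, and flags below are illustrative assumptions, not tuned recommendations:

```sh
# Broker: heap for query merging, direct memory for processing buffers
# (values are examples -- size to your hardware)
java -Xmx4g -XX:MaxDirectMemorySize=4g -classpath <druid-jars> \
  io.druid.cli.Main server broker

# Historical: heap plus direct memory for processing buffers; segments are
# memory-mapped separately, so leave free RAM for the page cache
java -Xmx4g -XX:MaxDirectMemorySize=8g -classpath <druid-jars> \
  io.druid.cli.Main server historical
```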
For small ingest workloads, you can run the overlord in local mode to load your data.
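In local mode the Overlord runs indexing tasks itself instead of delegating them to other nodes, so you can post a task spec directly to its indexing endpoint. The host, port, and file name below are assumptions for the three-node layout above:

```sh
# Submit an indexing task JSON to the Overlord on node3
# (default port 8090 and the task file name are assumptions)
curl -X POST -H 'Content-Type: application/json' \
  -d @my_index_task.json \
  http://node3:8090/druid/indexer/v1/task
```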
@ -13,6 +13,12 @@ The Hadoop Index Task takes this parameter as part of the task JSON and the sta
If you are still having problems, include all relevant Hadoop JARs at the beginning of the classpath of your indexing or historical nodes.
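One way to do this, sketched with example paths (the JAR locations shown are assumptions about a typical install, not part of the docs):

```sh
# Prepend the cluster's Hadoop client jars so they win classpath resolution
# over any Hadoop classes bundled with Druid (paths are examples)
java -classpath /usr/lib/hadoop/client/*:/opt/druid/lib/* \
  io.druid.cli.Main server historical
```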
Working with CDH
----------------
Members of the community have reported dependency conflicts between the version of Jackson used in CDH and the version used in Druid. Currently, the best workaround is to edit the Jackson dependency versions in Druid's pom.xml to match those in your Hadoop distribution and recompile Druid.
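A sketch of such an edit, assuming a Maven `<dependencyManagement>` override; the Jackson version shown is a placeholder, so check the JARs actually shipped with your CDH release:

```xml
<!-- Pin jackson-databind to the version bundled with your Hadoop
     distribution. 2.2.3 is a placeholder; verify against your cluster. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.2.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```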
For more about building Druid, please see [Building Druid](../development/build.html).
Working with Hadoop 1.x and older
---------------------------------
@ -0,0 +1,7 @@
---
layout: doc_page
---
# SQL Support for Druid
Druid does not currently support full SQL. SQL libraries built on top of Druid have been contributed by the community and can be found on our [libraries](../development/libraries.html) page.

The community SQL libraries are not yet as expressive as Druid's native query language.
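For a sense of the native language's shape, a daily-sum query (roughly `SELECT SUM(count) ... GROUP BY day` in SQL terms) is written as JSON; the datasource, interval, and column names below are examples:

```json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "granularity": "day",
  "intervals": ["2014-01-01/2014-01-08"],
  "aggregations": [
    { "type": "longSum", "name": "edits", "fieldName": "count" }
  ]
}
```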
@ -32,6 +32,7 @@ h2. Querying
** "Post Aggregations":../querying/post-aggregations.html
** "Granularities":../querying/granularities.html
** "DimensionSpecs":../querying/dimensionspecs.html
* "SQL":../querying/sql.html

h2. Design
* "Overview":../design/design.html