mirror of https://github.com/apache/druid.git
Merge pull request #1899 from itsmee/docs-historical-improvements
Docs improved: more details about caching and memory for segments on historicals
commit 0d85774a27
@@ -24,7 +24,7 @@ for both broker and historical nodes, when defined in the common properties file

#### Local Cache

-A simple in-memory LRU cache.
+A simple in-memory LRU cache. The local cache resides in JVM heap memory, so if you enable it, make sure you increase the heap size accordingly.

|Property|Description|Default|
|--------|-----------|-------|
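
To make the heap impact of the local cache concrete, here is a minimal sketch of a historical's runtime.properties with the local cache enabled. The property names follow Druid's caching configuration, but the cache size and the suggestion to grow -Xmx are illustrative assumptions, not tuned recommendations.

```
# Hypothetical historical runtime.properties fragment (sizes are illustrative)
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true

# Local cache: cached per-segment results live on the JVM heap
druid.cache.type=local
# ~1GB of heap set aside for cached results; grow -Xmx by roughly this amount
druid.cache.sizeInBytes=1000000000
```
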

@@ -10,7 +10,7 @@ The size of the JVM heap really depends on the type of Druid node you are running

[Broker nodes](../design/broker.html) use the JVM heap mainly to merge results from historicals and real-times. Brokers also use off-heap memory and processing threads for groupBy queries. We recommend 20G-30G of heap here.

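As a rough illustration of the 20G-30G guidance, a broker's JVM flags might look like the sketch below; the exact numbers are assumptions for a mid-sized cluster, not prescriptions.

```
# Hypothetical broker JVM flags (illustrative)
-server
-Xms24g
-Xmx24g
# off-heap room for groupBy/processing buffers; size to your druid.processing.* settings
-XX:MaxDirectMemorySize=12g
```
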
-[Historical nodes](../design/historical.html) use off-heap memory to store intermediate results, and by default, all segments are memory mapped before they can be queried. Typically, the more memory is available on a historical node, the more segments can be served without the possibility of data being paged on to disk. On historicals, the JVM heap is used for [GroupBy queries](../querying/groupbyquery.html), some data structures used for intermediate computation, and general processing. One way to calculate how much space there is for segments is: memory_for_segments = total_memory - heap - direct_memory - jvm_overhead.
+[Historical nodes](../design/historical.html) use off-heap memory to store intermediate results, and by default, all segments are memory mapped before they can be queried. Typically, the more memory available on a historical node, the more segments can be served without the risk of data being paged out to disk. On historicals, the JVM heap is used for [GroupBy queries](../querying/groupbyquery.html), some data structures used for intermediate computation, and general processing. One way to calculate how much space there is for segments is: memory_for_segments = total_memory - heap - direct_memory - jvm_overhead. Note that total_memory here refers to the memory available to the cgroup (when running on Linux), which in the default case is all of the system memory.

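To make the memory_for_segments formula concrete, here is a small worked sketch for a hypothetical historical with 64GB of RAM; every number below is an assumption chosen only to illustrate the arithmetic.

```
# memory_for_segments = total_memory - heap - direct_memory - jvm_overhead
# Assumed: total_memory  = 64GB  (whole machine, i.e. the default cgroup limit)
#          heap          =  8GB  (-Xmx8g)
#          direct_memory = 12GB  (-XX:MaxDirectMemorySize=12g)
#          jvm_overhead  ~  1GB
# memory_for_segments ~ 64 - 8 - 12 - 1 = 43GB left for the OS page cache
# to keep memory-mapped segments resident
-Xmx8g
-XX:MaxDirectMemorySize=12g
```
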
We recommend 250MB * (processing.numThreads) for the heap.
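
Applying that rule to a hypothetical 32-core historical (the thread count and resulting heap are assumptions for illustration only):

```
# Illustrative only: leave one core for the JVM/OS
druid.processing.numThreads=31
# Heap by the rule above: 250MB * 31 ~ 7.75GB, so something like -Xmx8g
```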