Merge pull request #960 from metamx/new-config

Rewrite config docs for 0.7.x

commit 335fbebb94
@@ -8,34 +8,93 @@ For general Broker Node information, see [here](Broker.html).

Runtime Configuration
---------------------

The broker module uses several of the default modules in [Configuration](Configuration.html) and has the following set of configurations as well:
The broker node uses several of the global configs in [Configuration](Configuration.html) and has the following set of configurations as well:

### Node Configs

|Property|Description|Default|
|--------|-----------|-------|
|`druid.host`|The host for the current node. This is used to advertise the current process's location as reachable from another node and should generally be specified such that `http://${druid.host}/` could actually talk to this process.|none|
|`druid.port`|This is the port to actually listen on; unless port mapping is used, this will be the same port as is on `druid.host`.|none|
|`druid.service`|The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services.|none|

### Query Configs

#### Query Prioritization

|Property|Possible Values|Description|Default|
|--------|---------------|-----------|-------|
|`druid.broker.balancer.type`|`random`, `connectionCount`|Determines how the broker balances connections to historical nodes. `random` chooses randomly; `connectionCount` picks the node with the fewest active connections.|`random`|
|`druid.broker.select.tier`|`highestPriority`, `lowestPriority`, `custom`|If segments are cross-replicated across tiers in a cluster, you can tell the broker to prefer to select segments in a tier with a certain priority.|`highestPriority`|
|`druid.broker.select.tier.custom.priorities`|An array of integer priorities.|Select servers in tiers with a custom priority list.|None|
|`druid.broker.cache.type`|`local`, `memcached`|The type of cache to use for queries.|`local`|
|`druid.broker.cache.unCacheable`|All druid query types|All query types to not cache.|["groupBy", "select"]|
|`druid.broker.cache.numBackgroundThreads`|Non-negative integer|Number of background threads in the thread pool to use for eventual-consistency caching results if caching is used. It is recommended to set this value greater than or equal to the number of processing threads. To force caching to execute in the same thread as the query (query results are blocked on caching completion), use a thread count of 0. Setups that use a Druid backend in programmatic settings (sub-second re-querying) should consider setting this to 0 to prevent eventual consistency from hurting overall performance. If this is you, please experiment to find out what setting works best.|`0`|
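For example, a broker that should prefer servers in custom tiers might set something like the following in its `runtime.properties`. This is an illustrative sketch; the priority values and the balancer choice are placeholders, not recommendations:

```
druid.broker.balancer.type=connectionCount
druid.broker.select.tier=custom
druid.broker.select.tier.custom.priorities=[-1, 0, 1, 100]
```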
#### Concurrent Requests

Druid uses Jetty to serve HTTP requests.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.server.http.numThreads`|Number of threads for HTTP requests.|10|
|`druid.server.http.maxIdleTime`|The Jetty max idle time for a connection.|PT5m|
|`druid.broker.http.numConnections`|Size of connection pool for the Broker to connect to historical and real-time nodes. If there are more queries than this number that all need to speak to the same node, then they will queue up.|5|
|`druid.broker.http.readTimeout`|The timeout for data reads.|PT15M|

#### Processing

The broker only uses processing configs for nested groupBy queries.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.processing.buffer.sizeBytes`|This specifies a buffer size for the storage of intermediate results. The computation engine in both the Historical and Realtime nodes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.|1073741824 (1GB)|
|`druid.processing.formatString`|Realtime and historical nodes use this format string to name their processing threads.|processing-%s|
|`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value `1`.|Number of cores - 1 (or 1)|
|`druid.processing.columnCache.sizeBytes`|Maximum size in bytes for the dimension value lookup cache. Any value greater than `0` enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating values). Enabling it may also require additional garbage collection tuning to avoid long GC pauses.|`0` (disabled)|

#### General Query Configuration

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.chunkPeriod`|Long-interval queries (of any type) may be broken into shorter interval queries, reducing the impact on resources. Use ISO 8601 periods. For example, if this property is set to `P1M` (one month), then a query covering a year would be broken into 12 smaller queries.|0 (off)|

##### GroupBy Query Config

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.groupBy.singleThreaded`|Run single threaded groupBy queries.|false|
|`druid.query.groupBy.maxIntermediateRows`|Maximum number of intermediate rows.|50000|
|`druid.query.groupBy.maxResults`|Maximum number of results.|500000|

##### Search Query Config

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.search.maxSearchLimit`|Maximum number of search results to return.|1000|

### Caching

You can optionally configure caching to be enabled only on the broker by setting the caching configs here.

|Property|Possible Values|Description|Default|
|--------|---------------|-----------|-------|
|`druid.broker.cache.useCache`|`true`, `false`|Enable the cache on the broker.|false|
|`druid.broker.cache.populateCache`|`true`, `false`|Populate the cache on the broker.|false|
|`druid.cache.type`|`local`, `memcached`|The type of cache to use for queries.|`local`|
|`druid.cache.unCacheable`|All druid query types|All query types to not cache.|["groupBy", "select"]|

#### Local Cache

|Property|Description|Default|
|--------|-----------|-------|
|`druid.broker.cache.sizeInBytes`|Maximum cache size in bytes. Zero disables caching.|0|
|`druid.broker.cache.initialSize`|Initial size of the hashtable backing the cache.|500000|
|`druid.broker.cache.logEvictionCount`|If non-zero, log cache eviction every `logEvictionCount` items.|0|
|`druid.broker.cache.numBackgroundThreads`|Number of background threads in the thread pool to use for eventual-consistency caching results if caching is used. It is recommended to set this value greater than or equal to the number of processing threads. To force caching to execute in the same thread as the query (query results are blocked on caching completion), use a thread count of 0. Setups that use a Druid backend in programmatic settings (sub-second re-querying) should consider setting this to 0 to prevent eventual consistency from hurting overall performance. If this is you, please experiment to find out what setting works best.|`0`|
|`druid.cache.sizeInBytes`|Maximum cache size in bytes. Zero disables caching.|0|
|`druid.cache.initialSize`|Initial size of the hashtable backing the cache.|500000|
|`druid.cache.logEvictionCount`|If non-zero, log cache eviction every `logEvictionCount` items.|0|

#### Memcache

|Property|Description|Default|
|--------|-----------|-------|
|`druid.broker.cache.expiration`|Memcached [expiration time](https://code.google.com/p/memcached/wiki/NewCommands#Standard_Protocol).|2592000 (30 days)|
|`druid.broker.cache.timeout`|Maximum time in milliseconds to wait for a response from Memcached.|500|
|`druid.broker.cache.hosts`|Comma separated list of Memcached hosts `<host:port>`.|none|
|`druid.broker.cache.maxObjectSize`|Maximum object size in bytes for a Memcached object.|52428800 (50 MB)|
|`druid.broker.cache.memcachedPrefix`|Key prefix for all keys in Memcached.|druid|
|`druid.cache.expiration`|Memcached [expiration time](https://code.google.com/p/memcached/wiki/NewCommands#Standard_Protocol).|2592000 (30 days)|
|`druid.cache.timeout`|Maximum time in milliseconds to wait for a response from Memcached.|500|
|`druid.cache.hosts`|Comma separated list of Memcached hosts `<host:port>`.|none|
|`druid.cache.maxObjectSize`|Maximum object size in bytes for a Memcached object.|52428800 (50 MB)|
|`druid.cache.memcachedPrefix`|Key prefix for all keys in Memcached.|druid|
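Putting these together, a minimal broker `runtime.properties` might look like the sketch below. The host name, port, and cache size are placeholder values, not recommendations:

```
druid.host=broker.example.com
druid.port=8082
druid.service=druid/broker

druid.broker.http.numConnections=5
druid.server.http.numThreads=10

druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=104857600
```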
@@ -4,7 +4,7 @@ layout: doc_page

# Configuring Druid

This describes the basic server configuration that is loaded by all Druid server processes; the same file is loaded by all. See also the JSON "specFile" descriptions in [Realtime](Realtime.html) and [Batch-ingestion](Batch-ingestion.html).
This describes the common configuration shared by all Druid nodes. These configurations can be defined in the `common.runtime.properties` file.

## JVM Configuration Best Practices

@@ -14,51 +14,17 @@ There are three JVM parameters that we set on all of our processes:
2. `-Dfile.encoding=UTF-8` This is similar to the timezone setting; we test assuming UTF-8. Local encodings might work, but they also might result in weird and interesting bugs.
3. `-Djava.io.tmpdir=<a path>` Various parts of the system that interact with the file system do it via temporary files, and these files can get somewhat large. Many production systems are set up to have small (but fast) `/tmp` directories, which can be problematic with Druid, so we recommend pointing the JVM's tmp directory to something with a little more meat.
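For illustration, the encoding and tmp-dir flags above might appear among a Druid process's JVM arguments roughly as follows; the heap size and tmp path are placeholders:

```
-Xmx4g
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/mnt/druid/tmp
```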
## Modules

### Extensions

As of Druid v0.6, most core Druid functionality has been compartmentalized into modules. There are a set of default modules that may apply to any node type, and there are specific modules for the different node types. Default modules are __lazily instantiated__. Each module has its own set of configuration.

This page describes the configuration of the default modules. Node-specific configuration is discussed on each node's respective page. In addition, you can add custom modules to [extend Druid](Modules.html).

Configuration of the various modules is done via Java properties. These can either be provided as `-D` system properties on the java command line or they can be passed in via a file called `runtime.properties` that exists on the classpath.

Note: as a future item, we'd like to consolidate all of the various configuration into a yaml/JSON based configuration file.

### Emitter Module

The Druid servers emit various metrics and alerts via something we call an Emitter. There are two emitter implementations included with the code, one that just logs to log4j ("logging", which is used by default if no emitter is specified) and one that does POSTs of JSON events to a server ("http"). The properties for using the logging emitter are described below.
Many of Druid's external dependencies can be plugged in as modules. Extensions can be provided using the following configs:

|Property|Description|Default|
|--------|-----------|-------|
|`druid.emitter`|Setting this value to either "logging" or "http" will instantiate one of the emitter modules.|logging|
|`druid.extensions.remoteRepositories`|If this is not set to '[]', Druid will try to download extensions at the specified remote repository.|["http://repo1.maven.org/maven2/","https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local"]|
|`druid.extensions.localRepository`|The local maven directory where extensions are installed. If this is set, remoteRepositories is not required.|[]|
|`druid.extensions.coordinates`|The list of extensions to include.|[]|
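For example, loading a couple of extensions from the remote repositories might look like the sketch below; the coordinates shown are illustrative and should be checked against the extensions you actually need:

```
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions", "io.druid.extensions:mysql-metadata-storage"]
```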
#### Logging Emitter Module

|Property|Description|Default|
|--------|-----------|-------|
|`druid.emitter.logging.loggerClass`|Choices: HttpPostEmitter, LoggingEmitter, NoopServiceEmitter, ServiceEmitter. The class used for logging.|LoggingEmitter|
|`druid.emitter.logging.logLevel`|Choices: debug, info, warn, error. The log level at which messages are logged.|info|

#### Http Emitter Module

|Property|Description|Default|
|--------|-----------|-------|
|`druid.emitter.http.timeOut`|The timeout for data reads.|PT5M|
|`druid.emitter.http.flushMillis`|How often the internal message buffer is flushed (data is sent).|60000|
|`druid.emitter.http.flushCount`|How many messages can the internal message buffer hold before flushing (sending).|500|
|`druid.emitter.http.recipientBaseUrl`|The base URL to emit messages to. Druid will POST JSON to be consumed at the HTTP endpoint specified by this property.|none|

### Http Client Module

This is the HTTP client used by [Broker](Broker.html) nodes.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.broker.http.numConnections`|Size of connection pool for the Broker to connect to historical and real-time nodes. If there are more queries than this number that all need to speak to the same node, then they will queue up.|5|
|`druid.broker.http.readTimeout`|The timeout for data reads.|PT15M|

### Curator Module
### Zookeeper

Druid uses [Curator](http://curator.incubator.apache.org/) for all [Zookeeper](http://zookeeper.apache.org/) interactions.
@@ -68,110 +34,36 @@ Druid uses [Curator](http://curator.incubator.apache.org/) for all [Zookeeper](h
|`druid.zk.service.sessionTimeoutMs`|ZooKeeper session timeout, in milliseconds.|30000|
|`druid.curator.compress`|Boolean flag for whether or not created Znodes should be compressed.|false|

### Announcer Module

The announcer module is used to announce and unannounce Znodes in ZooKeeper (using Curator).

#### ZooKeeper Paths

See [ZooKeeper](ZooKeeper.html).

#### Data Segment Announcer

Data segment announcers are used to announce segments.
We recommend just setting the base ZK path, but all ZK paths that Druid uses can be overwritten.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.announcer.type`|Choices: legacy or batch. The type of data segment announcer to use.|batch|
|`druid.zk.paths.base`|Base Zookeeper path.|druid|
|`druid.zk.paths.propertiesPath`|Zookeeper properties path.|druid/properties|
|`druid.zk.paths.announcementsPath`|Druid node announcement path.|druid/announcements|
|`druid.zk.paths.liveSegmentsPath`|Current path for where Druid nodes announce their segments.|druid/segments|
|`druid.zk.paths.loadQueuePath`|Entries here cause historical nodes to load and drop segments.|druid/loadQueue|
|`druid.zk.paths.coordinatorPath`|Used by the coordinator for leader election.|druid/coordinator|
|`druid.zk.paths.servedSegmentsPath`|@Deprecated. Legacy path for where Druid nodes announce their segments.|druid/servedSegments|

##### Single Data Segment Announcer

In legacy Druid, each segment served by a node would be announced as an individual Znode.

##### Batch Data Segment Announcer

In current Druid, multiple data segments may be announced under the same Znode.
The indexing service also uses its own set of paths. These configs can be included in the common configuration.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.announcer.segmentsPerNode`|Each Znode contains info for up to this many segments.|50|
|`druid.announcer.maxBytesPerNode`|Max byte size for Znode.|524288|
|`druid.zk.paths.indexer.announcementsPath`|Middle managers announce themselves here.|druid/indexer/announcements|
|`druid.zk.paths.indexer.tasksPath`|Used to assign tasks to middle managers.|druid/indexer/tasks|
|`druid.zk.paths.indexer.statusPath`|Parent path for announcement of task statuses.|druid/indexer/status|
|`druid.zk.paths.indexer.leaderLatchPath`|Used for Overlord leader election.|druid/indexer/leaderLatchPath|
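As a sketch, overriding only the base ZooKeeper path and keeping the batch announcer defaults might look like this; the path value is a placeholder:

```
druid.zk.paths.base=druid/prod
druid.announcer.type=batch
druid.announcer.segmentsPerNode=50
```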
### Druid Processing Module

This module contains query processing functionality.
The following path is used for service discovery.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.processing.buffer.sizeBytes`|This specifies a buffer size for the storage of intermediate results. The computation engine in both the Historical and Realtime nodes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.|1073741824 (1GB)|
|`druid.processing.formatString`|Realtime and historical nodes use this format string to name their processing threads.|processing-%s|
|`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value `1`.|Number of cores - 1 (or 1)|
|`druid.processing.columnCache.sizeBytes`|Maximum size in bytes for the dimension value lookup cache. Any value greater than `0` enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating values). Enabling it may also require additional garbage collection tuning to avoid long GC pauses.|`0` (disabled)|
|`druid.discovery.curator.path`|Services announce themselves under this ZooKeeper path.|/druid/discovery|

### Request Logging

### Metrics Module

The metrics module is used to track Druid metrics.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.monitoring.emissionPeriod`|How often metrics are emitted.|PT1m|
|`druid.monitoring.monitors`|Sets list of Druid monitors used by a node. Each monitor is specified as `com.metamx.metrics.<monitor-name>` (see below for names and more information). For example, you can specify monitors for a Broker with `druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]`.|none (no monitors)|

The following monitors are available:

* CacheMonitor – Emits metrics (to logs) about the segment results cache for Historical and Broker nodes. Reports typical cache statistics including hits, misses, rates, and size (bytes and number of entries), as well as timeouts and errors.
* SysMonitor – This uses the [SIGAR library](http://www.hyperic.com/products/sigar) to report on various system activities and statuses.
* ServerMonitor – Reports statistics on Historical nodes.
* JvmMonitor – Reports JVM-related statistics.
* RealtimeMetricsMonitor – Reports statistics on Realtime nodes.
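As a quick illustration (taken directly from the property description above), enabling a couple of monitors could look like:

```
druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]
druid.monitoring.emissionPeriod=PT1m
```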
### Server Module

This module is used for Druid server nodes.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.host`|The host for the current node. This is used to advertise the current process's location as reachable from another node and should generally be specified such that `http://${druid.host}/` could actually talk to this process.|none|
|`druid.port`|This is the port to actually listen on; unless port mapping is used, this will be the same port as is on `druid.host`.|none|
|`druid.service`|The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services.|none|

### Storage Node Module

This module is used by nodes that store data (Historical and Realtime).

|Property|Description|Default|
|--------|-----------|-------|
|`druid.server.maxSize`|The maximum number of bytes-worth of segments that the node wants assigned to it. This is not a limit that Historical nodes actually enforce, just a value published to the Coordinator node so it can plan accordingly.|0|
|`druid.server.tier`|A string to name the distribution tier that the storage node belongs to. Many of the [rules Coordinator nodes use](Rule-Configuration.html) to manage segments can be keyed on tiers.|`_default_tier`|
|`druid.server.priority`|In a tiered architecture, the priority of the tier, thus allowing control over which nodes are queried. Higher numbers mean higher priority. The default (no priority) works for architecture with no cross replication (tiers that have no data-storage overlap). Data centers typically have equal priority.|0|

#### Segment Cache

Druid storage nodes maintain information about segments they have already downloaded, and a disk cache to store that data.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.segmentCache.locations`|Segments assigned to a Historical node are first stored on the local file system (in a disk cache) and then served by the Historical node. These locations define where that local cache resides.|none (no caching)|
|`druid.segmentCache.deleteOnRemove`|Delete segment files from cache once a node is no longer serving a segment.|true|
|`druid.segmentCache.dropSegmentDelayMillis`|How long a node delays before completely dropping a segment.|30000 (30 seconds)|
|`druid.segmentCache.infoDir`|Historical nodes keep track of the segments they are serving so that when the process is restarted they can reload the same segments without waiting for the Coordinator to reassign. This path defines where this metadata is kept. Directory will be created if needed.|${first_location}/info_dir|
|`druid.segmentCache.announceIntervalMillis`|How frequently to announce segments while segments are loading from cache. Set this value to zero to wait for all segments to be loaded before announcing.|5000 (5 seconds)|
|`druid.segmentCache.numLoadingThreads`|How many segments to load concurrently from deep storage.|1|
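A sketch of a storage node's segment cache settings follows. The path and sizes are placeholders, and the JSON shape shown for `druid.segmentCache.locations` is an assumption rather than something spelled out in the table above:

```
druid.server.maxSize=300000000000
druid.segmentCache.locations=[{"path": "/mnt/druid/segmentCache", "maxSize": 300000000000}]
druid.segmentCache.deleteOnRemove=true
```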
### Jetty Server Module

Druid uses Jetty to serve HTTP requests.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.server.http.numThreads`|Number of threads for HTTP requests.|10|
|`druid.server.http.maxIdleTime`|The Jetty max idle time for a connection.|PT5m|

### Queryable Module

This module is used by all nodes that can serve queries.
All nodes that can serve queries can also log the requests they see.

|Property|Description|Default|
|--------|-----------|-------|
@@ -193,58 +85,54 @@ Every request is emitted to some external location.
|--------|-----------|-------|
|`druid.request.logging.feed`|Feed name for requests.|none|

### Query Runner Factory Module
### Enabling Metrics

This module is required by nodes that can serve queries.
Druid nodes periodically emit metrics and different metrics monitors can be included. Each node can overwrite the default list of monitors.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.chunkPeriod`|Long-interval queries (of any type) may be broken into shorter interval queries, reducing the impact on resources. Use ISO 8601 periods. For example, if this property is set to `P1M` (one month), then a query covering a year would be broken into 12 smaller queries.|0 (off)|
|`druid.monitoring.emissionPeriod`|How often metrics are emitted.|PT1m|
|`druid.monitoring.monitors`|Sets list of Druid monitors used by a node. Each monitor is specified as `com.metamx.metrics.<monitor-name>` (see below for names and more information). For example, you can specify monitors for a Broker with `druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]`.|none (no monitors)|

#### GroupBy Query Config
The following monitors are available:

* CacheMonitor – Emits metrics (to logs) about the segment results cache for Historical and Broker nodes. Reports typical cache statistics including hits, misses, rates, and size (bytes and number of entries), as well as timeouts and errors.
* SysMonitor – This uses the [SIGAR library](http://www.hyperic.com/products/sigar) to report on various system activities and statuses.
* ServerMonitor – Reports statistics on Historical nodes.
* JvmMonitor – Reports JVM-related statistics.
* RealtimeMetricsMonitor – Reports statistics on Realtime nodes.

### Emitting Metrics

The Druid servers emit various metrics and alerts via something we call an Emitter. There are three emitter implementations included with the code, a "noop" emitter, one that just logs to log4j ("logging", which is used by default if no emitter is specified) and one that does POSTs of JSON events to a server ("http"). The properties for using the logging emitter are described below.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.groupBy.singleThreaded`|Run single threaded groupBy queries.|false|
|`druid.query.groupBy.maxIntermediateRows`|Maximum number of intermediate rows.|50000|
|`druid.query.groupBy.maxResults`|Maximum number of results.|500000|
|`druid.emitter`|Setting this value to "noop", "logging", or "http" will instantiate one of the emitter modules.|logging|
#### Search Query Config
#### Logging Emitter Module

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.search.maxSearchLimit`|Maximum number of search results to return.|1000|
|`druid.emitter.logging.loggerClass`|Choices: HttpPostEmitter, LoggingEmitter, NoopServiceEmitter, ServiceEmitter. The class used for logging.|LoggingEmitter|
|`druid.emitter.logging.logLevel`|Choices: debug, info, warn, error. The log level at which messages are logged.|info|

### Discovery Module

The discovery module is used for service discovery.
#### Http Emitter Module

|Property|Description|Default|
|--------|-----------|-------|
|`druid.discovery.curator.path`|Services announce themselves under this ZooKeeper path.|/druid/discovery|
|`druid.emitter.http.timeOut`|The timeout for data reads.|PT5M|
|`druid.emitter.http.flushMillis`|How often the internal message buffer is flushed (data is sent).|60000|
|`druid.emitter.http.flushCount`|How many messages can the internal message buffer hold before flushing (sending).|500|
|`druid.emitter.http.recipientBaseUrl`|The base URL to emit messages to. Druid will POST JSON to be consumed at the HTTP endpoint specified by this property.|none|
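For example, switching from the default logging emitter to the HTTP emitter might look like the sketch below; the recipient URL is a placeholder:

```
druid.emitter=http
druid.emitter.http.recipientBaseUrl=http://metrics.example.com/druid/events
druid.emitter.http.flushMillis=60000
```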
### Metadata Storage
#### Indexing Service Discovery Module

This module is used to find the [Indexing Service](Indexing-Service.html) using Curator service discovery.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.selectors.indexing.serviceName`|The druid.service name of the indexing service Overlord node. To start the Overlord with a different name, set it with this property.|overlord|

### Server Inventory View Module

This module is used to read announcements of segments in ZooKeeper. The configs are identical to the Announcer Module.

### Database Connector Module

These properties specify the jdbc connection and other configuration around the database. The only processes that connect to the DB with these properties are the [Coordinator](Coordinator.html) and [Indexing service](Indexing-service.html). This is tested on metadata storage.
These properties specify the jdbc connection and other configuration around the metadata storage. The only processes that connect to the metadata storage with these properties are the [Coordinator](Coordinator.html) and [Indexing service](Indexing-service.html).

|Property|Description|Default|
|--------|-----------|-------|
|`druid.metadata.storage.type`|The type of metadata storage to use. Choose from "mysql", "postgres", or "derby".|derby|
|`druid.metadata.storage.connector.user`|The username to connect with.|none|
|`druid.metadata.storage.connector.password`|The password to connect with.|none|
|`druid.metadata.storage.connector.createTables`|If Druid requires a table and it doesn't exist, create it?|true|
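As an illustrative sketch, pointing Druid at a MySQL metadata store could look like this; the connection URI, user, and password are placeholders:

```
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
druid.metadata.storage.connector.createTables=true
```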
@@ -258,18 +146,9 @@ These properties specify the jdbc connection and other configuration around the
|`druid.metadata.storage.tables.taskLog`|Used by the indexing service to store task logs.|druid_taskLog|
|`druid.metadata.storage.tables.taskLock`|Used by the indexing service to store task locks.|druid_taskLock|

### Jackson Config Manager Module
### Deep Storage

The Jackson Config manager reads and writes config entries from the Druid config table using [Jackson](http://jackson.codehaus.org/).

|Property|Description|Default|
|--------|-----------|-------|
|`druid.manager.config.pollDuration`|How often the manager polls the config table for updates.|PT1m|

### DataSegment Pusher/Puller Module

This module is used to configure Druid deep storage. The configurations concern how to push and pull [Segments](Segments.html) from deep storage.
The configurations concern how to push and pull [Segments](Segments.html) from deep storage.

|Property|Description|Default|
|--------|-----------|-------|
@@ -293,22 +172,14 @@ This deep storage is used to interface with Amazon's S3.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.s3.accessKey`|The access key to use to access S3.|none|
|`druid.s3.secretKey`|The secret key to use to access S3.|none|
|`druid.storage.bucket`|S3 bucket name.|none|
|`druid.storage.baseKey`|S3 object key prefix for storage.|none|
|`druid.storage.disableAcl`|Boolean flag for ACL.|false|
|`druid.storage.archiveBucket`|S3 bucket name for archiving when running the indexing-service *archive task*.|none|
|`druid.storage.archiveBaseKey`|S3 object key prefix for archiving.|none|
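A sketch of S3 deep storage settings; the bucket, prefix, and credentials are placeholders:

```
druid.storage.type=s3
druid.storage.bucket=druid
druid.storage.baseKey=segments
druid.s3.accessKey=AKIAEXAMPLEKEY
druid.s3.secretKey=exampleSecret
```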
#### AWS Module

This module is used to interact with S3.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.s3.accessKey`|The access key to use to access S3.|none|
|`druid.s3.secretKey`|The secret key to use to access S3.|none|

#### HDFS Deep Storage

This deep storage is used to interface with HDFS.
@@ -326,35 +197,62 @@ This deep storage is used to interface with Cassandra.
|`druid.storage.host`|Cassandra host.|none|
|`druid.storage.keyspace`|Cassandra key space.|none|

### Task Log Module
### Caching

This module is used to configure the [Indexing Service](Indexing-Service.html) task logs.
If you are using a distributed cache such as memcached, you can include the configuration here.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.indexer.logs.type`|Choices: noop, s3, file. Where to store task logs.|file|
|`druid.cache.type`|`local`, `memcached`|The type of cache to use for queries.|`local`|
|`druid.cache.unCacheable`|All druid query types|All query types to not cache.|["groupBy", "select"]|

#### File Task Logs

Store task logs in the local filesystem.
#### Local Cache

|Property|Description|Default|
|--------|-----------|-------|
|`druid.indexer.logs.directory`|Local filesystem path.|log|
|`druid.cache.sizeInBytes`|Maximum cache size in bytes. Zero disables caching.|0|
|`druid.cache.initialSize`|Initial size of the hashtable backing the cache.|500000|
|`druid.cache.logEvictionCount`|If non-zero, log cache eviction every `logEvictionCount` items.|0|

#### S3 Task Logs

Store task logs in S3.
#### Memcache

|Property|Description|Default|
|--------|-----------|-------|
|`druid.indexer.logs.s3Bucket`|S3 bucket name.|none|
|`druid.indexer.logs.s3Prefix`|S3 key prefix.|none|
|`druid.cache.expiration`|Memcached [expiration time](https://code.google.com/p/memcached/wiki/NewCommands#Standard_Protocol).|2592000 (30 days)|
|`druid.cache.timeout`|Maximum time in milliseconds to wait for a response from Memcached.|500|
|`druid.cache.hosts`|Comma separated list of Memcached hosts `<host:port>`.|none|
|`druid.cache.maxObjectSize`|Maximum object size in bytes for a Memcached object.|52428800 (50 MB)|
|`druid.cache.memcachedPrefix`|Key prefix for all keys in Memcached.|druid|
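For illustration, a memcached-backed cache might be configured roughly like this; the host names are placeholders:

```
druid.cache.type=memcached
druid.cache.hosts=memcached1:11211,memcached2:11211
druid.cache.expiration=2592000
druid.cache.memcachedPrefix=druid
```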
#### Noop Task Logs
### Indexing Service Discovery

No task logs are actually stored.
This config is used to find the [Indexing Service](Indexing-Service.html) using Curator service discovery. Only required if you are actually running an indexing service.

### Firehose Module

|Property|Description|Default|
|--------|-----------|-------|
|`druid.selectors.indexing.serviceName`|The druid.service name of the indexing service Overlord node. To start the Overlord with a different name, set it with this property.|overlord|

The Firehose module lists all available firehoses. There are no configurations.
### Announcing Segments

You can optionally configure how to announce and unannounce Znodes in ZooKeeper (using Curator). For normal operations you do not need to override any of these configs.

#### Data Segment Announcer

Data segment announcers are used to announce segments.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.announcer.type`|Choices: legacy or batch. The type of data segment announcer to use.|batch|

##### Single Data Segment Announcer

In legacy Druid, each segment served by a node would be announced as an individual Znode.

##### Batch Data Segment Announcer

In current Druid, multiple data segments may be announced under the same Znode.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.announcer.segmentsPerNode`|Each Znode contains info for up to this many segments.|50|
|`druid.announcer.maxBytesPerNode`|Max byte size for Znode.|524288|
@@ -8,7 +8,17 @@ For general Coordinator Node information, see [here](Coordinator.html).
Runtime Configuration
---------------------

The coordinator module uses several of the default modules in [Configuration](Configuration.html) and has the following set of configurations as well:
The coordinator node uses several of the global configs in [Configuration](Configuration.html) and has the following set of configurations as well:

### Node Config

|Property|Description|Default|
|--------|-----------|-------|
|`druid.host`|The host for the current node. This is used to advertise the current process's location as reachable from another node and should generally be specified such that `http://${druid.host}/` could actually talk to this process.|none|
|`druid.port`|This is the port to actually listen on; unless port mapping is used, this will be the same port as is on `druid.host`.|none|
|`druid.service`|The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services.|none|

### Coordinator Operation

|Property|Description|Default|
|--------|-----------|-------|
@@ -17,7 +27,13 @@ The coordinator module uses several of the default modules in [Configuration](Co
|`druid.coordinator.startDelay`|The operation of the Coordinator works on the assumption that it has an up-to-date view of the state of the world when it runs. The current ZK interaction code, however, is written in a way that doesn't allow the Coordinator to know for a fact that it's done loading the current state of the world. This delay is a hack to give it enough time to believe that it has all the data.|PT300S|
|`druid.coordinator.merge.on`|Boolean flag for whether or not the coordinator should try to merge small segments into a more optimal segment size.|false|
|`druid.coordinator.conversion.on`|Boolean flag for converting old segment indexing versions to the latest segment indexing version.|false|
|`druid.coordinator.load.timeout`|The timeout duration for when the coordinator assigns a segment to a historical node.|15 minutes|
|`druid.coordinator.load.timeout`|The timeout duration for when the coordinator assigns a segment to a historical node.|PT15M|

### Metadata Retrieval

|Property|Description|Default|
|--------|-----------|-------|
|`druid.manager.config.pollDuration`|How often the manager polls the config table for updates.|PT1m|
|`druid.manager.segment.pollDuration`|The duration between polls the Coordinator does for updates to the set of active segments. Generally defines the amount of lag time it can take for the coordinator to notice new segments.|PT1M|
|`druid.manager.rules.pollDuration`|The duration between polls the Coordinator does for updates to the set of active rules. Generally defines the amount of lag time it can take for the coordinator to notice new rules.|PT1M|
|`druid.manager.rules.defaultTier`|The default tier from which default rules are loaded.|_default|
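Putting a few of these together, a coordinator `runtime.properties` might look like the sketch below; the host and port are placeholders and the remaining values simply repeat the defaults:

```
druid.host=coordinator.example.com
druid.port=8081
druid.service=druid/coordinator

druid.coordinator.startDelay=PT300S
druid.coordinator.merge.on=false
druid.coordinator.load.timeout=PT15M
```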
@@ -5,18 +5,98 @@ Historical Node Configuration
=============================
For general Historical Node information, see [here](Historical.html).

Runtime Configuration
---------------------

The historical module uses several of the default modules in [Configuration](Configuration.html) and has a few configs of its own.
The historical node uses several of the global configs in [Configuration](Configuration.html) and has the following set of configurations as well:

#### Local Cache
### Node Configs

|Property|Description|Default|
|--------|-----------|-------|
|`druid.host`|The host for the current node. This is used to advertise the current process's location as reachable from another node and should generally be specified such that `http://${druid.host}/` could actually talk to this process.|none|
|`druid.port`|This is the port to actually listen on; unless port mapping is used, this will be the same port as is on `druid.host`.|none|
|`druid.service`|The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services.|none|

### General Configuration

|Property|Description|Default|
|--------|-----------|-------|
|`druid.server.tier`|A string to name the distribution tier that the storage node belongs to. Many of the [rules Coordinator nodes use](Rule-Configuration.html) to manage segments can be keyed on tiers.|`_default_tier`|
|`druid.server.priority`|In a tiered architecture, the priority of the tier, thus allowing control over which nodes are queried. Higher numbers mean higher priority. The default (no priority) works for architecture with no cross replication (tiers that have no data-storage overlap). Data centers typically have equal priority.|0|

### Storing Segments

|Property|Description|Default|
|--------|-----------|-------|
|`druid.segmentCache.locations`|Segments assigned to a Historical node are first stored on the local file system (in a disk cache) and then served by the Historical node. These locations define where that local cache resides.|none (no caching)|
|`druid.segmentCache.deleteOnRemove`|Delete segment files from cache once a node is no longer serving a segment.|true|
|`druid.segmentCache.dropSegmentDelayMillis`|How long a node delays before completely dropping a segment.|30000 (30 seconds)|
|`druid.segmentCache.infoDir`|Historical nodes keep track of the segments they are serving so that when the process is restarted they can reload the same segments without waiting for the Coordinator to reassign. This path defines where this metadata is kept. Directory will be created if needed.|${first_location}/info_dir|
|`druid.segmentCache.announceIntervalMillis`|How frequently to announce segments while segments are loading from cache. Set this value to zero to wait for all segments to be loaded before announcing.|5000 (5 seconds)|
|`druid.segmentCache.numLoadingThreads`|How many segments to load concurrently from deep storage.|1|
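For instance, a historical node placed in a dedicated tier might set something like the following; the tier name and priority value are placeholders:

```
druid.server.tier=hot
druid.server.priority=10
```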
### Query Configs

#### Concurrent Requests

Druid uses Jetty to serve HTTP requests.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.server.http.numThreads`|Number of threads for HTTP requests.|10|
|`druid.server.http.maxIdleTime`|The Jetty max idle time for a connection.|PT5m|

#### Processing

|Property|Description|Default|
|--------|-----------|-------|
|`druid.processing.buffer.sizeBytes`|This specifies a buffer size for the storage of intermediate results. The computation engine in both the Historical and Realtime nodes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.|1073741824 (1GB)|
|`druid.processing.formatString`|Realtime and historical nodes use this format string to name their processing threads.|processing-%s|
|`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value `1`.|Number of cores - 1 (or 1)|
|`druid.processing.columnCache.sizeBytes`|Maximum size in bytes for the dimension value lookup cache. Any value greater than `0` enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating values). Enabling it may also require additional garbage collection tuning to avoid long GC pauses.|`0` (disabled)|

#### General Query Configuration

##### GroupBy Query Config

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.groupBy.singleThreaded`|Run single threaded groupBy queries.|false|
|`druid.query.groupBy.maxIntermediateRows`|Maximum number of intermediate rows.|50000|
|`druid.query.groupBy.maxResults`|Maximum number of results.|500000|

##### Search Query Config

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.search.maxSearchLimit`|Maximum number of search results to return.|1000|

### Caching

You can optionally configure caching to be enabled only on the historical node by setting the caching configs here.

|Property|Possible Values|Description|Default|
|--------|---------------|-----------|-------|
|`druid.historical.cache.useCache`|`true`, `false`|Allow cache to be used. Cache will NOT be used unless this is set.|`false`|
|`druid.historical.cache.populateCache`|`true`, `false`|Allow cache to be populated. Cache will NOT be populated unless this is set.|`false`|
|`druid.historical.cache.unCacheable`|All druid query types|Do not attempt to cache queries whose types are in this array.|`["groupBy","select"]`|
|`druid.historical.cache.numBackgroundThreads`|Non-negative integer|Number of background threads in the thread pool to use for eventual-consistency caching results if caching is used. It is recommended to set this value greater than or equal to the number of processing threads. To force caching to execute in the same thread as the query (query results are blocked on caching completion), use a thread count of 0. Setups that use a Druid backend in programmatic settings (sub-second re-querying) should consider setting this to 0 to prevent eventual consistency from hurting overall performance. If this is you, please experiment to find out what setting works best.|`0`|
|`druid.historical.cache.useCache`|`true`, `false`|Enable the cache on the historical node.|false|
|`druid.historical.cache.populateCache`|`true`, `false`|Populate the cache on the historical node.|false|
|`druid.cache.type`|`local`, `memcached`|The type of cache to use for queries.|`local`|
|`druid.cache.unCacheable`|All druid query types|All query types to not cache.|["groupBy", "select"]|

#### Local Cache

|Property|Description|Default|
|--------|-----------|-------|
|`druid.cache.sizeInBytes`|Maximum cache size in bytes. Zero disables caching.|0|
|`druid.cache.initialSize`|Initial size of the hashtable backing the cache.|500000|
|`druid.cache.logEvictionCount`|If non-zero, log cache eviction every `logEvictionCount` items.|0|

#### Memcache

|Property|Description|Default|
|--------|-----------|-------|
|`druid.cache.expiration`|Memcached [expiration time](https://code.google.com/p/memcached/wiki/NewCommands#Standard_Protocol).|2592000 (30 days)|
|`druid.cache.timeout`|Maximum time in milliseconds to wait for a response from Memcached.|500|
|`druid.cache.hosts`|Comma separated list of Memcached hosts `<host:port>`.|none|
|`druid.cache.maxObjectSize`|Maximum object size in bytes for a Memcached object.|52428800 (50 MB)|
|`druid.cache.memcachedPrefix`|Key prefix for all keys in Memcached.|druid|
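A sketch of enabling the local result cache on a historical node; the cache size is a placeholder:

```
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=104857600
```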
@@ -3,9 +3,54 @@ layout: doc_page
---
For general Indexing Service information, see [here](Indexing-Service.html).

#### Runtime Configuration
## Runtime Configuration

In addition to the configuration of some of the default modules in [Configuration](Configuration.html), the overlord has the following basic configs:
The indexing service uses several of the global configs in [Configuration](Configuration.html) and has the following set of configurations as well:

### Must be set on Overlord and Middle Manager

#### Node Configs

|Property|Description|Default|
|--------|-----------|-------|
|`druid.host`|The host for the current node. This is used to advertise the current process's location as reachable from another node and should generally be specified such that `http://${druid.host}/` could actually talk to this process.|none|
|`druid.port`|This is the port to actually listen on; unless port mapping is used, this will be the same port as is on `druid.host`.|none|
|`druid.service`|The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services.|none|
#### Task Logging

If you are running the indexing service in remote mode, the task logs must be stored in S3 or HDFS.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.indexer.logs.type`|Choices: noop, s3, hdfs, file. Where to store task logs.|file|

##### File Task Logs

Store task logs in the local filesystem.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.indexer.logs.directory`|Local filesystem path.|log|

##### S3 Task Logs

Store task logs in S3.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.indexer.logs.s3Bucket`|S3 bucket name.|none|
|`druid.indexer.logs.s3Prefix`|S3 key prefix.|none|

##### HDFS Task Logs

Store task logs in HDFS.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.indexer.logs.directory`|The directory to store logs.|none|
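For remote mode, an S3-backed task log configuration might look like the sketch below; the bucket and prefix are placeholders:

```
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=my-druid-task-logs
druid.indexer.logs.s3Prefix=prod/logs
```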
### Overlord Configs

|Property|Description|Default|
|--------|-----------|-------|

@@ -23,7 +68,7 @@ The following configs only apply if the overlord is running in remote mode:
|--------|-----------|-------|
|`druid.indexer.runner.taskAssignmentTimeout`|How long to wait after a task has been assigned to a middle manager before throwing an error.|PT5M|
|`druid.indexer.runner.minWorkerVersion`|The minimum middle manager version to send tasks to.|"0"|
|`druid.indexer.runner.compressZnodes`|Indicates whether or not the overlord should expect middle managers to compress Znodes.|false|
|`druid.indexer.runner.compressZnodes`|Indicates whether or not the overlord should expect middle managers to compress Znodes.|true|
|`druid.indexer.runner.maxZnodeBytes`|The maximum size Znode in bytes that can be created in Zookeeper.|524288|

There are additional configs for autoscaling (if it is enabled):

@@ -44,43 +89,105 @@ There are additional configs for autoscaling (if it is enabled):
#### Dynamic Configuration

Overlord dynamic configuration is mainly for autoscaling. The overlord reads a worker setup spec as a JSON object from the Druid [metadata storage](Metadata-storage.html) config table. This object contains information about the version of middle managers to create, the maximum and minimum number of middle managers in the cluster at one time, and additional information required to automatically create middle managers.
The overlord can dynamically change worker behavior.

The JSON object can be submitted to the overlord via a POST request at:

```
http://<COORDINATOR_IP>:<port>/druid/indexer/v1/worker/setup
http://<COORDINATOR_IP>:<port>/druid/indexer/v1/worker
```

A sample worker setup spec is shown below:

```json
{
  "minVersion": "some_version",
  "minNumWorkers": "0",
  "maxNumWorkers": "10",
  "nodeData": {
    "type": "ec2",
    "amiId": "ami-someId",
    "instanceType": "m1.xlarge",
    "minInstances": "1",
    "maxInstances": "1",
    "securityGroupIds": ["securityGroupIds"],
    "keyName": "keyName"
  },
  "userData": {
    "impl": "string",
    "data": "version=:VERSION:",
    "versionReplacementString": ":VERSION:"
  }
}
```

A sample worker config spec is shown below:

```json
{
  "selectStrategy": {
    "type": "fillCapacityWithAffinity",
    "affinityConfig": {
      "affinity": {
        "datasource1": ["ip1:port", "ip2:port"],
        "datasource2": ["ip3:port"]
      }
    }
  },
  "autoScaler": {
    "type": "ec2",
    "minNumWorkers": 2,
    "maxNumWorkers": 12,
    "envConfig": {
      "availabilityZone": "us-east-1a",
      "nodeData": {
        "amiId": "${AMI}",
        "instanceType": "c3.8xlarge",
        "minInstances": 1,
        "maxInstances": 1,
        "securityGroupIds": ["${IDs}"],
        "keyName": ${KEY_NAME}
      },
      "userData": {
        "impl": "string",
        "data": "${SCRIPT_COMMAND}",
        "versionReplacementString": ":VERSION:",
        "version": null
      }
    }
  }
}
```

Issuing a GET request at the same URL will return the worker setup spec that is currently in place. The worker setup spec list above is just a sample and it is possible to extend the code base for other deployment environments. A description of the worker setup spec is shown below.
Issuing a GET request at the same URL will return the worker config spec that is currently in place. The worker config spec list above is just a sample for EC2 and it is possible to extend the code base for other deployment environments. A description of the worker config spec is shown below.
|Property|Description|Default|
|--------|-----------|-------|
|`selectStrategy`|How to assign tasks to middle managers. Choices are `fillCapacity` and `fillCapacityWithAffinity`.|fillCapacity|
|`autoScaler`|Only used if autoscaling is enabled. See below.|null|

#### Worker Select Strategy

##### Fill Capacity

Workers are assigned tasks until they are at capacity.

|Property|Description|Default|
|--------|-----------|-------|
|`type`|`fillCapacity`.|fillCapacity|

##### Fill Capacity With Affinity

An affinity config can be provided.

|Property|Description|Default|
|--------|-----------|-------|
|`type`|`fillCapacityWithAffinity`.|fillCapacityWithAffinity|
|`affinity`|A map of String to a list of String host names.|{}|

Tasks will try to be assigned to preferred workers. The fill capacity strategy is used if no preference for a datasource is specified.

#### Autoscaler

Amazon's EC2 is currently the only supported autoscaler.

|Property|Description|Default|
|--------|-----------|-------|
|`minNumWorkers`|The minimum number of workers that can be in the cluster at any given time.|0|
|`maxNumWorkers`|The maximum number of workers that can be in the cluster at any given time.|0|
|`nodeData`|A JSON object that describes how to launch new nodes. Currently, only EC2 is supported.|none; required|
|`userData`|A JSON object that describes how to configure new nodes. Currently, only EC2 is supported. If you have set druid.indexer.autoscale.workerVersion, this must have a versionReplacementString. Otherwise, a versionReplacementString is not necessary.|none; optional|
|`availabilityZone`|What availability zone to run in.|none|
|`nodeData`|A JSON object that describes how to launch new nodes.|none; required|
|`userData`|A JSON object that describes how to configure new nodes. If you have set druid.indexer.autoscale.workerVersion, this must have a versionReplacementString. Otherwise, a versionReplacementString is not necessary.|none; optional|
### MiddleManager Configs

Middle managers pass their configurations down to their child peons. The middle manager requires the following configs:

|Property|Description|Default|
|--------|-----------|-------|
|`druid.worker.ip`|The IP of the worker.|localhost|
|`druid.worker.version`|Version identifier for the middle manager.|0|
|`druid.worker.capacity`|Maximum number of tasks the middle manager can accept.|Number of available processors - 1|
|`druid.indexer.runner.compressZnodes`|Indicates whether or not the middle managers should compress Znodes.|false|
|`druid.indexer.runner.maxZnodeBytes`|The maximum size in bytes of Znodes that can be created in ZooKeeper.|524288|
|`druid.indexer.runner.javaCommand`|Command required to execute java.|java|
|`druid.indexer.runner.javaOpts`|-X Java options to run the peon in its own JVM.|""|
|`druid.indexer.runner.classpath`|Java classpath for the peon.|System.getProperty("java.class.path")|
|`druid.indexer.runner.startPort`|The port that peons begin running on.|8081|
|`druid.indexer.runner.allowedPrefixes`|Whitelist of prefixes for configs that can be passed down to child peons.|"com.metamx", "druid", "io.druid", "user.timezone","file.encoding"|

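As an illustration, a middle manager's runtime.properties might override a few of these defaults; the values below are examples only, not recommendations:

```
druid.worker.capacity=7
druid.worker.version=0

druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8
druid.indexer.runner.startPort=8081
druid.indexer.runner.compressZnodes=true
```
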

@@ -12,14 +12,14 @@ Depending on what `druid.storage.type` is set to, Druid will upload segments to

## My realtime node is not handing segments off

Make sure that the `druid.publish.type` on your real-time nodes is set to "metadata". Also make sure that `druid.storage.type` is set to a deep storage that makes sense. Some example configs:

```
druid.publish.type=metadata

druid.metadata.storage.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd

druid.storage.type=s3
druid.storage.bucket=druid
```

@@ -2,118 +2,62 @@
---
layout: doc_page
---
Realtime Node Configuration
===========================
For general Realtime Node information, see [here](Realtime.html).

For Realtime Ingestion, see [Realtime Ingestion](Realtime-ingestion.html).

Quick Start
-----------
Run:

```
io.druid.cli.Main server realtime
```

Runtime Configuration
---------------------

The realtime node uses several of the global configs in [Configuration](Configuration.html) and has the following set of configurations as well:

### Node Config

|Property|Description|Default|
|--------|-----------|-------|
|`druid.host`|The host for the current node. This is used to advertise the current process's location as reachable from another node and should generally be specified such that `http://${druid.host}/` could actually talk to this process.|none|
|`druid.port`|This is the port to actually listen on; unless port mapping is used, this will be the same port as is on `druid.host`.|none|
|`druid.service`|The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services.|none|

### Realtime Operation

|Property|Description|Default|
|--------|-----------|-------|
|`druid.publish.type`|Where to publish segments. Choices are "noop" or "metadata".|metadata|
|`druid.realtime.specFile`|File location of the realtime specFile.|none|

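For example, to hand segments off through the metadata store and read the ingestion spec from a local file (the path is a placeholder):

```
# Choices: metadata (hand off segments), noop (do not hand off segments).
druid.publish.type=metadata
druid.realtime.specFile=/path/to/specFile
```
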
### Storing Intermediate Segments

|Property|Description|Default|
|--------|-----------|-------|
|`druid.segmentCache.locations`|Where intermediate segments are stored. The maxSize should always be zero.|none|

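For example, the production config later on this page stores intermediate segments as follows; note the `maxSize` of zero (the path is only an example):

```
druid.segmentCache.locations=[{"path": "/mnt/persistent/zk_druid", "maxSize": 0}]
```
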
### Query Configs

#### Processing

|Property|Description|Default|
|--------|-----------|-------|
|`druid.processing.buffer.sizeBytes`|This specifies a buffer size for the storage of intermediate results. The computation engine in both the Historical and Realtime nodes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.|1073741824 (1GB)|
|`druid.processing.formatString`|Realtime and historical nodes use this format string to name their processing threads.|processing-%s|
|`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value `1`.|Number of cores - 1 (or 1)|
|`druid.processing.columnCache.sizeBytes`|Maximum size in bytes for the dimension value lookup cache. Any value greater than `0` enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating values). Enabling it may also require additional garbage collection tuning to avoid long GC pauses.|`0` (disabled)|

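As a sketch, the production config later on this page sets `druid.processing.numThreads=3`; a small realtime node might use something along these lines (the buffer size here is illustrative; tune both values to your hardware):

```
druid.processing.numThreads=3
druid.processing.buffer.sizeBytes=100000000
```
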
#### General Query Configuration

##### GroupBy Query Config

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.groupBy.singleThreaded`|Run groupBy queries on a single thread.|false|
|`druid.query.groupBy.maxIntermediateRows`|Maximum number of intermediate rows.|50000|
|`druid.query.groupBy.maxResults`|Maximum number of results.|500000|

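A sketch of raising the groupBy limits in runtime.properties; the values below are illustrative only, not recommendations:

```
druid.query.groupBy.maxIntermediateRows=100000
druid.query.groupBy.maxResults=1000000
```
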
##### Search Query Config

|Property|Description|Default|
|--------|-----------|-------|
|`druid.query.search.maxSearchLimit`|Maximum number of search results to return.|1000|

Production Configs
------------------
These production configs use S3 as deep storage.

JVM settings:

```
-server
-Xmx#{HEAP_MAX}g
-Xms#{HEAP_MIN}g
-XX:NewSize=#{NEW_SIZE}g
-XX:MaxNewSize=#{MAX_NEW_SIZE}g
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/mnt/tmp

-Dcom.sun.management.jmxremote.port=17071
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
```

Runtime.properties:

```
druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/realtime

druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.160","io.druid.extensions:druid-kafka-seven:0.6.160"]

druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod

druid.s3.accessKey=#{ACCESS_KEY}
druid.s3.secretKey=#{SECRET_KEY}

druid.metadata.storage.connector.connectURI=jdbc:mysql://#{MYSQL_URL}:3306/druid
druid.metadata.storage.connector.user=#{MYSQL_USER}
druid.metadata.storage.connector.password=#{MYSQL_PW}
druid.metadata.storage.connector.useValidationQuery=true
druid.metadata.storage.tables.base=prod

druid.publish.type=metadata

druid.processing.numThreads=3

druid.request.logging.type=file
druid.request.logging.dir=request_logs/

druid.realtime.specFile=conf/schemas.json

druid.segmentCache.locations=[{"path": "/mnt/persistent/zk_druid", "maxSize": 0}]

druid.storage.type=s3
druid.storage.bucket=#{S3_STORAGE_BUCKET}
druid.storage.baseKey=prod-realtime/v1

druid.monitoring.monitors=["com.metamx.metrics.SysMonitor", "io.druid.segment.realtime.RealtimeMetricsMonitor"]

# Emit metrics over http
druid.emitter=http
druid.emitter.http.recipientBaseUrl=#{EMITTER_URL}

# If you choose to compress ZK announcements, you must do so for every node type
druid.announcer.type=batch
druid.curator.compress=true
```

The realtime module also uses several of the default modules in [Configuration](Configuration.html). For more information on the realtime spec file (or configuration file), see the [Realtime Ingestion](Realtime-ingestion.html) page.