mirror of https://github.com/apache/druid.git
Merge pull request #256 from metamx/rename
Refactor our modules to standardize Druid node naming convention
Commit dd0b31785d
@@ -140,4 +140,4 @@ This is a specification of the properties that tell the job how to update metadata
|password|password for db|yes|
|segmentTable|table to use in DB|yes|

-These properties should parrot what you have configured for your [Master](Master.html).
+These properties should parrot what you have configured for your [Coordinator](Coordinator.html).
@@ -11,13 +11,13 @@ Forwarding Queries

Most druid queries contain an interval object that indicates a span of time for which data is requested. Likewise, Druid [Segments](Segments.html) are partitioned to contain data for some interval of time and segments are distributed across a cluster. Consider a simple datasource with 7 segments where each segment contains data for a given day of the week. Any query issued to the datasource for more than one day of data will hit more than one segment. These segments will likely be distributed across multiple nodes, and hence, the query will likely hit multiple nodes.

-To determine which nodes to forward queries to, the Broker node first builds a view of the world from information in Zookeeper. Zookeeper maintains information about [Compute](Compute.html) and [Realtime](Realtime.html) nodes and the segments they are serving. For every datasource in Zookeeper, the Broker node builds a timeline of segments and the nodes that serve them. When queries are received for a specific datasource and interval, the Broker node performs a lookup into the timeline associated with the query datasource for the query interval and retrieves the nodes that contain data for the query. The Broker node then forwards down the query to the selected nodes.
+To determine which nodes to forward queries to, the Broker node first builds a view of the world from information in Zookeeper. Zookeeper maintains information about [Historical](Historical.html) and [Realtime](Realtime.html) nodes and the segments they are serving. For every datasource in Zookeeper, the Broker node builds a timeline of segments and the nodes that serve them. When queries are received for a specific datasource and interval, the Broker node performs a lookup into the timeline associated with the query datasource for the query interval and retrieves the nodes that contain data for the query. The Broker node then forwards down the query to the selected nodes.

Caching
-------

Broker nodes employ a distributed cache with a LRU cache invalidation strategy. The broker cache stores per segment results. The cache can be local to each broker node or shared across multiple nodes using an external distributed cache such as [memcached](http://memcached.org/). Each time a broker node receives a query, it first maps the query to a set of segments. A subset of these segment results may already exist in the cache and the results can be directly pulled from the cache. For any segment results that do not exist in the cache, the broker node will forward the query to the
-compute nodes. Once the compute nodes return their results, the broker will store those results in the cache. Real-time segments are never cached and hence requests for real-time data will always be forwarded to real-time nodes. Real-time data is perpetually changing and caching the results would be unreliable.
+historical nodes. Once the historical nodes return their results, the broker will store those results in the cache. Real-time segments are never cached and hence requests for real-time data will always be forwarded to real-time nodes. Real-time data is perpetually changing and caching the results would be unreliable.

Running
-------
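To make the interval-based routing above concrete, the sketch below POSTs a three-day timeseries query to a Broker with curl. The broker host and port, the datasource, and the aggregator are placeholders, and the `/druid/v2` query path is an assumption rather than something this commit documents.

```bash
# Hypothetical query sketch: the broker host/port, the datasource name, and the
# /druid/v2 endpoint are assumptions; the interval spans three daily segments,
# so the Broker would fan the query out to every node serving one of them.
curl -X POST "http://<BROKER_HOST>:<PORT>/druid/v2/?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
        "queryType": "timeseries",
        "dataSource": "wikipedia",
        "granularity": "day",
        "intervals": ["2013-01-01/2013-01-04"],
        "aggregations": [{"type": "count", "name": "rows"}]
      }'
```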
@@ -1,40 +0,0 @@
----
-layout: doc_page
----
-Compute
-=======
-
-Compute nodes are the work horses of a cluster. They load up historical segments and expose them for querying.
-
-Loading and Serving Segments
-----------------------------
-
-Each compute node maintains a constant connection to Zookeeper and watches a configurable set of Zookeeper paths for new segment information. Compute nodes do not communicate directly with each other or with the master nodes but instead rely on Zookeeper for coordination.
-
-The [Master](Master.html) node is responsible for assigning new segments to compute nodes. Assignment is done by creating an ephemeral Zookeeper entry under a load queue path associated with a compute node. For more information on how the master assigns segments to compute nodes, please see [Master](Master.html).
-
-When a compute node notices a new load queue entry in its load queue path, it will first check a local disk directory (cache) for the information about segment. If no information about the segment exists in the cache, the compute node will download metadata about the new segment to serve from Zookeeper. This metadata includes specifications about where the segment is located in deep storage and about how to decompress and process the segment. For more information about segment metadata and Druid segments in general, please see [Segments](Segments.html). Once a compute node completes processing a segment, the segment is announced in Zookeeper under a served segments path associated with the node. At this point, the segment is available for querying.
-
-Loading and Serving Segments From Cache
----------------------------------------
-
-Recall that when a compute node notices a new segment entry in its load queue path, the compute node first checks a configurable cache directory on its local disk to see if the segment had been previously downloaded. If a local cache entry already exists, the compute node will directly read the segment binary files from disk and load the segment.
-
-The segment cache is also leveraged when a compute node is first started. On startup, a compute node will search through its cache directory and immediately load and serve all segments that are found. This feature allows compute nodes to be queried as soon they come online.
-
-Querying Segments
------------------
-
-Please see [Querying](Querying.html) for more information on querying compute nodes.
-
-For every query that a compute node services, it will log the query and report metrics on the time taken to run the query.
-
-Running
--------
-
-Compute nodes can be run using the `com.metamx.druid.http.ComputeMain` class.
-
-Configuration
--------------
-
-See [Configuration](Configuration.html).
@@ -72,7 +72,6 @@ druid.storage.s3.bucket=
druid.storage.s3.baseKey=

druid.bard.cache.sizeInBytes=40000000
-druid.master.merger.service=blah_blah
```

Configuration groupings
@@ -80,7 +79,7 @@ Configuration groupings

### S3 Access

-These properties are for connecting with S3 and using it to pull down segments. In the future, we plan on being able to use other deep storage file systems as well, like HDFS. The file system is actually only accessed by the [Compute](Compute.html), [Realtime](Realtime.html) and [Indexing service](Indexing service.html) nodes.
+These properties are for connecting with S3 and using it to pull down segments. In the future, we plan on being able to use other deep storage file systems as well, like HDFS. The file system is actually only accessed by the [Historical](Historical.html), [Realtime](Realtime.html) and [Indexing service](Indexing service.html) nodes.

|Property|Description|Default|
|--------|-----------|-------|
@@ -91,25 +90,25 @@ These properties are for connecting with S3 and using it to pull down segments.

### JDBC connection

-These properties specify the jdbc connection and other configuration around the "segments table" database. The only processes that connect to the DB with these properties are the [Master](Master.html) and [Indexing service](Indexing-service.html). This is tested on MySQL.
+These properties specify the jdbc connection and other configuration around the "segments table" database. The only processes that connect to the DB with these properties are the [Coordinator](Coordinator.html) and [Indexing service](Indexing-service.html). This is tested on MySQL.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.database.connectURI`|The jdbc connection uri|none|
|`druid.database.user`|The username to connect with|none|
|`druid.database.password`|The password to connect with|none|
-|`druid.database.poll.duration`|The duration between polls the Master does for updates to the set of active segments. Generally defines the amount of lag time it can take for the master to notice new segments|PT1M|
+|`druid.database.poll.duration`|The duration between polls the Coordinator does for updates to the set of active segments. Generally defines the amount of lag time it can take for the coordinator to notice new segments|PT1M|
|`druid.database.segmentTable`|The table to use to look for segments.|none|
|`druid.database.ruleTable`|The table to use to look for segment load/drop rules.|none|
|`druid.database.configTable`|The table to use to look for configs.|none|

-### Master properties
+### Coordinator properties

|Property|Description|Default|
|--------|-----------|-------|
-|`druid.master.period`|The run period for the master. The master operates by maintaining the current state of the world in memory and periodically looking at the set of segments available and segments being served to make decisions about whether any changes need to be made to the data topology. This property sets the delay between each of these runs|PT60S|
+|`druid.coordinator.period`|The run period for the coordinator. The coordinator operates by maintaining the current state of the world in memory and periodically looking at the set of segments available and segments being served to make decisions about whether any changes need to be made to the data topology. This property sets the delay between each of these runs|PT60S|
-|`druid.master.removedSegmentLifetime`|When a node disappears, the master can provide a grace period for how long it waits before deciding that the node really isn’t going to come back and it really should declare that all segments from that node are no longer available. This sets that grace period in number of runs of the master.|1|
+|`druid.coordinator.removedSegmentLifetime`|When a node disappears, the coordinator can provide a grace period for how long it waits before deciding that the node really isn’t going to come back and it really should declare that all segments from that node are no longer available. This sets that grace period in number of runs of the coordinator.|1|
-|`druid.master.startDelay`|The operation of the Master works on the assumption that it has an up-to-date view of the state of the world when it runs; the current ZK interaction code, however, is written in a way that doesn’t allow the Master to know for a fact that it’s done loading the current state of the world. This delay is a hack to give it enough time to believe that it has all the data|PT600S|
+|`druid.coordinator.startDelay`|The operation of the Coordinator works on the assumption that it has an up-to-date view of the state of the world when it runs; the current ZK interaction code, however, is written in a way that doesn’t allow the Coordinator to know for a fact that it’s done loading the current state of the world. This delay is a hack to give it enough time to believe that it has all the data|PT600S|

### Zk properties

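For reference, a runtime.properties fragment wiring the renamed JDBC and Coordinator settings together might look like the sketch below. The property names are the ones from the tables above; the connection URI, credentials, table names, and periods are placeholder assumptions.

```properties
# Hypothetical values; only the property names are taken from the tables above.
druid.database.connectURI=jdbc:mysql://localhost:3306/druid
druid.database.user=druid
druid.database.password=diurd
druid.database.poll.duration=PT1M
druid.database.segmentTable=prod_segments
druid.database.ruleTable=prod_rules
druid.database.configTable=prod_config

druid.coordinator.period=PT60S
druid.coordinator.startDelay=PT600S
```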
@@ -121,28 +120,28 @@ These are properties that define various service/HTTP server aspects

|Property|Description|Default|
|--------|-----------|-------|
-|`druid.client.http.connections`|Size of connection pool for the Broker to connect to compute nodes. If there are more queries than this number that all need to speak to the same node, then they will queue up.|none|
+|`druid.client.http.connections`|Size of connection pool for the Broker to connect to historical nodes. If there are more queries than this number that all need to speak to the same node, then they will queue up.|none|
-|`druid.paths.indexCache`|Segments assigned to a compute node are first stored on the local file system and then served by the compute node. This path defines where that local cache resides. Directory will be created if needed|none|
+|`druid.paths.indexCache`|Segments assigned to a historical node are first stored on the local file system and then served by the historical node. This path defines where that local cache resides. Directory will be created if needed|none|
-|`druid.paths.segmentInfoCache`|Compute nodes keep track of the segments they are serving so that when the process is restarted they can reload the same segments without waiting for the master to reassign. This path defines where this metadata is kept. Directory will be created if needed|none|
+|`druid.paths.segmentInfoCache`|Historical nodes keep track of the segments they are serving so that when the process is restarted they can reload the same segments without waiting for the coordinator to reassign. This path defines where this metadata is kept. Directory will be created if needed|none|
|`druid.http.numThreads`|The number of HTTP worker threads.|10|
|`druid.http.maxIdleTimeMillis`|The amount of time a connection can remain idle before it is terminated|300000 (5 min)|
-|`druid.request.logging.dir`|Compute, Realtime and Broker nodes maintain request logs of all of the requests they get (interaction is via POST, so normal request logs don’t generally capture information about the actual query), this specifies the directory to store the request logs in|none|
+|`druid.request.logging.dir`|Historical, Realtime and Broker nodes maintain request logs of all of the requests they get (interaction is via POST, so normal request logs don’t generally capture information about the actual query), this specifies the directory to store the request logs in|none|
|`druid.host`|The host for the current node. This is used to advertise the current process's location as reachable from another node and should generally be specified such that `http://${druid.host}/` could actually talk to this process|none|
|`druid.port`|This is the port to actually listen on; unless port mapping is used, this will be the same port as is on `druid.host`|none|
-|`druid.processing.formatString`|Realtime and Compute nodes use this format string to name their processing threads.|none|
+|`druid.processing.formatString`|Realtime and historical nodes use this format string to name their processing threads.|none|
|`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, this means that even under heavy load there will still be one core available to do background tasks like talking with ZK and pulling down segments.|none|
-|`druid.computation.buffer.size`|This specifies a buffer size for the storage of intermediate results. The computation engine in both the Compute and Realtime nodes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.|1073741824 (1GB)|
+|`druid.computation.buffer.size`|This specifies a buffer size for the storage of intermediate results. The computation engine in both the Historical and Realtime nodes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.|1073741824 (1GB)|
|`druid.service`|The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services|none|
|`druid.bard.cache.sizeInBytes`|The Broker (called Bard internally) instance has the ability to store results of queries in an in-memory cache. This specifies the number of bytes to use for that cache|none|

-### Compute Properties
+### Historical Node Properties

-These are properties that the compute nodes use
+These are properties that the historical nodes use

|Property|Description|Default|
|--------|-----------|-------|
-|`druid.server.maxSize`|The maximum number of bytes worth of segment that the node wants assigned to it. This is not a limit that the compute nodes actually enforce, they just publish it to the master and trust the master to do the right thing|none|
+|`druid.server.maxSize`|The maximum number of bytes worth of segment that the node wants assigned to it. This is not a limit that the historical nodes actually enforce, they just publish it to the coordinator and trust the coordinator to do the right thing|none|
-|`druid.server.type`|Specifies the type of the node. This is published via ZK and depending on the value the node will be treated specially by the Master/Broker. Allowed values are "realtime" or "historical". This is a configuration parameter because the plan is to allow for a more configurable cluster composition. At the current time, all realtime nodes should just be "realtime" and all compute nodes should just be "compute"|none|
+|`druid.server.type`|Specifies the type of the node. This is published via ZK and depending on the value the node will be treated specially by the Coordinator/Broker. Allowed values are "realtime" or "historical". This is a configuration parameter because the plan is to allow for a more configurable cluster composition. At the current time, all realtime nodes should just be "realtime" and all historical nodes should just be "historical"|none|

### Emitter Properties

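A historical node's configuration under the renamed properties might then look like the sketch below; the property names come from the tables above and every value is a placeholder assumption.

```properties
# Hypothetical historical-node fragment; all values are placeholders.
druid.host=0.0.0.0:8082
druid.service=historical
druid.server.type=historical
druid.server.maxSize=100000000000
druid.paths.indexCache=/tmp/druid/indexCache
druid.paths.segmentInfoCache=/tmp/druid/segmentInfoCache
druid.processing.numThreads=3
druid.request.logging.dir=/tmp/druid/log
```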
@@ -0,0 +1,136 @@
+---
+layout: doc_page
+---
+Coordinator
+======
+
+The Druid coordinator node is primarily responsible for segment management and distribution. More specifically, the Druid coordinator node communicates to historical nodes to load or drop segments based on configurations. The Druid coordinator is responsible for loading new segments, dropping outdated segments, managing segment replication, and balancing segment load.
+
+The Druid coordinator runs periodically and the time between each run is a configurable parameter. Each time the Druid coordinator runs, it assesses the current state of the cluster before deciding on the appropriate actions to take. Similar to the broker and historical nodes, the Druid coordinator maintains a connection to a Zookeeper cluster for current cluster information. The coordinator also maintains a connection to a database containing information about available segments and rules. Available segments are stored in a segment table and list all segments that should be loaded in the cluster. Rules are stored in a rule table and indicate how segments should be handled.
+
+Before any unassigned segments are serviced by historical nodes, the available historical nodes for each tier are first sorted in terms of capacity, with least capacity servers having the highest priority. Unassigned segments are always assigned to the nodes with least capacity to maintain a level of balance between nodes. The coordinator does not directly communicate with a historical node when assigning it a new segment; instead the coordinator creates some temporary information about the new segment under load queue path of the historical node. Once this request is seen, the historical node will load the segment and begin servicing it.
+
+Rules
+-----
+
+Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different historical node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The coordinator loads a set of rules from the database. Rules may be specific to a certain datasource and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The coordinator will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule.
+
+For more information on rules, see [Rule Configuration](Rule-Configuration.html).
+
+Cleaning Up Segments
+--------------------
+
+Each run, the Druid coordinator compares the list of available database segments in the database with the current segments in the cluster. Segments that are not in the database but are still being served in the cluster are flagged and appended to a removal list. Segments that are overshadowed (their versions are too old and their data has been replaced by newer segments) are also dropped.
+
+Segment Availability
+--------------------
+
+If a historical node restarts or becomes unavailable for any reason, the Druid coordinator will notice a node has gone missing and treat all segments served by that node as being dropped. Given a sufficient period of time, the segments may be reassigned to other historical nodes in the cluster. However, each segment that is dropped is not immediately forgotten. Instead, there is a transitional data structure that stores all dropped segments with an associated lifetime. The lifetime represents a period of time in which the coordinator will not reassign a dropped segment. Hence, if a historical node becomes unavailable and available again within a short period of time, the historical node will start up and serve segments from its cache without any of those segments being reassigned across the cluster.
+
+Balancing Segment Load
+----------------------
+
+To ensure an even distribution of segments across historical nodes in the cluster, the coordinator node will find the total size of all segments being served by every historical node each time the coordinator runs. For every historical node tier in the cluster, the coordinator node will determine the historical node with the highest utilization and the historical node with the lowest utilization. The percent difference in utilization between the two nodes is computed, and if the result exceeds a certain threshold, a number of segments will be moved from the highest utilized node to the lowest utilized node. There is a configurable limit on the number of segments that can be moved from one node to another each time the coordinator runs. Segments to be moved are selected at random and only moved if the resulting utilization calculation indicates the percentage difference between the highest and lowest servers has decreased.
+
+HTTP Endpoints
+--------------
+
+The coordinator node exposes several HTTP endpoints for interactions.
+
+### GET
+
+* `/info/coordinator`
+
+Returns the current true coordinator of the cluster as a JSON object.
+
+* `/info/cluster`
+
+Returns JSON data about every node and segment in the cluster. Information about each node and each segment on each node will be returned.
+
+* `/info/servers`
+
+Returns information about servers in the cluster. Set the `?full` query parameter to get full metadata about all servers and their segments in the cluster.
+
+* `/info/servers/{serverName}`
+
+Returns full metadata about a specific server.
+
+* `/info/servers/{serverName}/segments`
+
+Returns a list of all segments for a server. Set the `?full` query parameter to get all segment metadata included.
+
+* `/info/servers/{serverName}/segments/{segmentId}`
+
+Returns full metadata for a specific segment.
+
+* `/info/segments`
+
+Returns all segments in the cluster as a list. Set the `?full` flag to get all metadata about segments in the cluster.
+
+* `/info/segments/{segmentId}`
+
+Returns full metadata for a specific segment.
+
+* `/info/datasources`
+
+Returns a list of datasources in the cluster. Set the `?full` flag to get all metadata for every datasource in the cluster.
+
+* `/info/datasources/{dataSourceName}`
+
+Returns full metadata for a datasource.
+
+* `/info/datasources/{dataSourceName}/segments`
+
+Returns a list of all segments for a datasource. Set the `?full` flag to get full segment metadata for a datasource.
+
+* `/info/datasources/{dataSourceName}/segments/{segmentId}`
+
+Returns full segment metadata for a specific segment.
+
+* `/info/rules`
+
+Returns all rules for all data sources in the cluster including the default datasource.
+
+* `/info/rules/{dataSourceName}`
+
+Returns all rules for a specified datasource.
+
+### POST
+
+* `/info/rules/{dataSourceName}`
+
+POST with a list of rules in JSON form to update rules.
+
+The Coordinator Console
+------------------
+
+The Druid coordinator exposes a web GUI for displaying cluster information and rule configuration. After the coordinator starts, the console can be accessed at http://HOST:PORT/static/. There exists a full cluster view, as well as views for individual historical nodes, datasources and segments themselves. Segment information can be displayed in raw JSON form or as part of a sortable and filterable table.
+
+The coordinator console also exposes an interface for creating and editing rules. All valid datasources configured in the segment database, along with a default datasource, are available for configuration. Rules of different types can be added, deleted or edited.
+
+FAQ
+---
+
+1. **Do clients ever contact the coordinator node?**
+
+The coordinator is not involved in a query.
+
+Historical nodes never directly contact the coordinator node. The Druid coordinator tells the historical nodes to load/drop data via Zookeeper, but the historical nodes are completely unaware of the coordinator.
+
+Brokers also never contact the coordinator. Brokers base their understanding of the data topology on metadata exposed by the historical nodes via ZK and are completely unaware of the coordinator.
+
+2. **Does it matter if the coordinator node starts up before or after other processes?**
+
+No. If the Druid coordinator is not started up, no new segments will be loaded in the cluster and outdated segments will not be dropped. However, the coordinator node can be started up at any time, and after a configurable delay, will start running coordinator tasks.
+
+This also means that if you have a working cluster and all of your coordinators die, the cluster will continue to function; it just won’t experience any changes to its data topology.
+
+Running
+-------
+
+Coordinator nodes can be run using the `io.druid.cli.Main` class with program parameters "server coordinator".
+
+Configuration
+-------------
+
+See [Configuration](Configuration.html).
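As a usage sketch for the endpoints listed in the new Coordinator page, the curl commands below exercise the GET and POST rule paths. The host and port are placeholders, `wikipedia` is a hypothetical datasource, and `rules.json` stands in for a rule list in the format described in Rule-Configuration.html.

```bash
# Placeholders throughout: <COORDINATOR_IP>, <port>, and the wikipedia datasource.
# Who is the current (true) coordinator?
curl "http://<COORDINATOR_IP>:<port>/info/coordinator"

# List the rules currently configured for one datasource.
curl "http://<COORDINATOR_IP>:<port>/info/rules/wikipedia"

# Replace that datasource's rules with a JSON list of rules read from a local file.
curl -X POST -H 'Content-Type: application/json' \
  --data @rules.json \
  "http://<COORDINATOR_IP>:<port>/info/rules/wikipedia"
```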
@@ -27,13 +27,13 @@ Druid is architected as a grouping of systems each with a distinct role and toge

The node types that currently exist are:

-* **Compute** nodes are the workhorses that handle storage and querying on "historical" data (non-realtime)
+* **Historical** nodes are the workhorses that handle storage and querying on "historical" data (non-realtime)
-* **Realtime** nodes ingest data in real-time, they are in charge of listening to a stream of incoming data and making it available immediately inside the Druid system. As data they have ingested ages, they hand it off to the compute nodes.
+* **Realtime** nodes ingest data in real-time, they are in charge of listening to a stream of incoming data and making it available immediately inside the Druid system. As data they have ingested ages, they hand it off to the historical nodes.
-* **Master** nodes act as coordinators. They look over the grouping of computes and make sure that data is available, replicated and in a generally "optimal" configuration.
+* **Coordinator** nodes look over the grouping of historical nodes and make sure that data is available, replicated and in a generally "optimal" configuration.
* **Broker** nodes understand the topology of data across all of the other nodes in the cluster and re-write and route queries accordingly
* **Indexer** nodes form a cluster of workers to load batch and real-time data into the system as well as allow for alterations to the data stored in the system (also known as the Indexing Service)

-This separation allows each node to only care about what it is best at. By separating Compute and Realtime, we separate the memory concerns of listening on a real-time stream of data and processing it for entry into the system. By separating the Master and Broker, we separate the needs for querying from the needs for maintaining "good" data distribution across the cluster.
+This separation allows each node to only care about what it is best at. By separating Historical and Realtime, we separate the memory concerns of listening on a real-time stream of data and processing it for entry into the system. By separating the Coordinator and Broker, we separate the needs for querying from the needs for maintaining "good" data distribution across the cluster.

All nodes can be run in some highly available fashion. Either as symmetric peers in a share-nothing cluster or as hot-swap failover nodes.

@@ -55,27 +55,27 @@ Getting data into the Druid system requires an indexing process. This gives the
- Bitmap compression
- RLE (on the roadmap, but not yet implemented)

-The output of the indexing process is stored in a "deep storage" LOB store/file system ([Deep Storage](Deep Storage.html) for information about potential options). Data is then loaded by compute nodes by first downloading the data to their local disk and then memory mapping it before serving queries.
+The output of the indexing process is stored in a "deep storage" LOB store/file system ([Deep Storage](Deep Storage.html) for information about potential options). Data is then loaded by historical nodes by first downloading the data to their local disk and then memory mapping it before serving queries.

-If a compute node dies, it will no longer serve its segments, but given that the segments are still available on the "deep storage" any other node can simply download the segment and start serving it. This means that it is possible to actually remove all compute nodes from the cluster and then re-provision them without any data loss. It also means that if the "deep storage" is not available, the nodes can continue to serve the segments they have already pulled down (i.e. the cluster goes stale, not down).
+If a historical node dies, it will no longer serve its segments, but given that the segments are still available on the "deep storage" any other node can simply download the segment and start serving it. This means that it is possible to actually remove all historical nodes from the cluster and then re-provision them without any data loss. It also means that if the "deep storage" is not available, the nodes can continue to serve the segments they have already pulled down (i.e. the cluster goes stale, not down).

-In order for a segment to exist inside of the cluster, an entry has to be added to a table in a MySQL instance. This entry is a self-describing bit of metadata about the segment, it includes things like the schema of the segment, the size, and the location on deep storage. These entries are what the Master uses to know what data **should** be available on the cluster.
+In order for a segment to exist inside of the cluster, an entry has to be added to a table in a MySQL instance. This entry is a self-describing bit of metadata about the segment, it includes things like the schema of the segment, the size, and the location on deep storage. These entries are what the Coordinator uses to know what data **should** be available on the cluster.

### Fault Tolerance

-- **Compute** As discussed above, if a compute node dies, another compute node can take its place and there is no fear of data loss
+- **Historical** As discussed above, if a historical node dies, another historical node can take its place and there is no fear of data loss
-- **Master** Can be run in a hot fail-over configuration. If no masters are running, then changes to the data topology will stop happening (no new data and no data balancing decisions), but the system will continue to run.
+- **Coordinator** Can be run in a hot fail-over configuration. If no coordinators are running, then changes to the data topology will stop happening (no new data and no data balancing decisions), but the system will continue to run.
- **Broker** Can be run in parallel or in hot fail-over.
-- **Realtime** Depending on the semantics of the delivery stream, multiple of these can be run in parallel processing the exact same stream. They periodically checkpoint to disk and eventually push out to the Computes. Steps are taken to be able to recover from process death, but loss of access to the local disk can result in data loss if this is the only method of adding data to the system.
+- **Realtime** Depending on the semantics of the delivery stream, multiple of these can be run in parallel processing the exact same stream. They periodically checkpoint to disk and eventually push out to the Historical nodes. Steps are taken to be able to recover from process death, but loss of access to the local disk can result in data loss if this is the only method of adding data to the system.
- **"deep storage" file system** If this is not available, new data will not be able to enter the cluster, but the cluster will continue operating as is.
-- **MySQL** If this is not available, the master will be unable to find out about new segments in the system, but it will continue with its current view of the segments that should exist in the cluster.
+- **MySQL** If this is not available, the coordinator will be unable to find out about new segments in the system, but it will continue with its current view of the segments that should exist in the cluster.
- **ZooKeeper** If this is not available, data topology changes will not be able to be made, but the Brokers will maintain their most recent view of the data topology and continue serving requests accordingly.

### Query processing

-A query first enters the Broker, where the broker will match the query with the data segments that are known to exist. It will then pick a set of machines that are serving those segments and rewrite the query for each server to specify the segment(s) targeted. The Compute/Realtime nodes will take in the query, process them and return results. The Broker then takes the results and merges them together to get the final answer, which it returns. In this way, the broker can prune all of the data that doesn’t match a query before ever even looking at a single row of data.
+A query first enters the Broker, where the broker will match the query with the data segments that are known to exist. It will then pick a set of machines that are serving those segments and rewrite the query for each server to specify the segment(s) targeted. The Historical/Realtime nodes will take in the query, process them and return results. The Broker then takes the results and merges them together to get the final answer, which it returns. In this way, the broker can prune all of the data that doesn’t match a query before ever even looking at a single row of data.

-For filters at a more granular level than what the Broker can prune based on, the indexing structures inside each segment allow the compute nodes to figure out which (if any) rows match the filter set before looking at any row of data. It can do all of the boolean algebra of the filter on the bitmap indices and never actually look directly at a row of data.
+For filters at a more granular level than what the Broker can prune based on, the indexing structures inside each segment allow the historical nodes to figure out which (if any) rows match the filter set before looking at any row of data. It can do all of the boolean algebra of the filter on the bitmap indices and never actually look directly at a row of data.

Once it knows the rows that match the current query, it can access the columns it cares about for those rows directly without having to load data that it is just going to throw away.

@@ -14,9 +14,9 @@ This guide walks you through the steps to create the cluster and then how to cre

## What’s in this Druid Demo Cluster?

-1. A single "Master" node. This node co-locates the [Master](Master.html) process, the [Broker](Broker.html) process, Zookeeper, and the MySQL instance. You can read more about Druid architecture [Design](Design.html).
+1. A single "Coordinator" node. This node co-locates the [Coordinator](Coordinator.html) process, the [Broker](Broker.html) process, Zookeeper, and the MySQL instance. You can read more about Druid architecture [Design](Design.html).

-1. Three compute nodes; these compute nodes have been pre-configured to work with the Master node and should automatically load up the Wikipedia edit stream data (no specific setup is required).
+1. Three historical nodes; these historical nodes have been pre-configured to work with the Coordinator node and should automatically load up the Wikipedia edit stream data (no specific setup is required).

## Setup Instructions
1. Log in to your AWS account: Start by logging into the [Console page](https://console.aws.amazon.com) of your AWS account; if you don’t have one, follow this link to sign up for one [http://aws.amazon.com/](http://aws.amazon.com/).
@@ -54,19 +54,19 @@ This guide walks you through the steps to create the cluster and then how to cre

![CloudFormations](images/demo/setup-09-events.png)

-1. Get the IP address of your Druid Master Node:
+1. Get the IP address of your Druid Coordinator Node:
1. Go to the following URL: [https://console.aws.amazon.com/ec2](https://console.aws.amazon.com/ec2)
1. Click **Instances** in the left pane – you should see something similar to the following figure.
-1. Select the **DruidMaster** instance
+1. Select the **DruidCoordinator** instance
-1. Your IP address is right under the heading: **EC2 Instance: DruidMaster**. Select and copy that entire line, which ends with `amazonaws.com`.
+1. Your IP address is right under the heading: **EC2 Instance: DruidCoordinator**. Select and copy that entire line, which ends with `amazonaws.com`.

![EC2 Instances](images/demo/setup-10-ip.png)

## Querying Data

-1. Use the following URL to bring up the Druid Demo Cluster query interface (replace **IPAddressDruidMaster** with the actual druid master IP Address):
+1. Use the following URL to bring up the Druid Demo Cluster query interface (replace **IPAddressDruidCoordinator** with the actual druid coordinator IP Address):

-**`http://IPAddressDruidMaster:8082/druid/v3/demoServlet`**
+**`http://IPAddressDruidCoordinator:8082/druid/v3/demoServlet`**

As you can see from the image below, there are default values in the Dimensions and Granularity fields. Clicking **Execute** will produce a basic query result.
![Demo Query Interface](images/demo/query-1.png)
@@ -19,7 +19,7 @@ Its write semantics aren’t as fluid and does not support joins. ParAccel is

###Data distribution model

-Druid’s data distribution is segment based and exists on highly available "deep" storage, like S3 or HDFS. Scaling up (or down) does not require massive copy actions or downtime; in fact, losing any number of compute nodes does not result in data loss because new compute nodes can always be brought up by reading data from "deep" storage.
+Druid’s data distribution is segment based and exists on highly available "deep" storage, like S3 or HDFS. Scaling up (or down) does not require massive copy actions or downtime; in fact, losing any number of historical nodes does not result in data loss because new historical nodes can always be brought up by reading data from "deep" storage.

To contrast, ParAccel’s data distribution model is hash-based. Expanding the cluster requires re-hashing the data across the nodes, making it difficult to perform without taking downtime. Amazon’s Redshift works around this issue with a multi-step process:

@@ -0,0 +1,40 @@
+---
+layout: doc_page
+---
+Historical
+=======
+
+Historical nodes load up historical segments and expose them for querying.
+
+Loading and Serving Segments
+----------------------------
+
+Each historical node maintains a constant connection to Zookeeper and watches a configurable set of Zookeeper paths for new segment information. Historical nodes do not communicate directly with each other or with the coordinator nodes but instead rely on Zookeeper for coordination.
+
+The [Coordinator](Coordinator.html) node is responsible for assigning new segments to historical nodes. Assignment is done by creating an ephemeral Zookeeper entry under a load queue path associated with a historical node. For more information on how the coordinator assigns segments to historical nodes, please see [Coordinator](Coordinator.html).
+
+When a historical node notices a new load queue entry in its load queue path, it will first check a local disk directory (cache) for the information about the segment. If no information about the segment exists in the cache, the historical node will download metadata about the new segment to serve from Zookeeper. This metadata includes specifications about where the segment is located in deep storage and about how to decompress and process the segment. For more information about segment metadata and Druid segments in general, please see [Segments](Segments.html). Once a historical node completes processing a segment, the segment is announced in Zookeeper under a served segments path associated with the node. At this point, the segment is available for querying.
+
+Loading and Serving Segments From Cache
+---------------------------------------
+
+Recall that when a historical node notices a new segment entry in its load queue path, the historical node first checks a configurable cache directory on its local disk to see if the segment had been previously downloaded. If a local cache entry already exists, the historical node will directly read the segment binary files from disk and load the segment.
+
+The segment cache is also leveraged when a historical node is first started. On startup, a historical node will search through its cache directory and immediately load and serve all segments that are found. This feature allows historical nodes to be queried as soon as they come online.
+
+Querying Segments
+-----------------
+
+Please see [Querying](Querying.html) for more information on querying historical nodes.
+
+For every query that a historical node services, it will log the query and report metrics on the time taken to run the query.
+
+Running
+-------
+
+Historical nodes can be run using the `io.druid.cli.Main` class with program arguments "server historical".
+
+Configuration
+-------------
+
+See [Configuration](Configuration.html).
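Tying the Running section to the tutorial changes later in this commit, a historical node launch might look like the following sketch; the heap size and classpath layout mirror the tutorial and are assumptions about your own setup.

```bash
# Minimal launch sketch; the heap size and config directory are placeholders.
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
  -classpath "lib/*:config/historical" \
  io.druid.cli.Main server historical
```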
@@ -57,7 +57,7 @@ http://<COORDINATOR_IP>:<port>/static/console.html

The coordinator retrieves worker setup metadata from the Druid [MySQL](MySQL.html) config table. This metadata contains information about the version of workers to create, the maximum and minimum number of workers in the cluster at one time, and additional information required to automatically create workers.

-Tasks are assigned to workers by creating entries under specific /tasks paths associated with a worker, similar to how the Druid master node assigns segments to compute nodes. See [Worker Configuration](Indexing-Service#configuration-1). Once a worker picks up a task, it deletes the task entry and announces a task status under a /status path associated with the worker. Tasks are submitted to a worker until the worker hits capacity. If all workers in a cluster are at capacity, the indexer coordinator node automatically creates new worker resources.
+Tasks are assigned to workers by creating entries under specific /tasks paths associated with a worker, similar to how the Druid coordinator node assigns segments to historical nodes. See [Worker Configuration](Indexing-Service#configuration-1). Once a worker picks up a task, it deletes the task entry and announces a task status under a /status path associated with the worker. Tasks are submitted to a worker until the worker hits capacity. If all workers in a cluster are at capacity, the indexer coordinator node automatically creates new worker resources.

#### Autoscaling

@@ -180,7 +180,7 @@ Worker nodes require [basic service configuration](https://github.com/metamx/dru
-Ddruid.indexer.taskDir=/mnt/persistent/task/
-Ddruid.indexer.hadoopWorkingPath=/tmp/druid-indexing

--Ddruid.worker.masterService=druid:sample_cluster:indexer
+-Ddruid.worker.coordinatorService=druid:sample_cluster:indexer

-Ddruid.indexer.fork.hostpattern=<IP>:%d
-Ddruid.indexer.fork.startport=8080
@@ -205,7 +205,7 @@ Many of the configurations for workers are similar to those for basic service co
|`druid.worker.version`|Version identifier for the worker.|0|
|`druid.worker.capacity`|Maximum number of tasks the worker can accept.|1|
|`druid.indexer.threads`|Number of processing threads per worker.|1|
-|`druid.worker.masterService`|Name of the indexer coordinator used for service discovery.|none|
+|`druid.worker.coordinatorService`|Name of the indexer coordinator used for service discovery.|none|
|`druid.indexer.fork.hostpattern`|The format of the host name.|none|
|`druid.indexer.fork.startport`|Port in which child JVM starts from.|none|
|`druid.indexer.fork.opts`|JVM options for child JVMs.|none|
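Putting the renamed worker property in context, a worker's runtime.properties might contain lines like the sketch below; the property names come from the table above, and the values echo the JVM flags shown earlier in this file or the documented defaults and are otherwise placeholders.

```properties
# Hypothetical worker fragment; values are placeholders or documented defaults.
druid.worker.version=0
druid.worker.capacity=1
druid.indexer.threads=1
druid.worker.coordinatorService=druid:sample_cluster:indexer
druid.indexer.fork.hostpattern=<IP>:%d
druid.indexer.fork.startport=8080
```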
@@ -11,8 +11,8 @@ Each type of node needs its own config file and directory, so create them as sub
```bash
mkdir config
mkdir config/realtime
-mkdir config/master
+mkdir config/coordinator
-mkdir config/compute
+mkdir config/historical
mkdir config/broker
```

@ -211,7 +211,7 @@ GRANT ALL ON druid.* TO 'druid'@'localhost' IDENTIFIED BY 'diurd';
|
||||||
CREATE database druid;
|
CREATE database druid;
|
||||||
```
|
```
|
||||||
|
|
||||||
The [Master](Master.html) node will create the tables it needs based on its configuration.
|
The [Coordinator](Coordinator.html) node will create the tables it needs based on its configuration.
|
||||||
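Once the coordinator has started, one quick sanity check is to list the tables it created. The snippet below assumes the `druid` database, user, and password created above; adjust them to your own setup.

```bash
# List the metadata tables the coordinator created on startup.
mysql -u druid -pdiurd -e "SHOW TABLES;" druid
```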
|
|
||||||
### Make sure you have ZooKeeper Running ###
|
### Make sure you have ZooKeeper Running ###
|
||||||
|
|
||||||
|
@ -232,11 +232,11 @@ cp conf/zoo_sample.cfg conf/zoo.cfg
|
||||||
cd ..
|
cd ..
|
||||||
```
|
```
|
||||||
|
|
||||||
### Launch a Master Node ###
|
### Launch a Coordinator Node ###
|
||||||
|
|
||||||
If you've already setup a realtime node, be aware that although you can run multiple node types on one physical computer, you must assign them unique ports. Having used 8080 for the [Realtime](Realtime.html) node, we use 8081 for the [Master](Master.html).
|
If you've already set up a realtime node, be aware that although you can run multiple node types on one physical computer, you must assign them unique ports. Having used 8080 for the [Realtime](Realtime.html) node, we use 8081 for the [Coordinator](Coordinator.html).
|
||||||
|
|
||||||
1. Setup a configuration file called config/master/runtime.properties similar to:
|
1. Setup a configuration file called config/coordinator/runtime.properties similar to:
|
||||||
|
|
||||||
```properties
|
```properties
|
||||||
druid.host=0.0.0.0:8081
|
druid.host=0.0.0.0:8081
|
||||||
|
@ -251,7 +251,7 @@ If you've already setup a realtime node, be aware that although you can run mult
|
||||||
# emitting, opaque marker
|
# emitting, opaque marker
|
||||||
druid.service=example
|
druid.service=example
|
||||||
|
|
||||||
druid.master.startDelay=PT60s
|
druid.coordinator.startDelay=PT60s
|
||||||
druid.request.logging.dir=/tmp/example/log
|
druid.request.logging.dir=/tmp/example/log
|
||||||
druid.realtime.specFile=realtime.spec
|
druid.realtime.specFile=realtime.spec
|
||||||
com.metamx.emitter.logging=true
|
com.metamx.emitter.logging=true
|
||||||
|
@ -281,17 +281,17 @@ If you've already setup a realtime node, be aware that although you can run mult
|
||||||
druid.paths.segmentInfoCache=/tmp/druid/segmentInfoCache
|
druid.paths.segmentInfoCache=/tmp/druid/segmentInfoCache
|
||||||
```
|
```
|
||||||
|
|
||||||
2. Launch the [Master](Master.html) node
|
2. Launch the [Coordinator](Coordinator.html) node
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
|
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
|
||||||
-classpath lib/*:config/master \
|
-classpath lib/*:config/coordinator \
|
||||||
com.metamx.druid.http.MasterMain
|
com.metamx.druid.http.CoordinatorMain
|
||||||
```
|
```
|
||||||
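As a quick, non-authoritative check that the coordinator is listening, you can fetch its web console. This assumes the port configured above (8081); the `/static/` console path comes from the coordinator documentation and may differ between versions.

```bash
# A 200 response with HTML means the coordinator is up and serving its console.
curl -v http://localhost:8081/static/
```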
|
|
||||||
### Launch a Compute/Historical Node ###
|
### Launch a Historical Node ###
|
||||||
|
|
||||||
1. Create a configuration file in config/compute/runtime.properties similar to:
|
1. Create a configuration file in config/historical/runtime.properties similar to:
|
||||||
|
|
||||||
```properties
|
```properties
|
||||||
druid.host=0.0.0.0:8082
|
druid.host=0.0.0.0:8082
|
||||||
|
@ -338,12 +338,12 @@ If you've already setup a realtime node, be aware that although you can run mult
|
||||||
druid.storage.local=true
|
druid.storage.local=true
|
||||||
```
|
```
|
||||||
|
|
||||||
2. Launch the compute node:
|
2. Launch the historical node:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
|
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
|
||||||
-classpath lib/*:config/compute \
|
-classpath lib/*:config/historical \
|
||||||
com.metamx.druid.http.ComputeMain
|
io.druid.cli.Main server historical
|
||||||
```
|
```
|
||||||
|
|
||||||
### Create a File of Records ###
|
### Create a File of Records ###
|
||||||
|
@ -360,7 +360,7 @@ We can use the same records we have been, in a file called records.json:
|
||||||
|
|
||||||
### Run the Hadoop Job ###
|
### Run the Hadoop Job ###
|
||||||
|
|
||||||
Now its time to run the Hadoop [Batch-ingestion](Batch-ingestion.html) job, HadoopDruidIndexer, which will fill a historical [Compute](Compute.html) node with data. First we'll need to configure the job.
|
Now it's time to run the Hadoop [Batch-ingestion](Batch-ingestion.html) job, HadoopDruidIndexer, which will fill a [Historical](Historical.html) node with data. First we'll need to configure the job.
|
||||||
|
|
||||||
1. Create a config called batchConfig.json similar to:
|
1. Create a config called batchConfig.json similar to:
|
||||||
|
|
||||||
|
|
|
@ -1,136 +0,0 @@
|
||||||
---
|
|
||||||
layout: doc_page
|
|
||||||
---
|
|
||||||
Master
|
|
||||||
======
|
|
||||||
|
|
||||||
The Druid master node is primarily responsible for segment management and distribution. More specifically, the Druid master node communicates to compute nodes to load or drop segments based on configurations. The Druid master is responsible for loading new segments, dropping outdated segments, managing segment replication, and balancing segment load.
|
|
||||||
|
|
||||||
The Druid master runs periodically, and the time between each run is a configurable parameter. Each time the Druid master runs, it assesses the current state of the cluster before deciding on the appropriate actions to take. Similar to the broker and compute nodes, the Druid master maintains a connection to a Zookeeper cluster for current cluster information. The master also maintains a connection to a database containing information about available segments and rules. Available segments are stored in a segment table, which lists all segments that should be loaded in the cluster. Rules are stored in a rule table and indicate how segments should be handled.
|
|
||||||
|
|
||||||
Before any unassigned segments are serviced by compute nodes, the available compute nodes for each tier are first sorted in terms of capacity, with the least-capacity servers having the highest priority. Unassigned segments are always assigned to the nodes with the least capacity to maintain a level of balance between nodes. The master does not directly communicate with a compute node when assigning it a new segment; instead, the master creates some temporary information about the new segment under the load queue path of the compute node. Once this request is seen, the compute node will load the segment and begin servicing it.
|
|
||||||
|
|
||||||
Rules
|
|
||||||
-----
|
|
||||||
|
|
||||||
Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different compute node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The master loads a set of rules from the database. Rules may be specific to a certain datasource and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The master will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule.
|
|
||||||
|
|
||||||
For more information on rules, see [Rule Configuration](Rule-Configuration.html).
|
|
||||||
|
|
||||||
Cleaning Up Segments
|
|
||||||
--------------------
|
|
||||||
|
|
||||||
Each run, the Druid master compares the list of available segments in the database with the segments currently being served in the cluster. Segments that are not in the database but are still being served in the cluster are flagged and appended to a removal list. Segments that are overshadowed (their versions are too old and their data has been replaced by newer segments) are also dropped.
|
|
||||||
|
|
||||||
Segment Availability
|
|
||||||
--------------------
|
|
||||||
|
|
||||||
If a compute node restarts or becomes unavailable for any reason, the Druid master will notice a node has gone missing and treat all segments served by that node as being dropped. Given a sufficient period of time, the segments may be reassigned to other compute nodes in the cluster. However, each segment that is dropped is not immediately forgotten. Instead, there is a transitional data structure that stores all dropped segments with an associated lifetime. The lifetime represents a period of time in which the master will not reassign a dropped segment. Hence, if a compute node becomes unavailable and available again within a short period of time, the compute node will start up and serve segments from its cache without any of those segments being reassigned across the cluster.
|
|
||||||
|
|
||||||
Balancing Segment Load
|
|
||||||
----------------------
|
|
||||||
|
|
||||||
To ensure an even distribution of segments across compute nodes in the cluster, the master node will find the total size of all segments being served by every compute node each time the master runs. For every compute node tier in the cluster, the master node will determine the compute node with the highest utilization and the compute node with the lowest utilization. The percent difference in utilization between the two nodes is computed, and if the result exceeds a certain threshold, a number of segments will be moved from the highest utilized node to the lowest utilized node. There is a configurable limit on the number of segments that can be moved from one node to another each time the master runs. Segments to be moved are selected at random and only moved if the resulting utilization calculation indicates the percentage difference between the highest and lowest servers has decreased.
|
|
||||||
|
|
||||||
HTTP Endpoints
|
|
||||||
--------------
|
|
||||||
|
|
||||||
The master node exposes several HTTP endpoints for interactions.
|
|
||||||
|
|
||||||
### GET
|
|
||||||
|
|
||||||
* `/info/master`
|
|
||||||
|
|
||||||
Returns the current true master of the cluster as a JSON object.
|
|
||||||
|
|
||||||
* `/info/cluster`
|
|
||||||
|
|
||||||
Returns JSON data about every node and segment in the cluster. Information about each node and each segment on each node will be returned.
|
|
||||||
|
|
||||||
* `/info/servers`
|
|
||||||
|
|
||||||
Returns information about servers in the cluster. Set the `?full` query parameter to get full metadata about all servers and their segments in the cluster.
|
|
||||||
|
|
||||||
* `/info/servers/{serverName}`
|
|
||||||
|
|
||||||
Returns full metadata about a specific server.
|
|
||||||
|
|
||||||
* `/info/servers/{serverName}/segments`
|
|
||||||
|
|
||||||
Returns a list of all segments for a server. Set the `?full` query parameter to get all segment metadata included
|
|
||||||
|
|
||||||
* `/info/servers/{serverName}/segments/{segmentId}`
|
|
||||||
|
|
||||||
Returns full metadata for a specific segment.
|
|
||||||
|
|
||||||
* `/info/segments`
|
|
||||||
|
|
||||||
Returns all segments in the cluster as a list. Set the `?full` flag to get all metadata about segments in the cluster
|
|
||||||
|
|
||||||
* `/info/segments/{segmentId}`
|
|
||||||
|
|
||||||
Returns full metadata for a specific segment
|
|
||||||
|
|
||||||
* `/info/datasources`
|
|
||||||
|
|
||||||
Returns a list of datasources in the cluster. Set the `?full` flag to get all metadata for every datasource in the cluster
|
|
||||||
|
|
||||||
* `/info/datasources/{dataSourceName}`
|
|
||||||
|
|
||||||
Returns full metadata for a datasource
|
|
||||||
|
|
||||||
* `/info/datasources/{dataSourceName}/segments`
|
|
||||||
|
|
||||||
Returns a list of all segments for a datasource. Set the `?full` flag to get full segment metadata for a datasource
|
|
||||||
|
|
||||||
* `/info/datasources/{dataSourceName}/segments/{segmentId}`
|
|
||||||
|
|
||||||
Returns full segment metadata for a specific segment
|
|
||||||
|
|
||||||
* `/info/rules`
|
|
||||||
|
|
||||||
Returns all rules for all data sources in the cluster including the default datasource.
|
|
||||||
|
|
||||||
* `/info/rules/{dataSourceName}`
|
|
||||||
|
|
||||||
Returns all rules for a specified datasource
|
|
||||||
|
|
||||||
### POST
|
|
||||||
|
|
||||||
* `/info/rules/{dataSourceName}`
|
|
||||||
|
|
||||||
POST with a list of rules in JSON form to update rules.
|
|
||||||
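To make the endpoints above concrete, the commands below are a sketch only; `localhost:8081` stands in for whatever host and port your master is actually running on.

```bash
# Hypothetical host and port; substitute your own.
MASTER=http://localhost:8081

curl "$MASTER/info/master"            # current leader
curl "$MASTER/info/datasources?full"  # all datasources with metadata
curl "$MASTER/info/servers?full"      # all servers and their segments
```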
|
|
||||||
The Master Console
|
|
||||||
------------------
|
|
||||||
|
|
||||||
The Druid master exposes a web GUI for displaying cluster information and rule configuration. After the master starts, the console can be accessed at http://HOST:PORT/static/. There exists a full cluster view, as well as views for individual compute nodes, datasources and segments themselves. Segment information can be displayed in raw JSON form or as part of a sortable and filterable table.
|
|
||||||
|
|
||||||
The master console also exposes an interface to creating and editing rules. All valid datasources configured in the segment database, along with a default datasource, are available for configuration. Rules of different types can be added, deleted or edited.
|
|
||||||
|
|
||||||
FAQ
|
|
||||||
---
|
|
||||||
|
|
||||||
1. **Do clients ever contact the master node?**
|
|
||||||
|
|
||||||
The master is not involved in a query.
|
|
||||||
|
|
||||||
Compute nodes never directly contact the master node. The Druid master tells the compute nodes to load/drop data via Zookeeper, but the compute nodes are completely unaware of the master.
|
|
||||||
|
|
||||||
Brokers also never contact the master. Brokers base their understanding of the data topology on metadata exposed by the compute nodes via ZK and are completely unaware of the master.
|
|
||||||
|
|
||||||
2. **Does it matter if the master node starts up before or after other processes?**
|
|
||||||
|
|
||||||
No. If the Druid master is not started up, no new segments will be loaded in the cluster and outdated segments will not be dropped. However, the master node can be started up at any time, and after a configurable delay, will start running master tasks.
|
|
||||||
|
|
||||||
This also means that if you have a working cluster and all of your masters die, the cluster will continue to function; it just won’t experience any changes to its data topology.
|
|
||||||
|
|
||||||
Running
|
|
||||||
-------
|
|
||||||
|
|
||||||
Master nodes can be run using the `com.metamx.druid.http.MasterMain` class.
|
|
||||||
|
|
||||||
Configuration
|
|
||||||
-------------
|
|
||||||
|
|
||||||
See [Configuration](Configuration.html).
|
|
|
@ -8,7 +8,7 @@ Segments Table
|
||||||
|
|
||||||
This is dictated by the `druid.database.segmentTable` property (Note that these properties are going to change in the next stable version after 0.4.12).
|
This is dictated by the `druid.database.segmentTable` property (Note that these properties are going to change in the next stable version after 0.4.12).
|
||||||
|
|
||||||
This table stores metadata about the segments that are available in the system. The table is polled by the [Master](Master.html) to determine the set of segments that should be available for querying in the system. The table has two main functional columns, the other columns are for indexing purposes.
|
This table stores metadata about the segments that are available in the system. The table is polled by the [Coordinator](Coordinator.html) to determine the set of segments that should be available for querying in the system. The table has two main functional columns, the other columns are for indexing purposes.
|
||||||
|
|
||||||
The `used` column is a boolean "tombstone". A 1 means that the segment should be "used" by the cluster (i.e. it should be loaded and available for requests). A 0 means that the segment should not be actively loaded into the cluster. We do this as a means of removing segments from the cluster without actually removing their metadata (which allows for simpler rolling back if that is ever an issue).
|
The `used` column is a boolean "tombstone". A 1 means that the segment should be "used" by the cluster (i.e. it should be loaded and available for requests). A 0 means that the segment should not be actively loaded into the cluster. We do this as a means of removing segments from the cluster without actually removing their metadata (which allows for simpler rolling back if that is ever an issue).
|
||||||
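For example, a segment can be pulled out of rotation without deleting its metadata by flipping this flag. The statement below is a sketch: the segment id is hypothetical, and the table name is whatever `druid.database.segmentTable` points at (here assumed to be `segments`), using the tutorial's MySQL credentials.

```bash
# Disable a single segment by setting its tombstone to 0 (placeholder id and credentials).
mysql -u druid -pdiurd \
  -e "UPDATE segments SET used = 0 WHERE id = 'my_datasource_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_v1';" \
  druid
```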
|
|
||||||
|
@ -36,7 +36,7 @@ Note that the format of this blob can and will change from time-to-time.
|
||||||
Rule Table
|
Rule Table
|
||||||
----------
|
----------
|
||||||
|
|
||||||
The rule table is used to store the various rules about where segments should land. These rules are used by the [Master](Master.html) when making segment (re-)allocation decisions about the cluster.
|
The rule table is used to store the various rules about where segments should land. These rules are used by the [Coordinator](Coordinator.html) when making segment (re-)allocation decisions about the cluster.
|
||||||
|
|
||||||
Config Table
|
Config Table
|
||||||
------------
|
------------
|
||||||
|
|
|
@ -3,7 +3,7 @@ layout: doc_page
|
||||||
---
|
---
|
||||||
# Setup #
|
# Setup #
|
||||||
|
|
||||||
Before we start querying druid, we're going to finish setting up a complete cluster on localhost. In [Loading Your Data](Loading-Your-Data.html) we setup a [Realtime](Realtime.html), [Compute](Compute.html) and [Master](Master.html) node. If you've already completed that tutorial, you need only follow the directions for 'Booting a Broker Node'.
|
Before we start querying Druid, we're going to finish setting up a complete cluster on localhost. In [Loading Your Data](Loading-Your-Data.html) we set up a [Realtime](Realtime.html), [Historical](Historical.html) and [Coordinator](Coordinator.html) node. If you've already completed that tutorial, you need only follow the directions for 'Booting a Broker Node'.
|
||||||
|
|
||||||
## Booting a Broker Node ##
|
## Booting a Broker Node ##
|
||||||
|
|
||||||
|
@ -65,16 +65,16 @@ Before we start querying druid, we're going to finish setting up a complete clus
|
||||||
com.metamx.druid.http.BrokerMain
|
com.metamx.druid.http.BrokerMain
|
||||||
```
|
```
|
||||||
|
|
||||||
## Booting a Master Node ##
|
## Booting a Coordinator Node ##
|
||||||
|
|
||||||
1. Setup a config file at config/master/runtime.properties that looks like this: [https://gist.github.com/rjurney/5818870](https://gist.github.com/rjurney/5818870)
|
1. Setup a config file at config/coordinator/runtime.properties that looks like this: [https://gist.github.com/rjurney/5818870](https://gist.github.com/rjurney/5818870)
|
||||||
|
|
||||||
2. Run the master node:
|
2. Run the coordinator node:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
|
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
|
||||||
-classpath services/target/druid-services-0.5.50-SNAPSHOT-selfcontained.jar:config/master \
|
-classpath services/target/druid-services-0.5.50-SNAPSHOT-selfcontained.jar:config/coordinator \
|
||||||
com.metamx.druid.http.MasterMain
|
io.druid.cli.Main server coordinator
|
||||||
```
|
```
|
||||||
|
|
||||||
## Booting a Realtime Node ##
|
## Booting a Realtime Node ##
|
||||||
|
@ -92,15 +92,15 @@ Before we start querying druid, we're going to finish setting up a complete clus
|
||||||
com.metamx.druid.realtime.RealtimeMain
|
com.metamx.druid.realtime.RealtimeMain
|
||||||
```
|
```
|
||||||
|
|
||||||
## Booting a Compute Node ##
|
## Booting a Historical Node ##
|
||||||
|
|
||||||
1. Setup a config file at config/compute/runtime.properties that looks like this: [https://gist.github.com/rjurney/5818885](https://gist.github.com/rjurney/5818885)
|
1. Setup a config file at config/historical/runtime.properties that looks like this: [https://gist.github.com/rjurney/5818885](https://gist.github.com/rjurney/5818885)
|
||||||
2. Run the compute node:
|
2. Run the historical node:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
|
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
|
||||||
-classpath services/target/druid-services-0.5.50-SNAPSHOT-selfcontained.jar:config/compute \
|
-classpath services/target/druid-services-0.5.50-SNAPSHOT-selfcontained.jar:config/historical \
|
||||||
com.metamx.druid.http.ComputeMain
|
io.druid.cli.Main server historical
|
||||||
```
|
```
|
||||||
|
|
||||||
# Querying Your Data #
|
# Querying Your Data #
|
||||||
|
@ -109,7 +109,7 @@ Now that we have a complete cluster setup on localhost, we need to load data. To
|
||||||
|
|
||||||
## Querying Different Nodes ##
|
## Querying Different Nodes ##
|
||||||
|
|
||||||
As a shared-nothing system, there are three ways to query druid, against the [Realtime](Realtime.html), [Compute](Compute.html) or [Broker](Broker.html) node. Querying a Realtime node returns only realtime data, querying a compute node returns only historical segments. Querying the broker will query both realtime and compute segments and compose an overall result for the query. This is the normal mode of operation for queries in druid.
|
Druid is a shared-nothing system, and there are three ways to query it: against the [Realtime](Realtime.html), [Historical](Historical.html) or [Broker](Broker.html) node. Querying a Realtime node returns only realtime data, while querying a historical node returns only historical segments. Querying the broker will query both realtime and historical segments and compose an overall result for the query. This is the normal mode of operation for queries in Druid.
|
||||||
|
|
||||||
### Construct a Query ###
|
### Construct a Query ###
|
||||||
|
|
||||||
|
@ -148,7 +148,7 @@ See our result:
|
||||||
} ]
|
} ]
|
||||||
```
|
```
|
||||||
|
|
||||||
### Querying the Compute Node ###
|
### Querying the Historical Node ###
|
||||||
Run the query against port 8082:
|
Run the query against port 8082:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
|
|
@ -4,7 +4,7 @@ layout: doc_page
|
||||||
Querying
|
Querying
|
||||||
========
|
========
|
||||||
|
|
||||||
Queries are made using an HTTP REST style request to a [Broker](Broker.html), [Compute](Compute.html), or [Realtime](Realtime.html) node. The query is expressed in JSON and each of these node types expose the same REST query interface.
|
Queries are made using an HTTP REST-style request to a [Broker](Broker.html), [Historical](Historical.html), or [Realtime](Realtime.html) node. The query is expressed in JSON, and each of these node types exposes the same REST query interface.
|
||||||
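Before describing the query body itself, here is a transport-level sketch of submitting a query. Treat the host, port, datasource, and URL path as assumptions; the exact endpoint can vary between Druid versions.

```bash
# Sketch: POST a minimal timeBoundary query; host, port, path, and datasource are placeholders.
cat > query.json <<'EOF'
{
  "queryType": "timeBoundary",
  "dataSource": "wikipedia"
}
EOF

curl -X POST 'http://localhost:8083/druid/v2/?pretty' \
  -H 'Content-Type: application/json' \
  -d @query.json
```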
|
|
||||||
We start by describing an example query with additional comments that mention possible variations. Query operators are also summarized in a table below.
|
We start by describing an example query with additional comments that mention possible variations. Query operators are also summarized in a table below.
|
||||||
|
|
||||||
|
|
|
@ -4,7 +4,7 @@ layout: doc_page
|
||||||
Realtime
|
Realtime
|
||||||
========
|
========
|
||||||
|
|
||||||
Realtime nodes provide a realtime index. Data indexed via these nodes is immediately available for querying. Realtime nodes will periodically build segments representing the data they’ve collected over some span of time and hand these segments off to [Compute](Compute.html) nodes.
|
Realtime nodes provide a realtime index. Data indexed via these nodes is immediately available for querying. Realtime nodes will periodically build segments representing the data they’ve collected over some span of time and hand these segments off to [Historical](Historical.html) nodes.
|
||||||
|
|
||||||
Running
|
Running
|
||||||
-------
|
-------
|
||||||
|
|
|
@ -1,7 +1,7 @@
|
||||||
---
|
---
|
||||||
layout: doc_page
|
layout: doc_page
|
||||||
---
|
---
|
||||||
Note: It is recommended that the master console is used to configure rules. However, the master node does have HTTP endpoints to programmatically configure rules.
|
Note: It is recommended that the coordinator console be used to configure rules. However, the coordinator node does have HTTP endpoints to programmatically configure rules.
|
||||||
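As a sketch of what programmatic rule configuration might look like, the command below POSTs a single `loadByPeriod` rule (described later on this page) to the coordinator's rules endpoint. The host, port, datasource, tier name, and period are all placeholders.

```bash
# Sketch only: every value below is a placeholder.
curl -X POST "http://localhost:8081/info/rules/wikipedia" \
  -H 'Content-Type: application/json' \
  -d '[{"type": "loadByPeriod", "period": "P1M", "tier": "_default_tier"}]'
```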
|
|
||||||
Load Rules
|
Load Rules
|
||||||
----------
|
----------
|
||||||
|
@ -22,7 +22,7 @@ Interval load rules are of the form:
|
||||||
|
|
||||||
* `type` - this should always be "loadByInterval"
|
* `type` - this should always be "loadByInterval"
|
||||||
* `interval` - A JSON Object representing ISO-8601 Intervals
|
* `interval` - A JSON Object representing ISO-8601 Intervals
|
||||||
* `tier` - the configured compute node tier
|
* `tier` - the configured historical node tier
|
||||||
|
|
||||||
### Period Load Rule
|
### Period Load Rule
|
||||||
|
|
||||||
|
@ -38,7 +38,7 @@ Period load rules are of the form:
|
||||||
|
|
||||||
* `type` - this should always be "loadByPeriod"
|
* `type` - this should always be "loadByPeriod"
|
||||||
* `period` - A JSON Object representing ISO-8601 Periods
|
* `period` - A JSON Object representing ISO-8601 Periods
|
||||||
* `tier` - the configured compute node tier
|
* `tier` - the configured historical node tier
|
||||||
|
|
||||||
The interval of a segment will be compared against the specified period. The rule matches if the period overlaps the interval.
|
The interval of a segment will be compared against the specified period. The rule matches if the period overlaps the interval.
|
||||||
|
|
||||||
|
|
|
@ -4,7 +4,7 @@ layout: doc_page
|
||||||
Segments
|
Segments
|
||||||
========
|
========
|
||||||
|
|
||||||
Segments are the fundamental structure to store data in Druid. [Compute](Compute.html) and [Realtime](Realtime.html) nodes load and serve segments for querying. To construct segments, Druid will always shard data by a time partition. Data may be further sharded based on dimension cardinality and row count.
|
Segments are the fundamental structure to store data in Druid. [Historical](Historical.html) and [Realtime](Realtime.html) nodes load and serve segments for querying. To construct segments, Druid will always shard data by a time partition. Data may be further sharded based on dimension cardinality and row count.
|
||||||
|
|
||||||
The latest Druid segment version is `v9`.
|
The latest Druid segment version is `v9`.
|
||||||
|
|
||||||
|
|
|
@ -22,9 +22,9 @@ We started with a minimal CentOS installation but you can use any other compatib
|
||||||
1. A Kafka Broker
|
1. A Kafka Broker
|
||||||
1. A single-node Zookeeper ensemble
|
1. A single-node Zookeeper ensemble
|
||||||
1. A single-node Riak-CS cluster
|
1. A single-node Riak-CS cluster
|
||||||
1. A Druid [Master](Master.html)
|
1. A Druid [Coordinator](Coordinator.html)
|
||||||
1. A Druid [Broker](Broker.html)
|
1. A Druid [Broker](Broker.html)
|
||||||
1. A Druid [Compute](Compute.html)
|
1. A Druid [Historical](Historical.html)
|
||||||
1. A Druid [Realtime](Realtime.html)
|
1. A Druid [Realtime](Realtime.html)
|
||||||
|
|
||||||
This just walks through getting the relevant software installed and running. You will then need to configure the [Realtime](Realtime.html) node to take in your data.
|
This just walks through getting the relevant software installed and running. You will then need to configure the [Realtime](Realtime.html) node to take in your data.
|
||||||
|
|
|
@ -94,14 +94,14 @@ mkdir config
|
||||||
|
|
||||||
If you are interested in learning more about Druid configuration files, check out this [link](https://github.com/metamx/druid/wiki/Configuration). Many aspects of Druid are customizable. For the purposes of this tutorial, we are going to use default values for most things.
|
If you are interested in learning more about Druid configuration files, check out this [link](https://github.com/metamx/druid/wiki/Configuration). Many aspects of Druid are customizable. For the purposes of this tutorial, we are going to use default values for most things.
|
||||||
|
|
||||||
### Start a Master Node ###
|
### Start a Coordinator Node ###
|
||||||
|
|
||||||
Master nodes are in charge of load assignment and distribution. Master nodes monitor the status of the cluster and command compute nodes to assign and drop segments.
|
Coordinator nodes are in charge of load assignment and distribution. Coordinator nodes monitor the status of the cluster and command historical nodes to load and drop segments.
|
||||||
|
|
||||||
To create the master config file:
|
To create the coordinator config file:
|
||||||
|
|
||||||
```
|
```
|
||||||
mkdir config/master
|
mkdir config/coordinator
|
||||||
```
|
```
|
||||||
|
|
||||||
Under the directory we just created, create the file `runtime.properties` with the following contents:
|
Under the directory we just created, create the file `runtime.properties` with the following contents:
|
||||||
|
@ -109,7 +109,7 @@ Under the directory we just created, create the file `runtime.properties` with t
|
||||||
```
|
```
|
||||||
druid.host=127.0.0.1:8082
|
druid.host=127.0.0.1:8082
|
||||||
druid.port=8082
|
druid.port=8082
|
||||||
druid.service=master
|
druid.service=coordinator
|
||||||
|
|
||||||
# logging
|
# logging
|
||||||
com.metamx.emitter.logging=true
|
com.metamx.emitter.logging=true
|
||||||
|
@ -132,24 +132,24 @@ druid.database.connectURI=jdbc:mysql://localhost:3306/druid
|
||||||
druid.database.ruleTable=rules
|
druid.database.ruleTable=rules
|
||||||
druid.database.configTable=config
|
druid.database.configTable=config
|
||||||
|
|
||||||
# master runtime configs
|
# coordinator runtime configs
|
||||||
druid.master.startDelay=PT60S
|
druid.coordinator.startDelay=PT60S
|
||||||
```
|
```
|
||||||
|
|
||||||
To start the master node:
|
To start the coordinator node:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:config/master com.metamx.druid.http.MasterMain
|
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:config/coordinator io.druid.cli.Main server coordinator
|
||||||
```
|
```
|
||||||
|
|
||||||
### Start a Compute Node ###
|
### Start a Historical Node ###
|
||||||
|
|
||||||
Compute nodes are the workhorses of a cluster and are in charge of loading historical segments and making them available for queries. Our Wikipedia segment will be downloaded by a compute node.
|
Historical nodes are the workhorses of a cluster and are in charge of loading historical segments and making them available for queries. Our Wikipedia segment will be downloaded by a historical node.
|
||||||
|
|
||||||
To create the compute config file:
|
To create the historical config file:
|
||||||
|
|
||||||
```
|
```
|
||||||
mkdir config/compute
|
mkdir config/historical
|
||||||
```
|
```
|
||||||
|
|
||||||
Under the directory we just created, create the file `runtime.properties` with the following contents:
|
Under the directory we just created, create the file `runtime.properties` with the following contents:
|
||||||
|
@ -157,7 +157,7 @@ Under the directory we just created, create the file `runtime.properties` with t
|
||||||
```
|
```
|
||||||
druid.host=127.0.0.1:8081
|
druid.host=127.0.0.1:8081
|
||||||
druid.port=8081
|
druid.port=8081
|
||||||
druid.service=compute
|
druid.service=historical
|
||||||
|
|
||||||
# logging
|
# logging
|
||||||
com.metamx.emitter.logging=true
|
com.metamx.emitter.logging=true
|
||||||
|
@ -185,15 +185,15 @@ druid.paths.segmentInfoCache=/tmp/druid/segmentInfoCache
|
||||||
druid.server.maxSize=100000000
|
druid.server.maxSize=100000000
|
||||||
```
|
```
|
||||||
|
|
||||||
To start the compute node:
|
To start the historical node:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:config/compute com.metamx.druid.http.ComputeMain
|
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:config/historical io.druid.cli.Main server historical
|
||||||
```
|
```
|
||||||
|
|
||||||
### Start a Broker Node ###
|
### Start a Broker Node ###
|
||||||
|
|
||||||
Broker nodes are responsible for figuring out which compute and/or realtime nodes correspond to which queries. They also merge partial results from these nodes in a scatter/gather fashion.
|
Broker nodes are responsible for figuring out which historical and/or realtime nodes correspond to which queries. They also merge partial results from these nodes in a scatter/gather fashion.
|
||||||
|
|
||||||
To create the broker config file:
|
To create the broker config file:
|
||||||
|
|
||||||
|
@ -229,7 +229,7 @@ java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:config/
|
||||||
|
|
||||||
## Loading the Data ##
|
## Loading the Data ##
|
||||||
|
|
||||||
The MySQL dependency we introduced earlier on contains a 'segments' table that contains entries for segments that should be loaded into our cluster. The Druid master compares this table with segments that already exist in the cluster to determine what should be loaded and dropped. To load our wikipedia segment, we need to create an entry in our MySQL segment table.
|
The MySQL dependency we introduced earlier on contains a 'segments' table that contains entries for segments that should be loaded into our cluster. The Druid coordinator compares this table with segments that already exist in the cluster to determine what should be loaded and dropped. To load our wikipedia segment, we need to create an entry in our MySQL segment table.
|
||||||
|
|
||||||
Usually, when new segments are created, these MySQL entries are created directly so you never have to do this by hand. For this tutorial, we can do this manually by going back into MySQL and issuing:
|
Usually, when new segments are created, these MySQL entries are created directly so you never have to do this by hand. For this tutorial, we can do this manually by going back into MySQL and issuing:
|
||||||
|
|
||||||
|
@ -238,14 +238,14 @@ use druid;
|
||||||
INSERT INTO segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}');
|
INSERT INTO segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}');
|
||||||
```
|
```
|
||||||
|
|
||||||
If you look in your master node logs, you should, after a maximum of a minute or so, see logs of the following form:
|
If you look in your coordinator node logs, you should, after a maximum of a minute or so, see logs of the following form:
|
||||||
|
|
||||||
```
|
```
|
||||||
2013-08-08 22:48:41,967 INFO [main-EventThread] com.metamx.druid.master.LoadQueuePeon - Server[/druid/loadQueue/127.0.0.1:8081] done processing [/druid/loadQueue/127.0.0.1:8081/wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z]
|
2013-08-08 22:48:41,967 INFO [main-EventThread] com.metamx.druid.coordinator.LoadQueuePeon - Server[/druid/loadQueue/127.0.0.1:8081] done processing [/druid/loadQueue/127.0.0.1:8081/wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z]
|
||||||
2013-08-08 22:48:41,969 INFO [ServerInventoryView-0] com.metamx.druid.client.SingleServerInventoryView - Server[127.0.0.1:8081] added segment[wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z]
|
2013-08-08 22:48:41,969 INFO [ServerInventoryView-0] com.metamx.druid.client.SingleServerInventoryView - Server[127.0.0.1:8081] added segment[wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z]
|
||||||
```
|
```
|
||||||
|
|
||||||
When the segment completes downloading and ready for queries, you should see the following message on your compute node logs:
|
When the segment has finished downloading and is ready for queries, you should see the following message in your historical node logs:
|
||||||
|
|
||||||
```
|
```
|
||||||
2013-08-08 22:48:41,959 INFO [ZkCoordinator-0] com.metamx.druid.coordination.BatchDataSegmentAnnouncer - Announcing segment[wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z] at path[/druid/segments/127.0.0.1:8081/2013-08-08T22:48:41.959Z]
|
2013-08-08 22:48:41,959 INFO [ZkCoordinator-0] com.metamx.druid.coordination.BatchDataSegmentAnnouncer - Announcing segment[wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z] at path[/druid/segments/127.0.0.1:8081/2013-08-08T22:48:41.959Z]
|
||||||
|
|
|
@ -3,9 +3,9 @@ layout: doc_page
|
||||||
---
|
---
|
||||||
Druid uses ZooKeeper (ZK) for management of current cluster state. The operations that happen over ZK are
|
Druid uses ZooKeeper (ZK) for management of current cluster state. The operations that happen over ZK are
|
||||||
|
|
||||||
1. [Master](Master.html) leader election
|
1. [Coordinator](Coordinator.html) leader election
|
||||||
2. Segment "publishing" protocol from [Compute](Compute.html) and [Realtime](Realtime.html)
|
2. Segment "publishing" protocol from [Historical](Historical.html) and [Realtime](Realtime.html)
|
||||||
3. Segment load/drop protocol between [Master](Master.html) and [Compute](Compute.html)
|
3. Segment load/drop protocol between [Coordinator](Coordinator.html) and [Historical](Historical.html)
|
||||||
|
|
||||||
### Property Configuration
|
### Property Configuration
|
||||||
|
|
||||||
|
@ -24,7 +24,7 @@ druid.zk.paths.propertiesPath=${druid.zk.paths.base}/properties
|
||||||
druid.zk.paths.announcementsPath=${druid.zk.paths.base}/announcements
|
druid.zk.paths.announcementsPath=${druid.zk.paths.base}/announcements
|
||||||
druid.zk.paths.servedSegmentsPath=${druid.zk.paths.base}/servedSegments
|
druid.zk.paths.servedSegmentsPath=${druid.zk.paths.base}/servedSegments
|
||||||
druid.zk.paths.loadQueuePath=${druid.zk.paths.base}/loadQueue
|
druid.zk.paths.loadQueuePath=${druid.zk.paths.base}/loadQueue
|
||||||
druid.zk.paths.masterPath=${druid.zk.paths.base}/master
|
druid.zk.paths.coordinatorPath=${druid.zk.paths.base}/coordinator
|
||||||
druid.zk.paths.indexer.announcementsPath=${druid.zk.paths.base}/indexer/announcements
|
druid.zk.paths.indexer.announcementsPath=${druid.zk.paths.base}/indexer/announcements
|
||||||
druid.zk.paths.indexer.tasksPath=${druid.zk.paths.base}/indexer/tasks
|
druid.zk.paths.indexer.tasksPath=${druid.zk.paths.base}/indexer/tasks
|
||||||
druid.zk.paths.indexer.statusPath=${druid.zk.paths.base}/indexer/status
|
druid.zk.paths.indexer.statusPath=${druid.zk.paths.base}/indexer/status
|
||||||
|
@ -37,19 +37,19 @@ NOTE: We also use Curator’s service discovery module to expose some services v
|
||||||
druid.zk.paths.discoveryPath
|
druid.zk.paths.discoveryPath
|
||||||
```
|
```
|
||||||
|
|
||||||
### Master Leader Election
|
### Coordinator Leader Election
|
||||||
|
|
||||||
We use the Curator LeadershipLatch recipe to do leader election at path
|
We use the Curator LeadershipLatch recipe to do leader election at path
|
||||||
|
|
||||||
```
|
```
|
||||||
${druid.zk.paths.masterPath}/_MASTER
|
${druid.zk.paths.coordinatorPath}/_COORDINATOR
|
||||||
```
|
```
|
||||||
|
|
||||||
### Segment "publishing" protocol from Compute and Realtime
|
### Segment "publishing" protocol from Historical and Realtime
|
||||||
|
|
||||||
The `announcementsPath` and `servedSegmentsPath` are used for this.
|
The `announcementsPath` and `servedSegmentsPath` are used for this.
|
||||||
|
|
||||||
All [Compute](Compute.html) and [Realtime](Realtime.html) nodes publish themselves on the `announcementsPath`, specifically, they will create an ephemeral znode at
|
All [Historical](Historical.html) and [Realtime](Realtime.html) nodes publish themselves on the `announcementsPath`, specifically, they will create an ephemeral znode at
|
||||||
|
|
||||||
```
|
```
|
||||||
${druid.zk.paths.announcementsPath}/${druid.host}
|
${druid.zk.paths.announcementsPath}/${druid.host}
|
||||||
|
@ -67,16 +67,16 @@ And as they load up segments, they will attach ephemeral znodes that look like
|
||||||
${druid.zk.paths.servedSegmentsPath}/${druid.host}/_segment_identifier_
|
${druid.zk.paths.servedSegmentsPath}/${druid.host}/_segment_identifier_
|
||||||
```
|
```
|
||||||
|
|
||||||
Nodes like the [Master](Master.html) and [Broker](Broker.html) can then watch these paths to see which nodes are currently serving which segments.
|
Nodes like the [Coordinator](Coordinator.html) and [Broker](Broker.html) can then watch these paths to see which nodes are currently serving which segments.
|
||||||
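Purely as an illustration, you can list those paths with the ZooKeeper CLI to see what the Coordinator and Broker see; the base path `/druid` and the node host below are assumptions.

```bash
# Placeholders: ZooKeeper address, base path, and node host.
ZK=localhost:2181
bin/zkCli.sh -server $ZK ls /druid/announcements                  # nodes that have announced themselves
bin/zkCli.sh -server $ZK ls /druid/servedSegments/127.0.0.1:8082  # segments served by one node
```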
|
|
||||||
### Segment load/drop protocol between Master and Compute
|
### Segment load/drop protocol between Coordinator and Historical
|
||||||
|
|
||||||
The `loadQueuePath` is used for this.
|
The `loadQueuePath` is used for this.
|
||||||
|
|
||||||
When the [Master](Master.html) decides that a [Compute](Compute.html) node should load or drop a segment, it writes an ephemeral znode to
|
When the [Coordinator](Coordinator.html) decides that a [Historical](Historical.html) node should load or drop a segment, it writes an ephemeral znode to
|
||||||
|
|
||||||
```
|
```
|
||||||
${druid.zk.paths.loadQueuePath}/_host_of_compute_node/_segment_identifier
|
${druid.zk.paths.loadQueuePath}/_host_of_historical_node/_segment_identifier
|
||||||
```
|
```
|
||||||
|
|
||||||
This node will contain a payload that indicates to the Compute node what it should do with the given segment. When the Compute node is done with the work, it will delete the znode in order to signify to the Master that it is complete.
|
This node will contain a payload that indicates to the historical node what it should do with the given segment. When the historical node is done with the work, it will delete the znode in order to signify to the Coordinator that it is complete.
|
||||||
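To watch this exchange happen, you can list the load queue path while a segment is being assigned. The host below matches the example logs elsewhere in these docs and is only an example; entries appear when the Coordinator assigns work and disappear once the historical node finishes.

```bash
# Sketch: inspect a historical node's load queue (placeholder ZooKeeper address and host).
bin/zkCli.sh -server localhost:2181 ls /druid/loadQueue/127.0.0.1:8081
```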
|
|
|
@ -43,9 +43,9 @@ h2. Architecture
|
||||||
* "Design":./Design.html
|
* "Design":./Design.html
|
||||||
* "Segments":./Segments.html
|
* "Segments":./Segments.html
|
||||||
* Node Types
|
* Node Types
|
||||||
** "Compute":./Compute.html
|
** "Historical":./Historical.html
|
||||||
** "Broker":./Broker.html
|
** "Broker":./Broker.html
|
||||||
** "Master":./Master.html
|
** "Coordinator":./Coordinator.html
|
||||||
*** "Rule Configuration":./Rule-Configuration.html
|
*** "Rule Configuration":./Rule-Configuration.html
|
||||||
** "Realtime":./Realtime.html
|
** "Realtime":./Realtime.html
|
||||||
*** "Firehose":./Firehose.html
|
*** "Firehose":./Firehose.html
|
||||||
|
|
|
@ -1,27 +0,0 @@
|
||||||
# Setup Oracle Java
|
|
||||||
sudo apt-get update
|
|
||||||
sudo add-apt-repository -y ppa:webupd8team/java
|
|
||||||
sudo apt-get update
|
|
||||||
|
|
||||||
# Setup yes answer to license question
|
|
||||||
echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
|
|
||||||
echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
|
|
||||||
sudo apt-get -y -q install oracle-java7-installer
|
|
||||||
|
|
||||||
# Automated Kafka setup
|
|
||||||
curl http://static.druid.io/artifacts/kafka-0.7.2-incubating-bin.tar.gz -o /tmp/kafka-0.7.2-incubating-bin.tar.gz
|
|
||||||
tar -xvzf /tmp/kafka-0.7.2-incubating-bin.tar.gz
|
|
||||||
cd kafka-0.7.2-incubating-bin
|
|
||||||
cat config/zookeeper.properties
|
|
||||||
nohup bin/zookeeper-server-start.sh config/zookeeper.properties 2>&1 > /dev/null &
|
|
||||||
# in a new console
|
|
||||||
nohup bin/kafka-server-start.sh config/server.properties 2>&1 > /dev/null &
|
|
||||||
|
|
||||||
# Install dependencies - mysql must be built from source, as the 12.04 apt-get hangs
|
|
||||||
export DEBIAN_FRONTEND=noninteractive
|
|
||||||
sudo debconf-set-selections <<< 'mysql-server-5.5 mysql-server/root_password password diurd'
|
|
||||||
sudo debconf-set-selections <<< 'mysql-server-5.5 mysql-server/root_password_again password diurd'
|
|
||||||
sudo apt-get -q -y -V --force-yes --reinstall install mysql-server-5.5
|
|
||||||
|
|
||||||
echo "ALL DONE with druid environment setup! Hit CTRL-C to proceed."
|
|
||||||
exit 0
|
|
|
@ -1,22 +0,0 @@
|
||||||
# Is localhost expected with multi-node?
|
|
||||||
mysql -u root -pdiurd -e "GRANT ALL ON druid.* TO 'druid'@'localhost' IDENTIFIED BY 'diurd'; CREATE database druid;" 2>&1 > /dev/null
|
|
||||||
|
|
||||||
tar -xvzf druid-services-*-bin.tar.gz 2>&1 > /dev/null
|
|
||||||
cd druid-services-* 2>&1 > /dev/null
|
|
||||||
|
|
||||||
mkdir logs 2>&1 > /dev/null
|
|
||||||
|
|
||||||
# Now start a realtime node
|
|
||||||
nohup java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Ddruid.realtime.specFile=config/realtime/realtime.spec -classpath lib/druid-services-0.5.5-SNAPSHOT-selfcontained.jar:config/realtime io.druid.cli.Main server realtime 2>&1 > logs/realtime.log &
|
|
||||||
|
|
||||||
# And a master node
|
|
||||||
nohup java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/druid-services-0.5.5-SNAPSHOT-selfcontained.jar:config/master io.druid.cli.Main server coordinator 2>&1 > logs/master.log &
|
|
||||||
|
|
||||||
# And a compute node
|
|
||||||
nohup java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/druid-services-0.5.5-SNAPSHOT-selfcontained.jar:config/compute io.druid.cli.Main server historical 2>&1 > logs/compute.log &
|
|
||||||
|
|
||||||
# And a broker node
|
|
||||||
nohup java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/druid-services-0.5.5-SNAPSHOT-selfcontained.jar:config/broker io.druid.cli.Main server broker 2>&1 > logs/broker.log &
|
|
||||||
|
|
||||||
echo "Hit CTRL-C to continue..."
|
|
||||||
exit 0
|
|
|
@ -2,7 +2,7 @@
|
||||||
Druid can use Cassandra as a deep storage mechanism. Segments and their metadata are stored in Cassandra in two tables:
|
Druid can use Cassandra as a deep storage mechanism. Segments and their metadata are stored in Cassandra in two tables:
|
||||||
`index_storage` and `descriptor_storage`. Underneath the hood, the Cassandra integration leverages Astyanax. The
|
`index_storage` and `descriptor_storage`. Underneath the hood, the Cassandra integration leverages Astyanax. The
|
||||||
index storage table is a [Chunked Object](https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store) repository. It contains
|
index storage table is a [Chunked Object](https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store) repository. It contains
|
||||||
compressed segments for distribution to compute nodes. Since segments can be large, the Chunked Object storage allows the integration to multi-thread
|
compressed segments for distribution to historical nodes. Since segments can be large, the Chunked Object storage allows the integration to multi-thread
|
||||||
the write to Cassandra, and spreads the data across all the nodes in a cluster. The descriptor storage table is a normal C* table that
|
the write to Cassandra, and spreads the data across all the nodes in a cluster. The descriptor storage table is a normal C* table that
|
||||||
stores the segment metadata.
|
stores the segment metadata.
|
||||||
|
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
druid.host=localhost
|
druid.host=localhost
|
||||||
druid.service=master
|
druid.service=coordinator
|
||||||
druid.port=8082
|
druid.port=8082
|
||||||
|
|
||||||
druid.zk.service.host=localhost
|
druid.zk.service.host=localhost
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
druid.host=localhost
|
druid.host=localhost
|
||||||
druid.service=compute
|
druid.service=historical
|
||||||
druid.port=8081
|
druid.port=8081
|
||||||
|
|
||||||
druid.zk.service.host=localhost
|
druid.zk.service.host=localhost
|
||||||
|
|
|
@ -20,8 +20,8 @@
|
||||||
package io.druid.guice;
|
package io.druid.guice;
|
||||||
|
|
||||||
import com.google.inject.Binder;
|
import com.google.inject.Binder;
|
||||||
import io.druid.indexing.coordinator.config.ForkingTaskRunnerConfig;
|
import io.druid.indexing.overlord.config.ForkingTaskRunnerConfig;
|
||||||
import io.druid.indexing.coordinator.config.RemoteTaskRunnerConfig;
|
import io.druid.indexing.overlord.config.RemoteTaskRunnerConfig;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
*/
|
*/
|
||||||
|
|
|
@ -21,7 +21,7 @@ package io.druid.indexing.common.actions;
|
||||||
|
|
||||||
import com.metamx.emitter.EmittingLogger;
|
import com.metamx.emitter.EmittingLogger;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.coordinator.TaskStorage;
|
import io.druid.indexing.overlord.TaskStorage;
|
||||||
|
|
||||||
import java.io.IOException;
|
import java.io.IOException;
|
||||||
|
|
||||||
|
|
|
@ -21,7 +21,7 @@ package io.druid.indexing.common.actions;
|
||||||
|
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.coordinator.TaskStorage;
|
import io.druid.indexing.overlord.TaskStorage;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
*/
|
*/
|
||||||
|
|
|
@ -25,9 +25,9 @@ import com.google.inject.Inject;
|
||||||
import com.metamx.emitter.service.ServiceEmitter;
|
import com.metamx.emitter.service.ServiceEmitter;
|
||||||
import io.druid.indexing.common.TaskLock;
|
import io.druid.indexing.common.TaskLock;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.coordinator.IndexerDBCoordinator;
|
import io.druid.indexing.overlord.IndexerDBCoordinator;
|
||||||
import io.druid.indexing.coordinator.TaskLockbox;
|
import io.druid.indexing.overlord.TaskLockbox;
|
||||||
import io.druid.indexing.coordinator.TaskQueue;
|
import io.druid.indexing.overlord.TaskQueue;
|
||||||
import io.druid.timeline.DataSegment;
|
import io.druid.timeline.DataSegment;
|
||||||
|
|
||||||
import java.util.List;
|
import java.util.List;
|
||||||
|
|
|
@ -271,7 +271,7 @@ public class RealtimeIndexTask extends AbstractTask
|
||||||
|
|
||||||
// NOTE: This pusher selects path based purely on global configuration and the DataSegment, which means
|
// NOTE: This pusher selects path based purely on global configuration and the DataSegment, which means
|
||||||
// NOTE: that redundant realtime tasks will upload to the same location. This can cause index.zip and
|
// NOTE: that redundant realtime tasks will upload to the same location. This can cause index.zip and
|
||||||
// NOTE: descriptor.json to mismatch, or it can cause compute nodes to load different instances of the
|
// NOTE: descriptor.json to mismatch, or it can cause historical nodes to load different instances of the
|
||||||
// NOTE: "same" segment.
|
// NOTE: "same" segment.
|
||||||
realtimePlumberSchool.setDataSegmentPusher(toolbox.getSegmentPusher());
|
realtimePlumberSchool.setDataSegmentPusher(toolbox.getSegmentPusher());
|
||||||
realtimePlumberSchool.setConglomerate(toolbox.getQueryRunnerFactoryConglomerate());
|
realtimePlumberSchool.setConglomerate(toolbox.getQueryRunnerFactoryConglomerate());
|
||||||
|
|
|
@ -22,8 +22,8 @@ package io.druid.indexing.common.tasklogs;
|
||||||
import com.google.common.base.Optional;
|
import com.google.common.base.Optional;
|
||||||
import com.google.common.io.InputSupplier;
|
import com.google.common.io.InputSupplier;
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import io.druid.indexing.coordinator.TaskMaster;
|
import io.druid.indexing.overlord.TaskMaster;
|
||||||
import io.druid.indexing.coordinator.TaskRunner;
|
import io.druid.indexing.overlord.TaskRunner;
|
||||||
|
|
||||||
import java.io.IOException;
|
import java.io.IOException;
|
||||||
import java.io.InputStream;
|
import java.io.InputStream;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||||
import com.google.common.base.Function;
|
import com.google.common.base.Function;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||||
import com.google.common.base.CharMatcher;
|
import com.google.common.base.CharMatcher;
|
||||||
|
@ -45,7 +45,7 @@ import io.druid.indexing.common.TaskStatus;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.common.tasklogs.TaskLogPusher;
|
import io.druid.indexing.common.tasklogs.TaskLogPusher;
|
||||||
import io.druid.indexing.common.tasklogs.TaskLogStreamer;
|
import io.druid.indexing.common.tasklogs.TaskLogStreamer;
|
||||||
import io.druid.indexing.coordinator.config.ForkingTaskRunnerConfig;
|
import io.druid.indexing.overlord.config.ForkingTaskRunnerConfig;
|
||||||
import io.druid.server.DruidNode;
|
import io.druid.server.DruidNode;
|
||||||
import org.apache.commons.io.FileUtils;
|
import org.apache.commons.io.FileUtils;
|
||||||
|
|
|
@ -17,13 +17,13 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import io.druid.guice.annotations.Self;
|
import io.druid.guice.annotations.Self;
|
||||||
import io.druid.indexing.common.tasklogs.TaskLogPusher;
|
import io.druid.indexing.common.tasklogs.TaskLogPusher;
|
||||||
import io.druid.indexing.coordinator.config.ForkingTaskRunnerConfig;
|
import io.druid.indexing.overlord.config.ForkingTaskRunnerConfig;
|
||||||
import io.druid.server.DruidNode;
|
import io.druid.server.DruidNode;
|
||||||
|
|
||||||
import java.util.Properties;
|
import java.util.Properties;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.base.Optional;
|
import com.google.common.base.Optional;
|
||||||
import com.google.common.base.Preconditions;
|
import com.google.common.base.Preconditions;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||||
import com.google.common.base.Function;
|
import com.google.common.base.Function;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||||
import com.google.common.base.Charsets;
|
import com.google.common.base.Charsets;
|
||||||
|
@ -46,8 +46,8 @@ import io.druid.curator.cache.PathChildrenCacheFactory;
|
||||||
import io.druid.indexing.common.TaskStatus;
|
import io.druid.indexing.common.TaskStatus;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.common.tasklogs.TaskLogStreamer;
|
import io.druid.indexing.common.tasklogs.TaskLogStreamer;
|
||||||
import io.druid.indexing.coordinator.config.RemoteTaskRunnerConfig;
|
import io.druid.indexing.overlord.config.RemoteTaskRunnerConfig;
|
||||||
import io.druid.indexing.coordinator.setup.WorkerSetupData;
|
import io.druid.indexing.overlord.setup.WorkerSetupData;
|
||||||
import io.druid.indexing.worker.TaskAnnouncement;
|
import io.druid.indexing.worker.TaskAnnouncement;
|
||||||
import io.druid.indexing.worker.Worker;
|
import io.druid.indexing.worker.Worker;
|
||||||
import io.druid.server.initialization.ZkPathsConfig;
|
import io.druid.server.initialization.ZkPathsConfig;
|
||||||
|
@ -86,7 +86,7 @@ import java.util.concurrent.TimeUnit;
|
||||||
* <p/>
|
* <p/>
|
||||||
* The RemoteTaskRunner will assign tasks to a node until the node hits capacity. At that point, task assignment will
|
* The RemoteTaskRunner will assign tasks to a node until the node hits capacity. At that point, task assignment will
|
||||||
* fail. The RemoteTaskRunner depends on another component to create additional worker resources.
|
* fail. The RemoteTaskRunner depends on another component to create additional worker resources.
|
||||||
* For example, {@link io.druid.indexing.coordinator.scaling.ResourceManagementScheduler} can take care of these duties.
|
* For example, {@link io.druid.indexing.overlord.scaling.ResourceManagementScheduler} can take care of these duties.
|
||||||
* <p/>
|
* <p/>
|
||||||
* If a worker node becomes inexplicably disconnected from Zk, the RemoteTaskRunner will fail any tasks associated with the worker.
|
* If a worker node becomes inexplicably disconnected from Zk, the RemoteTaskRunner will fail any tasks associated with the worker.
|
||||||
* <p/>
|
* <p/>
|
|
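The javadoc in the hunk above describes how the renamed RemoteTaskRunner hands out work: a task goes to a worker until that worker hits its advertised capacity, after which assignment fails and an external scaling component (the ResourceManagementScheduler) is expected to add workers. Below is a minimal sketch of that capacity check, using hypothetical `WorkerSlot`/`WorkerSelector` classes rather than Druid's actual types.

```java
// Hedged sketch of capacity-based assignment; WorkerSlot and WorkerSelector are
// illustrative names, not classes from the Druid codebase.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

class WorkerSlot
{
  final String host;        // worker host:port as announced in ZK (assumed field)
  final int capacity;       // maximum concurrent tasks the worker advertises
  final int runningTasks;   // tasks currently assigned to it

  WorkerSlot(String host, int capacity, int runningTasks)
  {
    this.host = host;
    this.capacity = capacity;
    this.runningTasks = runningTasks;
  }

  boolean hasCapacity()
  {
    return runningTasks < capacity;
  }
}

class WorkerSelector
{
  // Pick the least-loaded worker that still has room. An empty result means no
  // worker can take the task and an external scaling component must add capacity.
  Optional<WorkerSlot> pick(List<WorkerSlot> workers)
  {
    return workers.stream()
                  .filter(WorkerSlot::hasCapacity)
                  .min(Comparator.comparingInt(w -> w.runningTasks));
  }
}
```

Least-loaded selection is only one possible policy; the point is that an empty result signals the scaling component, not the runner, to act.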
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||||
import com.google.common.base.Supplier;
|
import com.google.common.base.Supplier;
|
||||||
|
@ -25,8 +25,8 @@ import com.google.inject.Inject;
|
||||||
import com.metamx.http.client.HttpClient;
|
import com.metamx.http.client.HttpClient;
|
||||||
import io.druid.curator.cache.SimplePathChildrenCacheFactory;
|
import io.druid.curator.cache.SimplePathChildrenCacheFactory;
|
||||||
import io.druid.guice.annotations.Global;
|
import io.druid.guice.annotations.Global;
|
||||||
import io.druid.indexing.coordinator.config.RemoteTaskRunnerConfig;
|
import io.druid.indexing.overlord.config.RemoteTaskRunnerConfig;
|
||||||
import io.druid.indexing.coordinator.setup.WorkerSetupData;
|
import io.druid.indexing.overlord.setup.WorkerSetupData;
|
||||||
import io.druid.server.initialization.ZkPathsConfig;
|
import io.druid.server.initialization.ZkPathsConfig;
|
||||||
import org.apache.curator.framework.CuratorFramework;
|
import org.apache.curator.framework.CuratorFramework;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.util.concurrent.SettableFuture;
|
import com.google.common.util.concurrent.SettableFuture;
|
||||||
import io.druid.indexing.common.TaskStatus;
|
import io.druid.indexing.common.TaskStatus;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import org.joda.time.DateTime;
|
import org.joda.time.DateTime;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
public class TaskExistsException extends RuntimeException
|
public class TaskExistsException extends RuntimeException
|
||||||
{
|
{
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.base.Function;
|
import com.google.common.base.Function;
|
||||||
import com.google.common.base.Objects;
|
import com.google.common.base.Objects;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.base.Optional;
|
import com.google.common.base.Optional;
|
||||||
import com.google.common.base.Throwables;
|
import com.google.common.base.Throwables;
|
||||||
|
@ -32,9 +32,9 @@ import io.druid.guice.annotations.Self;
|
||||||
import io.druid.indexing.common.actions.TaskActionClient;
|
import io.druid.indexing.common.actions.TaskActionClient;
|
||||||
import io.druid.indexing.common.actions.TaskActionClientFactory;
|
import io.druid.indexing.common.actions.TaskActionClientFactory;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.coordinator.exec.TaskConsumer;
|
import io.druid.indexing.overlord.exec.TaskConsumer;
|
||||||
import io.druid.indexing.coordinator.scaling.ResourceManagementScheduler;
|
import io.druid.indexing.overlord.scaling.ResourceManagementScheduler;
|
||||||
import io.druid.indexing.coordinator.scaling.ResourceManagementSchedulerFactory;
|
import io.druid.indexing.overlord.scaling.ResourceManagementSchedulerFactory;
|
||||||
import io.druid.server.DruidNode;
|
import io.druid.server.DruidNode;
|
||||||
import io.druid.server.initialization.ZkPathsConfig;
|
import io.druid.server.initialization.ZkPathsConfig;
|
||||||
import org.apache.curator.framework.CuratorFramework;
|
import org.apache.curator.framework.CuratorFramework;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.base.Optional;
|
import com.google.common.base.Optional;
|
||||||
import com.google.common.base.Preconditions;
|
import com.google.common.base.Preconditions;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.util.concurrent.ListenableFuture;
|
import com.google.common.util.concurrent.ListenableFuture;
|
||||||
import io.druid.indexing.common.TaskStatus;
|
import io.druid.indexing.common.TaskStatus;
|
||||||
|
@ -27,7 +27,7 @@ import java.util.Collection;
|
||||||
import java.util.List;
|
import java.util.List;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Interface for handing off tasks. Used by a {@link io.druid.indexing.coordinator.exec.TaskConsumer} to
|
* Interface for handing off tasks. Used by a {@link io.druid.indexing.overlord.exec.TaskConsumer} to
|
||||||
* run tasks that have been locked.
|
* run tasks that have been locked.
|
||||||
*/
|
*/
|
||||||
public interface TaskRunner
|
public interface TaskRunner
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
public interface TaskRunnerFactory
|
public interface TaskRunnerFactory
|
||||||
{
|
{
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
import com.google.common.collect.ComparisonChain;
|
import com.google.common.collect.ComparisonChain;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.base.Optional;
|
import com.google.common.base.Optional;
|
||||||
import io.druid.indexing.common.TaskLock;
|
import io.druid.indexing.common.TaskLock;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.base.Function;
|
import com.google.common.base.Function;
|
||||||
import com.google.common.base.Optional;
|
import com.google.common.base.Optional;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.base.Function;
|
import com.google.common.base.Function;
|
||||||
import com.google.common.base.Throwables;
|
import com.google.common.base.Throwables;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.config;
|
package io.druid.indexing.overlord.config;
|
||||||
|
|
||||||
import org.skife.config.Config;
|
import org.skife.config.Config;
|
||||||
import org.skife.config.Default;
|
import org.skife.config.Default;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.config;
|
package io.druid.indexing.overlord.config;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
import com.google.common.collect.Lists;
|
import com.google.common.collect.Lists;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.config;
|
package io.druid.indexing.overlord.config;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
import io.druid.db.DbConnectorConfig;
|
import io.druid.db.DbConnectorConfig;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.config;
|
package io.druid.indexing.overlord.config;
|
||||||
|
|
||||||
import com.google.common.base.Splitter;
|
import com.google.common.base.Splitter;
|
||||||
import com.google.common.collect.ImmutableSet;
|
import com.google.common.collect.ImmutableSet;
|
||||||
|
@ -29,7 +29,7 @@ import java.util.Set;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
*/
|
*/
|
||||||
public abstract class IndexerCoordinatorConfig extends ZkPathsConfig
|
public abstract class OverlordConfig extends ZkPathsConfig
|
||||||
{
|
{
|
||||||
private volatile Set<String> whitelistDatasources = null;
|
private volatile Set<String> whitelistDatasources = null;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.config;
|
package io.druid.indexing.overlord.config;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
import org.joda.time.Period;
|
import org.joda.time.Period;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.exec;
|
package io.druid.indexing.overlord.exec;
|
||||||
|
|
||||||
import com.google.common.base.Throwables;
|
import com.google.common.base.Throwables;
|
||||||
import com.google.common.util.concurrent.FutureCallback;
|
import com.google.common.util.concurrent.FutureCallback;
|
||||||
|
@ -31,8 +31,8 @@ import com.metamx.emitter.service.ServiceMetricEvent;
|
||||||
import io.druid.indexing.common.TaskStatus;
|
import io.druid.indexing.common.TaskStatus;
|
||||||
import io.druid.indexing.common.actions.TaskActionClientFactory;
|
import io.druid.indexing.common.actions.TaskActionClientFactory;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.coordinator.TaskQueue;
|
import io.druid.indexing.overlord.TaskQueue;
|
||||||
import io.druid.indexing.coordinator.TaskRunner;
|
import io.druid.indexing.overlord.TaskRunner;
|
||||||
|
|
||||||
public class TaskConsumer implements Runnable
|
public class TaskConsumer implements Runnable
|
||||||
{
|
{
|
|
@ -17,13 +17,13 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.http;
|
package io.druid.indexing.overlord.http;
|
||||||
|
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import io.druid.common.config.JacksonConfigManager;
|
import io.druid.common.config.JacksonConfigManager;
|
||||||
import io.druid.indexing.common.tasklogs.TaskLogStreamer;
|
import io.druid.indexing.common.tasklogs.TaskLogStreamer;
|
||||||
import io.druid.indexing.coordinator.TaskMaster;
|
import io.druid.indexing.overlord.TaskMaster;
|
||||||
import io.druid.indexing.coordinator.TaskStorageQueryAdapter;
|
import io.druid.indexing.overlord.TaskStorageQueryAdapter;
|
||||||
|
|
||||||
import javax.ws.rs.Path;
|
import javax.ws.rs.Path;
|
||||||
|
|
||||||
|
@ -31,10 +31,10 @@ import javax.ws.rs.Path;
|
||||||
*/
|
*/
|
||||||
@Deprecated
|
@Deprecated
|
||||||
@Path("/mmx/merger/v1")
|
@Path("/mmx/merger/v1")
|
||||||
public class OldIndexerCoordinatorResource extends IndexerCoordinatorResource
|
public class OldOverlordResource extends OverlordResource
|
||||||
{
|
{
|
||||||
@Inject
|
@Inject
|
||||||
public OldIndexerCoordinatorResource(
|
public OldOverlordResource(
|
||||||
TaskMaster taskMaster,
|
TaskMaster taskMaster,
|
||||||
TaskStorageQueryAdapter taskStorageQueryAdapter,
|
TaskStorageQueryAdapter taskStorageQueryAdapter,
|
||||||
TaskLogStreamer taskLogStreamer,
|
TaskLogStreamer taskLogStreamer,
|
|
@ -17,11 +17,11 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.http;
|
package io.druid.indexing.overlord.http;
|
||||||
|
|
||||||
import com.google.common.base.Throwables;
|
import com.google.common.base.Throwables;
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import io.druid.indexing.coordinator.TaskMaster;
|
import io.druid.indexing.overlord.TaskMaster;
|
||||||
import io.druid.server.http.RedirectInfo;
|
import io.druid.server.http.RedirectInfo;
|
||||||
|
|
||||||
import java.net.URL;
|
import java.net.URL;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.http;
|
package io.druid.indexing.overlord.http;
|
||||||
|
|
||||||
import com.google.common.base.Function;
|
import com.google.common.base.Function;
|
||||||
import com.google.common.base.Optional;
|
import com.google.common.base.Optional;
|
||||||
|
@ -32,13 +32,13 @@ import io.druid.indexing.common.actions.TaskActionClient;
|
||||||
import io.druid.indexing.common.actions.TaskActionHolder;
|
import io.druid.indexing.common.actions.TaskActionHolder;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.common.tasklogs.TaskLogStreamer;
|
import io.druid.indexing.common.tasklogs.TaskLogStreamer;
|
||||||
import io.druid.indexing.coordinator.TaskMaster;
|
import io.druid.indexing.overlord.TaskMaster;
|
||||||
import io.druid.indexing.coordinator.TaskQueue;
|
import io.druid.indexing.overlord.TaskQueue;
|
||||||
import io.druid.indexing.coordinator.TaskRunner;
|
import io.druid.indexing.overlord.TaskRunner;
|
||||||
import io.druid.indexing.coordinator.TaskRunnerWorkItem;
|
import io.druid.indexing.overlord.TaskRunnerWorkItem;
|
||||||
import io.druid.indexing.coordinator.TaskStorageQueryAdapter;
|
import io.druid.indexing.overlord.TaskStorageQueryAdapter;
|
||||||
import io.druid.indexing.coordinator.scaling.ResourceManagementScheduler;
|
import io.druid.indexing.overlord.scaling.ResourceManagementScheduler;
|
||||||
import io.druid.indexing.coordinator.setup.WorkerSetupData;
|
import io.druid.indexing.overlord.setup.WorkerSetupData;
|
||||||
import io.druid.timeline.DataSegment;
|
import io.druid.timeline.DataSegment;
|
||||||
|
|
||||||
import javax.ws.rs.Consumes;
|
import javax.ws.rs.Consumes;
|
||||||
|
@ -59,9 +59,9 @@ import java.util.concurrent.atomic.AtomicReference;
|
||||||
/**
|
/**
|
||||||
*/
|
*/
|
||||||
@Path("/druid/indexer/v1")
|
@Path("/druid/indexer/v1")
|
||||||
public class IndexerCoordinatorResource
|
public class OverlordResource
|
||||||
{
|
{
|
||||||
private static final Logger log = new Logger(IndexerCoordinatorResource.class);
|
private static final Logger log = new Logger(OverlordResource.class);
|
||||||
|
|
||||||
private static Function<TaskRunnerWorkItem, Map<String, Object>> simplifyTaskFn =
|
private static Function<TaskRunnerWorkItem, Map<String, Object>> simplifyTaskFn =
|
||||||
new Function<TaskRunnerWorkItem, Map<String, Object>>()
|
new Function<TaskRunnerWorkItem, Map<String, Object>>()
|
||||||
|
@ -92,7 +92,7 @@ public class IndexerCoordinatorResource
|
||||||
private AtomicReference<WorkerSetupData> workerSetupDataRef = null;
|
private AtomicReference<WorkerSetupData> workerSetupDataRef = null;
|
||||||
|
|
||||||
@Inject
|
@Inject
|
||||||
public IndexerCoordinatorResource(
|
public OverlordResource(
|
||||||
TaskMaster taskMaster,
|
TaskMaster taskMaster,
|
||||||
TaskStorageQueryAdapter taskStorageQueryAdapter,
|
TaskStorageQueryAdapter taskStorageQueryAdapter,
|
||||||
TaskLogStreamer taskLogStreamer,
|
TaskLogStreamer taskLogStreamer,
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import java.util.List;
|
import java.util.List;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.amazonaws.services.ec2.AmazonEC2;
|
import com.amazonaws.services.ec2.AmazonEC2;
|
||||||
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
|
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
|
||||||
|
@ -35,9 +35,9 @@ import com.google.common.collect.Lists;
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import com.metamx.emitter.EmittingLogger;
|
import com.metamx.emitter.EmittingLogger;
|
||||||
import io.druid.guice.annotations.Json;
|
import io.druid.guice.annotations.Json;
|
||||||
import io.druid.indexing.coordinator.setup.EC2NodeData;
|
import io.druid.indexing.overlord.setup.EC2NodeData;
|
||||||
import io.druid.indexing.coordinator.setup.GalaxyUserData;
|
import io.druid.indexing.overlord.setup.GalaxyUserData;
|
||||||
import io.druid.indexing.coordinator.setup.WorkerSetupData;
|
import io.druid.indexing.overlord.setup.WorkerSetupData;
|
||||||
import org.apache.commons.codec.binary.Base64;
|
import org.apache.commons.codec.binary.Base64;
|
||||||
|
|
||||||
import javax.annotation.Nullable;
|
import javax.annotation.Nullable;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.metamx.emitter.EmittingLogger;
|
import com.metamx.emitter.EmittingLogger;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.metamx.common.logger.Logger;
|
import com.metamx.common.logger.Logger;
|
||||||
|
|
|
@ -17,15 +17,15 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.metamx.common.concurrent.ScheduledExecutors;
|
import com.metamx.common.concurrent.ScheduledExecutors;
|
||||||
import com.metamx.common.lifecycle.LifecycleStart;
|
import com.metamx.common.lifecycle.LifecycleStart;
|
||||||
import com.metamx.common.lifecycle.LifecycleStop;
|
import com.metamx.common.lifecycle.LifecycleStop;
|
||||||
import com.metamx.common.logger.Logger;
|
import com.metamx.common.logger.Logger;
|
||||||
import io.druid.granularity.PeriodGranularity;
|
import io.druid.granularity.PeriodGranularity;
|
||||||
import io.druid.indexing.coordinator.RemoteTaskRunner;
|
import io.druid.indexing.overlord.RemoteTaskRunner;
|
||||||
import io.druid.indexing.coordinator.TaskRunner;
|
import io.druid.indexing.overlord.TaskRunner;
|
||||||
import org.joda.time.DateTime;
|
import org.joda.time.DateTime;
|
||||||
import org.joda.time.Duration;
|
import org.joda.time.Duration;
|
||||||
import org.joda.time.Period;
|
import org.joda.time.Period;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
import org.joda.time.DateTime;
|
import org.joda.time.DateTime;
|
|
@ -17,9 +17,9 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import io.druid.indexing.coordinator.RemoteTaskRunner;
|
import io.druid.indexing.overlord.RemoteTaskRunner;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
*/
|
*/
|
|
@ -17,11 +17,11 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import com.metamx.common.concurrent.ScheduledExecutorFactory;
|
import com.metamx.common.concurrent.ScheduledExecutorFactory;
|
||||||
import io.druid.indexing.coordinator.RemoteTaskRunner;
|
import io.druid.indexing.overlord.RemoteTaskRunner;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
*/
|
*/
|
|
@ -17,10 +17,10 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import io.druid.indexing.coordinator.RemoteTaskRunnerWorkItem;
|
import io.druid.indexing.overlord.RemoteTaskRunnerWorkItem;
|
||||||
import io.druid.indexing.coordinator.ZkWorker;
|
import io.druid.indexing.overlord.ZkWorker;
|
||||||
|
|
||||||
import java.util.Collection;
|
import java.util.Collection;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
import com.fasterxml.jackson.annotation.JsonValue;
|
import com.fasterxml.jackson.annotation.JsonValue;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||||
import org.joda.time.Period;
|
import org.joda.time.Period;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.google.common.base.Function;
|
import com.google.common.base.Function;
|
||||||
import com.google.common.base.Predicate;
|
import com.google.common.base.Predicate;
|
||||||
|
@ -28,10 +28,10 @@ import com.google.common.collect.Sets;
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import com.metamx.common.guava.FunctionalIterable;
|
import com.metamx.common.guava.FunctionalIterable;
|
||||||
import com.metamx.emitter.EmittingLogger;
|
import com.metamx.emitter.EmittingLogger;
|
||||||
import io.druid.indexing.coordinator.RemoteTaskRunnerWorkItem;
|
import io.druid.indexing.overlord.RemoteTaskRunnerWorkItem;
|
||||||
import io.druid.indexing.coordinator.TaskRunnerWorkItem;
|
import io.druid.indexing.overlord.TaskRunnerWorkItem;
|
||||||
import io.druid.indexing.coordinator.ZkWorker;
|
import io.druid.indexing.overlord.ZkWorker;
|
||||||
import io.druid.indexing.coordinator.setup.WorkerSetupData;
|
import io.druid.indexing.overlord.setup.WorkerSetupData;
|
||||||
import org.joda.time.DateTime;
|
import org.joda.time.DateTime;
|
||||||
import org.joda.time.Duration;
|
import org.joda.time.Duration;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.setup;
|
package io.druid.indexing.overlord.setup;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonCreator;
|
import com.fasterxml.jackson.annotation.JsonCreator;
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.setup;
|
package io.druid.indexing.overlord.setup;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonCreator;
|
import com.fasterxml.jackson.annotation.JsonCreator;
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.setup;
|
package io.druid.indexing.overlord.setup;
|
||||||
|
|
||||||
import com.fasterxml.jackson.annotation.JsonCreator;
|
import com.fasterxml.jackson.annotation.JsonCreator;
|
||||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
import com.fasterxml.jackson.annotation.JsonProperty;
|
|
@ -30,7 +30,7 @@ import com.metamx.common.lifecycle.LifecycleStart;
|
||||||
import com.metamx.common.lifecycle.LifecycleStop;
|
import com.metamx.common.lifecycle.LifecycleStop;
|
||||||
import com.metamx.common.logger.Logger;
|
import com.metamx.common.logger.Logger;
|
||||||
import io.druid.curator.announcement.Announcer;
|
import io.druid.curator.announcement.Announcer;
|
||||||
import io.druid.indexing.coordinator.config.RemoteTaskRunnerConfig;
|
import io.druid.indexing.overlord.config.RemoteTaskRunnerConfig;
|
||||||
import io.druid.server.initialization.ZkPathsConfig;
|
import io.druid.server.initialization.ZkPathsConfig;
|
||||||
import org.apache.curator.framework.CuratorFramework;
|
import org.apache.curator.framework.CuratorFramework;
|
||||||
import org.apache.zookeeper.CreateMode;
|
import org.apache.zookeeper.CreateMode;
|
||||||
|
|
|
@ -27,7 +27,7 @@ import com.metamx.emitter.EmittingLogger;
|
||||||
import io.druid.concurrent.Execs;
|
import io.druid.concurrent.Execs;
|
||||||
import io.druid.indexing.common.TaskStatus;
|
import io.druid.indexing.common.TaskStatus;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.coordinator.TaskRunner;
|
import io.druid.indexing.overlord.TaskRunner;
|
||||||
import io.druid.indexing.worker.config.WorkerConfig;
|
import io.druid.indexing.worker.config.WorkerConfig;
|
||||||
import org.apache.curator.framework.CuratorFramework;
|
import org.apache.curator.framework.CuratorFramework;
|
||||||
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
|
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
|
||||||
|
|
|
@ -31,7 +31,7 @@ import com.metamx.emitter.EmittingLogger;
|
||||||
import io.druid.concurrent.Execs;
|
import io.druid.concurrent.Execs;
|
||||||
import io.druid.indexing.common.TaskStatus;
|
import io.druid.indexing.common.TaskStatus;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.coordinator.TaskRunner;
|
import io.druid.indexing.overlord.TaskRunner;
|
||||||
|
|
||||||
import java.io.File;
|
import java.io.File;
|
||||||
import java.io.IOException;
|
import java.io.IOException;
|
||||||
|
|
|
@ -20,7 +20,7 @@
|
||||||
package io.druid.indexing.worker.executor;
|
package io.druid.indexing.worker.executor;
|
||||||
|
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||||
import io.druid.indexing.coordinator.TaskRunner;
|
import io.druid.indexing.overlord.TaskRunner;
|
||||||
|
|
||||||
import java.io.File;
|
import java.io.File;
|
||||||
|
|
||||||
|
|
|
@ -24,7 +24,7 @@ import com.google.common.collect.ImmutableMap;
|
||||||
import com.google.common.io.InputSupplier;
|
import com.google.common.io.InputSupplier;
|
||||||
import com.google.inject.Inject;
|
import com.google.inject.Inject;
|
||||||
import com.metamx.common.logger.Logger;
|
import com.metamx.common.logger.Logger;
|
||||||
import io.druid.indexing.coordinator.ForkingTaskRunner;
|
import io.druid.indexing.overlord.ForkingTaskRunner;
|
||||||
|
|
||||||
import javax.ws.rs.DefaultValue;
|
import javax.ws.rs.DefaultValue;
|
||||||
import javax.ws.rs.GET;
|
import javax.ws.rs.GET;
|
||||||
|
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.collect.ImmutableList;
|
import com.google.common.collect.ImmutableList;
|
||||||
import com.google.common.collect.ImmutableSet;
|
import com.google.common.collect.ImmutableSet;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||||
import com.google.api.client.repackaged.com.google.common.base.Throwables;
|
import com.google.api.client.repackaged.com.google.common.base.Throwables;
|
||||||
|
@ -39,7 +39,7 @@ import io.druid.indexing.common.TestRealtimeTask;
|
||||||
import io.druid.indexing.common.TestUtils;
|
import io.druid.indexing.common.TestUtils;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.common.task.TaskResource;
|
import io.druid.indexing.common.task.TaskResource;
|
||||||
import io.druid.indexing.coordinator.setup.WorkerSetupData;
|
import io.druid.indexing.overlord.setup.WorkerSetupData;
|
||||||
import io.druid.indexing.worker.TaskAnnouncement;
|
import io.druid.indexing.worker.TaskAnnouncement;
|
||||||
import io.druid.indexing.worker.Worker;
|
import io.druid.indexing.worker.Worker;
|
||||||
import io.druid.jackson.DefaultObjectMapper;
|
import io.druid.jackson.DefaultObjectMapper;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.base.Optional;
|
import com.google.common.base.Optional;
|
||||||
import com.google.common.base.Throwables;
|
import com.google.common.base.Throwables;
|
||||||
|
@ -58,7 +58,7 @@ import io.druid.indexing.common.task.IndexTask;
|
||||||
import io.druid.indexing.common.task.KillTask;
|
import io.druid.indexing.common.task.KillTask;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.common.task.TaskResource;
|
import io.druid.indexing.common.task.TaskResource;
|
||||||
import io.druid.indexing.coordinator.exec.TaskConsumer;
|
import io.druid.indexing.overlord.exec.TaskConsumer;
|
||||||
import io.druid.jackson.DefaultObjectMapper;
|
import io.druid.jackson.DefaultObjectMapper;
|
||||||
import io.druid.query.aggregation.AggregatorFactory;
|
import io.druid.query.aggregation.AggregatorFactory;
|
||||||
import io.druid.query.aggregation.DoubleSumAggregatorFactory;
|
import io.druid.query.aggregation.DoubleSumAggregatorFactory;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import com.google.common.collect.ImmutableList;
|
import com.google.common.collect.ImmutableList;
|
||||||
import com.google.common.collect.ImmutableMap;
|
import com.google.common.collect.ImmutableMap;
|
|
@ -17,9 +17,9 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator;
|
package io.druid.indexing.overlord;
|
||||||
|
|
||||||
import io.druid.indexing.coordinator.config.RemoteTaskRunnerConfig;
|
import io.druid.indexing.overlord.config.RemoteTaskRunnerConfig;
|
||||||
import org.joda.time.Period;
|
import org.joda.time.Period;
|
||||||
|
|
||||||
/**
|
/**
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.amazonaws.services.ec2.AmazonEC2Client;
|
import com.amazonaws.services.ec2.AmazonEC2Client;
|
||||||
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
|
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
|
||||||
|
@ -29,9 +29,9 @@ import com.amazonaws.services.ec2.model.RunInstancesResult;
|
||||||
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;
|
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;
|
||||||
import com.google.common.collect.Lists;
|
import com.google.common.collect.Lists;
|
||||||
import io.druid.common.guava.DSuppliers;
|
import io.druid.common.guava.DSuppliers;
|
||||||
import io.druid.indexing.coordinator.setup.EC2NodeData;
|
import io.druid.indexing.overlord.setup.EC2NodeData;
|
||||||
import io.druid.indexing.coordinator.setup.GalaxyUserData;
|
import io.druid.indexing.overlord.setup.GalaxyUserData;
|
||||||
import io.druid.indexing.coordinator.setup.WorkerSetupData;
|
import io.druid.indexing.overlord.setup.WorkerSetupData;
|
||||||
import io.druid.jackson.DefaultObjectMapper;
|
import io.druid.jackson.DefaultObjectMapper;
|
||||||
import org.easymock.EasyMock;
|
import org.easymock.EasyMock;
|
||||||
import org.junit.After;
|
import org.junit.After;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import com.google.common.collect.ImmutableMap;
|
import com.google.common.collect.ImmutableMap;
|
||||||
import com.google.common.collect.Lists;
|
import com.google.common.collect.Lists;
|
||||||
|
@ -29,9 +29,9 @@ import io.druid.common.guava.DSuppliers;
|
||||||
import io.druid.indexing.common.TestMergeTask;
|
import io.druid.indexing.common.TestMergeTask;
|
||||||
import io.druid.indexing.common.TaskStatus;
|
import io.druid.indexing.common.TaskStatus;
|
||||||
import io.druid.indexing.common.task.Task;
|
import io.druid.indexing.common.task.Task;
|
||||||
import io.druid.indexing.coordinator.RemoteTaskRunnerWorkItem;
|
import io.druid.indexing.overlord.RemoteTaskRunnerWorkItem;
|
||||||
import io.druid.indexing.coordinator.ZkWorker;
|
import io.druid.indexing.overlord.ZkWorker;
|
||||||
import io.druid.indexing.coordinator.setup.WorkerSetupData;
|
import io.druid.indexing.overlord.setup.WorkerSetupData;
|
||||||
import io.druid.indexing.worker.TaskAnnouncement;
|
import io.druid.indexing.worker.TaskAnnouncement;
|
||||||
import io.druid.indexing.worker.Worker;
|
import io.druid.indexing.worker.Worker;
|
||||||
import io.druid.jackson.DefaultObjectMapper;
|
import io.druid.jackson.DefaultObjectMapper;
|
|
@ -17,7 +17,7 @@
|
||||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
package io.druid.indexing.coordinator.scaling;
|
package io.druid.indexing.overlord.scaling;
|
||||||
|
|
||||||
import java.util.List;
|
import java.util.List;
|
||||||
|
|
|
@ -34,8 +34,8 @@ import io.druid.indexing.common.TestMergeTask;
|
||||||
import io.druid.indexing.common.TestRealtimeTask;
|
import io.druid.indexing.common.TestRealtimeTask;
|
||||||
import io.druid.indexing.common.TestUtils;
|
import io.druid.indexing.common.TestUtils;
|
||||||
import io.druid.indexing.common.config.TaskConfig;
|
import io.druid.indexing.common.config.TaskConfig;
|
||||||
import io.druid.indexing.coordinator.TestRemoteTaskRunnerConfig;
|
import io.druid.indexing.overlord.TestRemoteTaskRunnerConfig;
|
||||||
import io.druid.indexing.coordinator.ThreadPoolTaskRunner;
|
import io.druid.indexing.overlord.ThreadPoolTaskRunner;
|
||||||
import io.druid.indexing.worker.config.WorkerConfig;
|
import io.druid.indexing.worker.config.WorkerConfig;
|
||||||
import io.druid.jackson.DefaultObjectMapper;
|
import io.druid.jackson.DefaultObjectMapper;
|
||||||
import io.druid.segment.loading.DataSegmentPuller;
|
import io.druid.segment.loading.DataSegmentPuller;
|
||||||
|
|
pom.xml
|
@ -356,7 +356,7 @@
|
||||||
</dependency>
|
</dependency>
|
||||||
<dependency>
|
<dependency>
|
||||||
<groupId>org.antlr</groupId>
|
<groupId>org.antlr</groupId>
|
||||||
<artifactId>antlr4-master</artifactId>
|
<artifactId>antlr4-master</artifactId>
|
||||||
<version>4.0</version>
|
<version>4.0</version>
|
||||||
</dependency>
|
</dependency>
|
||||||
<dependency>
|
<dependency>
|
||||||
|
|
|
@ -32,12 +32,12 @@ import java.util.Set;
|
||||||
public class RegexDimExtractionFnTest
|
public class RegexDimExtractionFnTest
|
||||||
{
|
{
|
||||||
private static final String[] paths = {
|
private static final String[] paths = {
|
||||||
"/druid/prod/compute",
|
"/druid/prod/historical",
|
||||||
"/druid/prod/bard",
|
"/druid/prod/broker",
|
||||||
"/druid/prod/master",
|
"/druid/prod/coordinator",
|
||||||
"/druid/demo-east/compute",
|
"/druid/demo/historical",
|
||||||
"/druid/demo-east/bard",
|
"/druid/demo/broker",
|
||||||
"/druid/demo-east/master",
|
"/druid/demo/coordinator",
|
||||||
"/dash/aloe",
|
"/dash/aloe",
|
||||||
"/dash/baloo"
|
"/dash/baloo"
|
||||||
};
|
};
|
||||||
|
@ -80,7 +80,7 @@ public class RegexDimExtractionFnTest
|
||||||
|
|
||||||
Assert.assertEquals(4, extracted.size());
|
Assert.assertEquals(4, extracted.size());
|
||||||
Assert.assertTrue(extracted.contains("druid/prod"));
|
Assert.assertTrue(extracted.contains("druid/prod"));
|
||||||
Assert.assertTrue(extracted.contains("druid/demo-east"));
|
Assert.assertTrue(extracted.contains("druid/demo"));
|
||||||
Assert.assertTrue(extracted.contains("dash/aloe"));
|
Assert.assertTrue(extracted.contains("dash/aloe"));
|
||||||
Assert.assertTrue(extracted.contains("dash/baloo"));
|
Assert.assertTrue(extracted.contains("dash/baloo"));
|
||||||
}
|
}
|
||||||
|
|
|
@ -186,7 +186,7 @@ Figure~\ref{fig:druid_cluster}.
|
||||||
Recall that the Druid data model has the notion of historical and real-time segments. The Druid cluster is architected to reflect this
|
Recall that the Druid data model has the notion of historical and real-time segments. The Druid cluster is architected to reflect this
|
||||||
conceptual separation of data. Real-time nodes are responsible for
|
conceptual separation of data. Real-time nodes are responsible for
|
||||||
ingesting, storing, and responding to queries for the most recent
|
ingesting, storing, and responding to queries for the most recent
|
||||||
events. Similarly, historical compute nodes are responsible for
|
events. Similarly, historical nodes are responsible for
|
||||||
loading and responding to queries for historical events.
|
loading and responding to queries for historical events.
|
||||||
|
|
||||||
Data in Druid is stored on storage nodes. Storage nodes can be either
|
Data in Druid is stored on storage nodes. Storage nodes can be either
|
||||||
|
@ -195,7 +195,7 @@ typically first hit a layer of broker nodes. Broker nodes are
|
||||||
responsible for finding and routing queries down to the storage nodes
|
responsible for finding and routing queries down to the storage nodes
|
||||||
that host the pertinent data. The storage nodes compute their portion
|
that host the pertinent data. The storage nodes compute their portion
|
||||||
of the query response in parallel and return their results to the
|
of the query response in parallel and return their results to the
|
||||||
brokers. Broker nodes, compute nodes, and realtime nodes are jointly
|
brokers. Broker nodes, historical nodes, and realtime nodes are jointly
|
||||||
classified as queryable nodes.
|
classified as queryable nodes.
|
||||||
|
|
||||||
Druid also has a set of coordination nodes to manage load assignment,
|
Druid also has a set of coordination nodes to manage load assignment,
|
||||||
|
@ -207,51 +207,51 @@ Druid relies on Apache Zookeeper \cite{hunt2010zookeeper}
|
||||||
for coordination. Most intra-cluster communication is over Zookeeper, although
|
for coordination. Most intra-cluster communication is over Zookeeper, although
|
||||||
queries are typically forwarded over HTTP.
|
queries are typically forwarded over HTTP.
|
||||||
|
|
||||||
\subsection{Historical Compute Nodes}
|
\subsection{Historical Nodes}
|
||||||
Historical compute nodes are the main workers of a Druid cluster and
|
Historical nodes are the main workers of a Druid cluster and
|
||||||
are self-contained and self-sufficient. Compute nodes load historical
|
are self-contained and self-sufficient. Historical nodes load historical
|
||||||
segments from permanent/deep storage and expose them for
|
segments from permanent/deep storage and expose them for
|
||||||
querying. There is no single point of contention between the nodes and
|
querying. There is no single point of contention between the nodes and
|
||||||
nodes have no knowledge of one another. Compute nodes are
|
nodes have no knowledge of one another. Historical nodes are
|
||||||
operationally simple; they only know how to perform the tasks they are
|
operationally simple; they only know how to perform the tasks they are
|
||||||
assigned. To help other services discover compute nodes and the data
|
assigned. To help other services discover historical nodes and the data
|
||||||
they hold, every compute node maintains a constant Zookeeper
|
they hold, every historical node maintains a constant Zookeeper
|
||||||
connection. Compute nodes announce their online state and the segments
|
connection. Historical nodes announce their online state and the segments
|
||||||
they serve by creating ephemeral nodes under specifically configured
|
they serve by creating ephemeral nodes under specifically configured
|
||||||
Zookeeper paths. Instructions for a given compute node to load new
|
Zookeeper paths. Instructions for a given historical node to load new
|
||||||
segments or drop existing segments are sent by creating ephemeral
|
segments or drop existing segments are sent by creating ephemeral
|
||||||
znodes under a special “load queue” path associated with the compute
|
znodes under a special “load queue” path associated with the historical
|
||||||
node. Figure~\ref{fig:zookeeper} illustrates a simple compute node and Zookeeper interaction.
|
node. Figure~\ref{fig:zookeeper} illustrates a simple historical node and Zookeeper interaction.
|
||||||
Each compute node announces themselves under an "announcements" path when they come online
|
Each historical node announces itself under an "announcements" path when it comes online
|
||||||
and each compute has a load queue path associated with it.
|
and each historical node has a load queue path associated with it.
|
||||||
|
|
||||||
\begin{figure}
|
\begin{figure}
|
||||||
\centering
|
\centering
|
||||||
\includegraphics[width = 2.8in]{zookeeper}
|
\includegraphics[width = 2.8in]{zookeeper}
|
||||||
\caption{Compute nodes create ephemeral znodes under specifically configured Zookeeper paths.}
|
\caption{Historical nodes create ephemeral znodes under specifically configured Zookeeper paths.}
|
||||||
\label{fig:zookeeper}
|
\label{fig:zookeeper}
|
||||||
\end{figure}
|
\end{figure}
|
||||||
|
|
||||||
To expose a segment for querying, a compute node must first possess a
|
To expose a segment for querying, a historical node must first possess a
|
||||||
local copy of the segment. Before a compute node downloads a segment
|
local copy of the segment. Before a historical node downloads a segment
|
||||||
from deep storage, it first checks a local disk directory (cache) to
|
from deep storage, it first checks a local disk directory (cache) to
|
||||||
see if the segment already exists in local storage. If no cache
|
see if the segment already exists in local storage. If no cache
|
||||||
information about the segment is present, the compute node will
|
information about the segment is present, the historical node will
|
||||||
download metadata about the segment from Zookeeper. This metadata
|
download metadata about the segment from Zookeeper. This metadata
|
||||||
includes information about where the segment is located in deep
|
includes information about where the segment is located in deep
|
||||||
storage and about how to decompress and process the segment. Once a
|
storage and about how to decompress and process the segment. Once a
|
||||||
compute node completes processing a segment, the node announces (in
|
historical node completes processing a segment, the node announces (in
|
||||||
Zookeeper) that it is serving the segment. At this point, the segment
|
Zookeeper) that it is serving the segment. At this point, the segment
|
||||||
is queryable.
|
is queryable.
|
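The paragraph above describes segment announcement as the creation of ephemeral znodes under configured Zookeeper paths. A minimal sketch of that pattern with Apache Curator follows; the connect string, announcement path, and payload are illustrative assumptions, not the exact znode layout Druid uses.

```java
// Hedged sketch: announcing a served segment as an ephemeral znode with Apache Curator.
// The connect string, paths, and payload are illustrative assumptions, not the exact
// znode layout Druid uses.
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

import java.nio.charset.StandardCharsets;

public class SegmentAnnouncer
{
  public static void main(String[] args) throws Exception
  {
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zk1.example.com:2181",                      // assumed ZooKeeper ensemble
        new ExponentialBackoffRetry(1000, 3)
    );
    client.start();

    String path = "/druid/announcements/historical1:8083/wikipedia_2013-01-01";  // assumed path
    byte[] payload = "{\"state\":\"SERVING\"}".getBytes(StandardCharsets.UTF_8); // assumed payload

    // EPHEMERAL: the announcement disappears automatically if the node loses its ZK session,
    // which is how the rest of the cluster learns the segment is no longer served.
    client.create()
          .creatingParentsIfNeeded()
          .withMode(CreateMode.EPHEMERAL)
          .forPath(path, payload);

    // In a real node the process stays up and keeps the session (and the znode) alive.
  }
}
```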
||||||
|
|
||||||
\subsubsection{Tiers}
|
\subsubsection{Tiers}
|
||||||
\label{sec:tiers}
|
\label{sec:tiers}
|
||||||
Compute nodes can be grouped in different tiers, where all nodes in a
|
Historical nodes can be grouped in different tiers, where all nodes in a
|
||||||
given tier are identically configured. Different performance and
|
given tier are identically configured. Different performance and
|
||||||
fault-tolerance parameters can be set for each tier. The purpose of
|
fault-tolerance parameters can be set for each tier. The purpose of
|
||||||
tiered nodes is to enable higher or lower priority segments to be
|
tiered nodes is to enable higher or lower priority segments to be
|
||||||
distributed according to their importance. For example, it is possible
|
distributed according to their importance. For example, it is possible
|
||||||
to spin up a “hot” tier of compute nodes that have a high number of
|
to spin up a “hot” tier of historical nodes that have a high number of
|
||||||
cores and a large RAM capacity. The “hot” cluster can be configured to
|
cores and a large RAM capacity. The “hot” cluster can be configured to
|
||||||
download more frequently accessed segments. A parallel “cold” cluster
|
download more frequently accessed segments. A parallel “cold” cluster
|
||||||
can also be created with much less powerful backing hardware. The
|
can also be created with much less powerful backing hardware. The
|
||||||
|
@ -290,12 +290,12 @@ all indexes on a node to be queried. On a periodic basis, the nodes will
|
||||||
schedule a background task that searches for all persisted indexes of
|
schedule a background task that searches for all persisted indexes of
|
||||||
a data source. The task merges these indexes together and builds a
|
a data source. The task merges these indexes together and builds a
|
||||||
historical segment. The nodes will upload the segment to deep storage
|
historical segment. The nodes will upload the segment to deep storage
|
||||||
and provide a signal for the historical compute nodes to begin serving
|
and provide a signal for the historical nodes to begin serving
|
||||||
the segment. The ingest, persist, merge, and handoff steps are fluid;
|
the segment. The ingest, persist, merge, and handoff steps are fluid;
|
||||||
there is no data loss as a real-time node converts a real-time segment
|
there is no data loss as a real-time node converts a real-time segment
|
||||||
to a historical one. Figure~\ref{fig:data-durability} illustrates the process.
|
to a historical one. Figure~\ref{fig:data-durability} illustrates the process.
|
||||||
|
|
||||||
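A minimal sketch of that periodic merge-and-handoff task, assuming hypothetical IndexStore, SegmentBuilder, DeepStorage, and HandoffSignaler interfaces (Druid's real-time plumbing uses different classes):

```java
// Illustrative sketch of the periodic merge-and-handoff task described above.
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MergeAndHandoffTask implements Runnable {

  interface RealtimeIndex { }

  interface IndexStore {
    List<RealtimeIndex> findPersistedIndexes(String dataSource);
  }

  interface SegmentBuilder {
    byte[] merge(List<RealtimeIndex> indexes);      // merge persisted indexes into one segment
  }

  interface DeepStorage {
    String upload(String dataSource, byte[] segment);
  }

  interface HandoffSignaler {
    void signalHistoricalNodes(String dataSource, String segmentLocation);
  }

  private final String dataSource;
  private final IndexStore store;
  private final SegmentBuilder builder;
  private final DeepStorage deepStorage;
  private final HandoffSignaler signaler;

  MergeAndHandoffTask(String dataSource, IndexStore store, SegmentBuilder builder,
                      DeepStorage deepStorage, HandoffSignaler signaler) {
    this.dataSource = dataSource;
    this.store = store;
    this.builder = builder;
    this.deepStorage = deepStorage;
    this.signaler = signaler;
  }

  @Override
  public void run() {
    // 1. Find every index this node has persisted for the data source.
    List<RealtimeIndex> persisted = store.findPersistedIndexes(dataSource);
    if (persisted.isEmpty()) {
      return;
    }
    // 2. Merge them into a single historical segment and push it to deep storage.
    byte[] segment = builder.merge(persisted);
    String location = deepStorage.upload(dataSource, segment);
    // 3. Signal historical nodes that the segment is ready to be served.
    signaler.signalHistoricalNodes(dataSource, location);
  }

  static void schedule(MergeAndHandoffTask task) {
    // Runs on a periodic basis, as described above; the interval here is arbitrary.
    ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
    exec.scheduleWithFixedDelay(task, 10, 10, TimeUnit.MINUTES);
  }
}
```
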
Similar to historical nodes, real-time nodes announce segments in
Zookeeper. Unlike historical segments, real-time segments may
represent a period of time that extends into the future. For example,
a real-time node may announce it is serving a segment that contains
@@ -384,7 +384,7 @@ query, it first maps the query to a set of segments. A subset of
these segment results may already exist in the cache and the results
can be directly pulled from the cache. For any segment results that
do not exist in the cache, the broker node will forward the query
to the historical nodes. Once the historical nodes return their results,
the broker will store those results in the cache. Real-time segments
are never cached and hence requests for real-time data will always
be forwarded to real-time nodes. Real-time data is perpetually
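Although the paragraph above is cut off by the diff, the per-segment caching behaviour it describes can be sketched as follows; Cache, SegmentQueryRunner, and Result are hypothetical interfaces, not Druid's broker API.

```java
// Sketch of the broker's per-segment cache check described in the hunk above.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class BrokerCacheSketch {

  interface Result { }

  interface Cache {
    Result get(String segmentId);                   // null on miss
    void put(String segmentId, Result result);
  }

  interface SegmentQueryRunner {
    // Forwards to whichever historical or real-time nodes serve these segments.
    Map<String, Result> query(List<String> segmentIds);
  }

  private final Cache cache;
  private final SegmentQueryRunner nodes;

  BrokerCacheSketch(Cache cache, SegmentQueryRunner nodes) {
    this.cache = cache;
    this.nodes = nodes;
  }

  /** Serve per-segment results from cache where possible; forward only the misses. */
  public Map<String, Result> run(List<String> segmentIds, Set<String> realtimeSegmentIds) {
    Map<String, Result> results = new HashMap<>();
    List<String> misses = new ArrayList<>();
    for (String id : segmentIds) {
      boolean realtime = realtimeSegmentIds.contains(id);
      Result cached = realtime ? null : cache.get(id);   // real-time segments are never cached
      if (cached != null) {
        results.put(id, cached);
      } else {
        misses.add(id);
      }
    }
    if (!misses.isEmpty()) {
      Map<String, Result> fresh = nodes.query(misses);
      results.putAll(fresh);
      fresh.forEach((id, result) -> {
        if (!realtimeSegmentIds.contains(id)) {
          cache.put(id, result);                         // populate the cache for later queries
        }
      });
    }
    return results;
  }
}
```
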
@@ -412,16 +412,16 @@ table can be updated by any service that creates historical
segments. The MySQL database also contains a rule table that governs
how segments are created, destroyed, and replicated in the cluster.

The master does not directly communicate with a historical node when
assigning it work; instead the master creates an ephemeral znode in
Zookeeper containing information about what the historical node should
do. The historical node maintains a similar connection to Zookeeper to
monitor for new work.

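A sketch of that znode-based hand-off using the plain ZooKeeper client API; the /druid/loadQueue path layout and the LOAD instruction format are assumptions made only for illustration.

```java
// Sketch of the znode-based work assignment described above.
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkAssignmentSketch {

  // Master side: enqueue a load instruction for a specific historical node.
  static void assignWork(ZooKeeper zk, String nodeId, String segmentId) throws Exception {
    String path = "/druid/loadQueue/" + nodeId + "/" + segmentId;
    byte[] instruction = ("LOAD " + segmentId).getBytes(StandardCharsets.UTF_8);
    // Ephemeral: if the master's session dies, its outstanding instructions vanish with it.
    zk.create(path, instruction, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
  }

  // Historical-node side: read its own queue path and act on whatever appears.
  static void pollForWork(ZooKeeper zk, String nodeId) throws Exception {
    String queuePath = "/druid/loadQueue/" + nodeId;
    List<String> tasks = zk.getChildren(queuePath, true);   // true = leave a watch for new children
    for (String task : tasks) {
      byte[] data = zk.getData(queuePath + "/" + task, false, null);
      System.out.println("received instruction: " + new String(data, StandardCharsets.UTF_8));
      // ...load or drop the segment, then delete the znode to acknowledge the work...
      zk.delete(queuePath + "/" + task, -1);
    }
  }
}
```
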
\subsubsection{Rules}
Rules govern how historical segments are loaded and dropped from the cluster.
Rules indicate how segments should be assigned to
different historical node tiers and how many replicates of a segment
should exist in each tier. Rules may also indicate when segments
should be dropped entirely from the cluster. Rules are usually set for a period of time.
For example, a user may use rules to load the most recent one month's worth of segments into a "hot" cluster,
@@ -435,7 +435,7 @@ match each segment with the first rule that applies to it.

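The context line above refers to first-match rule evaluation: each segment is matched against an ordered rule list, and only the first applicable rule is applied. A small illustrative sketch, with hypothetical Rule, Segment, and PeriodLoadRule types rather than Druid's actual rule classes:

```java
// Illustrative first-match rule evaluation for the semantics mentioned above.
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Optional;

public class RuleMatcher {

  record Segment(String id, Instant intervalStart, Instant intervalEnd) { }

  interface Rule {
    boolean appliesTo(Segment segment, Instant now);
    void apply(Segment segment);                    // e.g. load into a tier, or drop
  }

  /** Example rule: load segments whose interval falls within a trailing period into a tier. */
  record PeriodLoadRule(Duration period, String tier, int replicants) implements Rule {
    public boolean appliesTo(Segment s, Instant now) {
      return s.intervalEnd().isAfter(now.minus(period));
    }
    public void apply(Segment s) {
      System.out.printf("load %s into tier %s with %d replicants%n", s.id(), tier, replicants);
    }
  }

  /** Walk the ordered rule list and apply only the first rule that matches. */
  static Optional<Rule> firstMatch(Segment segment, List<Rule> orderedRules, Instant now) {
    for (Rule rule : orderedRules) {
      if (rule.appliesTo(segment, now)) {
        rule.apply(segment);
        return Optional.of(rule);
      }
    }
    return Optional.empty();                        // no rule matched; leave the segment alone
  }
}
```
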
\subsubsection{Load Balancing}
In a typical production environment, queries often hit dozens or even
hundreds of data segments. Since each historical node has limited
resources, historical segments must be distributed among the cluster
to ensure that the cluster load is not too imbalanced. Determining
optimal load distribution requires some knowledge about query patterns
@@ -445,7 +445,7 @@ access smaller segments are faster.

These query patterns suggest replicating recent historical segments at
a higher rate, spreading out large segments that are close in time to
different historical nodes, and co-locating segments from different data
sources. To optimally distribute and balance segments among the
cluster, we developed a cost-based optimization procedure that takes
into account the segment data source, recency, and size. The exact
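The exact cost formulation is cut off by this hunk and is not reproduced here; the toy scoring below is purely hypothetical and only illustrates how the three named inputs (data source, recency, size) might enter such a cost.

```java
// Purely hypothetical toy cost; NOT Druid's balancing cost function.
import java.time.Duration;
import java.time.Instant;

public class ToyBalancingCost {

  record SegmentInfo(String dataSource, Instant intervalEnd, long sizeBytes) { }

  /**
   * Hypothetical joint cost of placing `candidate` on a node that already holds `existing`:
   * higher when the segments are recent, large, close in time, or from the same data source,
   * so a balancer minimizing total cost tends to spread exactly those segments apart.
   */
  static double cost(SegmentInfo candidate, SegmentInfo existing, Instant now) {
    double sizeTerm = (candidate.sizeBytes() + existing.sizeBytes()) / 1e9;   // GB, arbitrary scale
    double gapDays = Math.abs(Duration.between(candidate.intervalEnd(), existing.intervalEnd()).toDays());
    double proximityTerm = 1.0 / (1.0 + gapDays);                             // close in time => costly
    double ageDays = Math.max(1, Duration.between(candidate.intervalEnd(), now).toDays());
    double recencyTerm = 1.0 / ageDays;                                       // recent => costly
    double sameSourceTerm = candidate.dataSource().equals(existing.dataSource()) ? 1.0 : 0.5;
    return sizeTerm * proximityTerm * recencyTerm * sameSourceTerm;
  }
}
```
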
@@ -628,7 +628,7 @@ Druid replicates historical segments on multiple hosts. The number of
replicates in each tier of the historical cluster is fully
configurable. Setups that require high levels of fault tolerance can
be configured to have a high number of replicates. Replicates are
assigned to historical nodes by coordination nodes using the same load
distribution algorithm discussed in Section~\ref{sec:caching}. Broker nodes forward queries to the first node they find that contains a segment required for the query.

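A sketch of "forward to the first node found": the broker keeps a map from each segment to the nodes that announced it (via Zookeeper) and simply takes the first entry. The class below is illustrative only, not Druid's broker code.

```java
// Sketch of replica selection: first announced node wins.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReplicaSelectionSketch {

  // segmentId -> nodes currently announcing that segment (in discovery order)
  private final Map<String, List<String>> serversBySegment = new HashMap<>();

  /** Called when a node announces (in Zookeeper) that it is serving a segment. */
  public void onAnnounce(String segmentId, String node) {
    serversBySegment.computeIfAbsent(segmentId, k -> new ArrayList<>()).add(node);
  }

  /** Called when a node disappears; its replicas are no longer candidates. */
  public void onNodeGone(String node) {
    serversBySegment.values().forEach(nodes -> nodes.remove(node));
  }

  /** Pick a node per required segment: simply the first replica the broker knows about. */
  public Map<String, String> chooseNodes(List<String> requiredSegments) {
    Map<String, String> choice = new HashMap<>();
    for (String segmentId : requiredSegments) {
      List<String> nodes = serversBySegment.getOrDefault(segmentId, List.of());
      if (!nodes.isEmpty()) {
        choice.put(segmentId, nodes.get(0));        // "first node they find"
      }
    }
    return choice;
  }
}
```
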
Real-time segments follow a different replication model as real-time
@@ -641,7 +641,7 @@ reload any indexes that were persisted to disk and read from the
message bus from the point it last committed an offset.

\subsection{Failure Detection}
If a historical node completely fails and becomes unavailable, the
ephemeral Zookeeper znodes it created are deleted. The master node
will notice that certain segments are insufficiently replicated or
missing altogether. Additional replicates will be created and
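The failure-detection paragraph above is truncated by the diff, but the ephemeral-znode mechanism it describes can be sketched with the plain ZooKeeper API; the /druid/announcements path and the ReplicationPlanner callback are hypothetical.

```java
// Hypothetical sketch of failure detection via ephemeral znodes.
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class NodeFailureWatcher implements Watcher {

  interface ReplicationPlanner {
    void replanFor(String failedNode);              // create additional replicates elsewhere
  }

  private final ZooKeeper zk;
  private final ReplicationPlanner planner;
  private final Set<String> knownNodes = new HashSet<>();

  NodeFailureWatcher(ZooKeeper zk, ReplicationPlanner planner) {
    this.zk = zk;
    this.planner = planner;
  }

  /** Read the current announcements and leave a watch so we hear about future changes. */
  public synchronized void refresh() throws Exception {
    List<String> current = zk.getChildren("/druid/announcements", this);
    for (String node : knownNodes) {
      if (!current.contains(node)) {
        // The node's ephemeral znode is gone: its segments may now be under-replicated.
        planner.replanFor(node);
      }
    }
    knownNodes.clear();
    knownNodes.addAll(current);
  }

  @Override
  public void process(WatchedEvent event) {
    if (event.getType() == Event.EventType.NodeChildrenChanged) {
      try {
        refresh();                                  // re-read the children and re-arm the watch
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }
}
```
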
@@ -742,7 +742,7 @@ count, achieving scan rates of 33M rows/second/core. We believe
the 75 node cluster was actually overprovisioned for the test
dataset, explaining the modest improvement over the 50 node cluster.
Druid's concurrency model is based on shards: one thread will scan one
shard. If the number of segments on a historical node modulo the number
of cores is small (e.g. 17 segments and 15 cores), then many of the
cores will be idle during the last round of the computation.

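Working through the 17-segment, 15-core example above:

\[
  \left\lceil \tfrac{17}{15} \right\rceil = 2 \text{ rounds}, \qquad
  17 \bmod 15 = 2 \text{ segments scanned in the last round}, \qquad
  15 - 2 = 13 \text{ cores idle during that round.}
\]
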