---
id: zookeeper
title: "ZooKeeper"
---
Apache Druid (incubating) uses Apache ZooKeeper (ZK) to manage current cluster state. The operations that happen over ZK are:
- Coordinator leader election
- Segment "publishing" protocol from Historical
- Segment load/drop protocol between Coordinator and Historical
- Overlord leader election
- Overlord and MiddleManager task management
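For orientation, the ZooKeeper connection and the base znode under which the paths discussed below live are configured in `common.runtime.properties`. A minimal sketch, with illustrative host names and values (not required defaults):

```properties
# ZooKeeper connection string (illustrative hosts)
druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# Base znode; the druid.zk.paths.* values referenced below (coordinatorPath,
# announcementsPath, servedSegmentsPath, loadQueuePath) are children of this path
druid.zk.paths.base=/druid
```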
## Coordinator Leader Election
We use the Curator `LeaderLatch` recipe to do leader election at the path `${druid.zk.paths.coordinatorPath}/_COORDINATOR`.
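As a rough sketch of how a leader latch at that path behaves (this is not Druid's actual Coordinator code; the connection string, latch path, and participant id below are placeholders):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LeaderLatchSketch
{
  public static void main(String[] args) throws Exception
  {
    // Placeholder connection string and path; Druid derives the real values
    // from druid.zk.service.host and druid.zk.paths.coordinatorPath.
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zk1.example.com:2181",
        new ExponentialBackoffRetry(1000, 3)
    );
    client.start();

    // Every Coordinator participates in the latch; ZooKeeper picks one leader.
    try (LeaderLatch latch = new LeaderLatch(client, "/druid/coordinator/_COORDINATOR", "coordinator-host:8081")) {
      latch.start();
      latch.await();                  // blocks until this participant becomes leader
      if (latch.hasLeadership()) {
        System.out.println("This process is now the Coordinator leader");
      }
    } finally {
      client.close();
    }
  }
}
```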
Segment "publishing" protocol from Historical and Realtime
The `announcementsPath` and `servedSegmentsPath` are used for this.
All Historical processes publish themselves on the `announcementsPath`; specifically, each creates an ephemeral znode at `${druid.zk.paths.announcementsPath}/${druid.host}`, which signifies that it exists. Each also subsequently creates a permanent znode at `${druid.zk.paths.servedSegmentsPath}/${druid.host}`, and as it loads segments, it attaches ephemeral child znodes that look like `${druid.zk.paths.servedSegmentsPath}/${druid.host}/_segment_identifier_`.
Processes like the Coordinator and Broker can then watch these paths to see which processes are currently serving which segments.
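To make that layout concrete, here is a schematic sketch of creating those znodes directly with the Curator API (Druid's real announcer code is more involved; the connection string, base paths, host, and segment identifier are placeholders):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class SegmentAnnouncementSketch
{
  public static void main(String[] args) throws Exception
  {
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zk1.example.com:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();

    String host = "historical-host:8083";   // placeholder for ${druid.host}

    // Ephemeral announcement znode: removed automatically if the process dies.
    client.create()
          .creatingParentsIfNeeded()
          .withMode(CreateMode.EPHEMERAL)
          .forPath("/druid/announcements/" + host);

    // Permanent znode acting as the parent for served-segment entries
    // (ephemeral znodes cannot have children).
    client.create()
          .creatingParentsIfNeeded()
          .withMode(CreateMode.PERSISTENT)
          .forPath("/druid/servedSegments/" + host);

    // One ephemeral child per loaded segment (placeholder identifier).
    client.create()
          .withMode(CreateMode.EPHEMERAL)
          .forPath("/druid/servedSegments/" + host + "/wikipedia_2024-01-01_2024-01-02_v1");

    // Coordinators and Brokers can watch these paths (for example with a
    // Curator PathChildrenCache) to learn which processes serve which segments.
    client.close();
  }
}
```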
## Segment load/drop protocol between Coordinator and Historical
The `loadQueuePath` is used for this.
When the Coordinator decides that a Historical process should load or drop a segment, it writes an ephemeral znode to `${druid.zk.paths.loadQueuePath}/_host_of_historical_process/_segment_identifier`.
This znode will contain a payload that indicates to the Historical process what it should do with the given segment. When the Historical process is done with the work, it will delete the znode in order to signify to the Coordinator that it is complete.
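A sketch of what the Historical side of this exchange could look like using Curator's `PathChildrenCache` (this is not the actual Druid implementation; the connection string, path, host, and payload handling are simplified placeholders):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LoadQueueWatcherSketch
{
  public static void main(String[] args) throws Exception
  {
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zk1.example.com:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();

    // Placeholder for ${druid.zk.paths.loadQueuePath}/_host_of_historical_process
    String myLoadQueuePath = "/druid/loadqueue/historical-host:8083";

    PathChildrenCache cache = new PathChildrenCache(client, myLoadQueuePath, true);
    cache.getListenable().addListener((c, event) -> {
      if (event.getType() == PathChildrenCacheEvent.Type.CHILD_ADDED) {
        // The znode payload tells the Historical whether to load or drop the segment.
        byte[] payload = event.getData().getData();
        String requestPath = event.getData().getPath();

        handleLoadOrDrop(payload);           // placeholder for the actual segment work

        // Deleting the znode signals to the Coordinator that the request is complete.
        c.delete().guaranteed().forPath(requestPath);
      }
    });
    cache.start();
  }

  private static void handleLoadOrDrop(byte[] payload)
  {
    // Placeholder: parse the payload and load or drop the segment accordingly.
  }
}
```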