mirror of https://github.com/apache/druid.git
8c802e4c9b
In the current design, brokers query both data nodes and tasks to fetch the schema of the segments they serve. The table schema is then constructed by combining the schemas of all segments within a datasource. However, this approach results in a large number of segment metadata queries during broker startup, leading to slow startup times and the other issues outlined in the design proposal.

To address these challenges, we propose centralizing table schema management in the coordinator. This change is the first step in that direction. In the new arrangement, the coordinator takes on the responsibility of querying both data nodes and tasks to fetch segment schemas and then building the table schema. Brokers simply query the coordinator to fetch the table schema. Importantly, brokers retain the capability to build table schemas themselves if the need arises, ensuring both flexibility and resilience.
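Since the coordinator now builds the table schema by combining per-segment schemas, a minimal sketch of that merge step may help. This is an illustrative Java example, not Druid's actual implementation: the SegmentSchema type, the string type names, and the conflict-resolution rule (the later segment wins) are all assumptions made for the sketch.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of building a table schema from segment
// schemas; Druid's real code paths differ.
public class TableSchemaMerger
{
  // A segment's schema: column name -> type name (e.g. "STRING", "LONG").
  public record SegmentSchema(Map<String, String> columns) {}

  /**
   * Merge per-segment schemas into one table schema by taking the union
   * of all columns. When two segments disagree on a column's type, the
   * later segment in the list wins (a simplifying assumption here).
   */
  public static Map<String, String> mergeSchemas(List<SegmentSchema> segmentSchemas)
  {
    Map<String, String> tableSchema = new LinkedHashMap<>();
    for (SegmentSchema segment : segmentSchemas) {
      tableSchema.putAll(segment.columns());
    }
    return tableSchema;
  }

  public static void main(String[] args)
  {
    SegmentSchema older = new SegmentSchema(Map.of("country", "STRING", "clicks", "LONG"));
    SegmentSchema newer = new SegmentSchema(
        Map.of("country", "STRING", "clicks", "DOUBLE", "device", "STRING")
    );
    // The merged schema contains the union of columns; "clicks" takes the
    // later segment's type, DOUBLE.
    System.out.println(mergeSchemas(List.of(older, newer)));
  }
}
```

The real implementation also has to handle type coercion and incremental updates as segments are added and dropped; the union-of-columns idea above is only the core of the merge.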
Scripts in this directory:

- build_run_k8s_cluster.sh
- copy_hadoop_resources.sh
- copy_resources_template.sh
- docker_build_containers.sh
- docker_compose_args.sh
- docker_run_cluster.sh
- setup_druid_on_k8s.sh
- setup_druid_operator_on_k8s.sh
- setup_k8s_cluster.sh
- stop_k8s_cluster.sh