druid/integration-tests/script
Rishabh Singh e30790e013
Introduce Segment Schema Publishing and Polling for Efficient Datasource Schema Building (#15817)
Issue: #14989

The initial step in optimizing segment metadata handling was to centralize the construction of datasource schemas in the Coordinator (#14985). We then addressed publishing schemas for realtime segments (#15475). The remaining goal is to eliminate the need to regularly run segment metadata queries to obtain segment schema information.

This is the final change: tasks publish segment schemas for finalized segments, and the Coordinator periodically polls them.
2024-04-24 22:22:53 +05:30
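
For context on what this change enables, the sketch below shows the kind of runtime properties involved in turning on centralized datasource schema building. The property names and config file paths are assumptions drawn from the feature description above; verify them against the Druid configuration reference for the release in use.

    # Minimal sketch, assuming the centralized-datasource-schema property name
    # and the standard cluster config layout; confirm both against the docs.

    # Coordinator: build datasource schemas centrally and poll published schemas.
    cat >> conf/druid/cluster/master/coordinator-overlord/runtime.properties <<'EOF'
    druid.centralizedDatasourceSchema.enabled=true
    EOF

    # MiddleManager: pass the same flag to peon tasks so finalized-segment
    # schemas are published (fork-property form is an assumption).
    cat >> conf/druid/cluster/data/middleManager/runtime.properties <<'EOF'
    druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled=true
    EOF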
build_run_k8s_cluster.sh Use K3S instead of minikube for integration tests (#13782) 2023-02-17 23:06:30 +05:30
copy_hadoop_resources.sh Fixing typos in docker build scripts (#11866) 2021-11-02 23:50:52 +05:30
copy_resources_template.sh Build reliability fixes (#15048) 2023-09-28 12:27:52 -07:00
docker_build_containers.sh run some integration tests with Java 21 (#15104) 2023-10-20 11:18:13 +08:00
docker_compose_args.sh Introduce Segment Schema Publishing and Polling for Efficient Datasource Schema Building (#15817) 2024-04-24 22:22:53 +05:30
docker_run_cluster.sh Migrate to use docker compose v2 (#16232) 2024-04-03 12:32:55 +02:00
setup_druid_on_k8s.sh Reduce the size of distribution docker image (#15968) 2024-02-26 21:18:55 +05:30
setup_druid_operator_on_k8s.sh add latest version of druid operator to integration tests (#13883) 2023-03-10 16:11:25 +05:30
setup_k8s_cluster.sh Use K3S instead of minikube for integration tests (#13782) 2023-02-17 23:06:30 +05:30
stop_k8s_cluster.sh Use K3S instead of minikube for integration tests (#13782) 2023-02-17 23:06:30 +05:30
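
The K8s-related scripts appear to compose into a single test-cluster lifecycle. The sequence below is a rough sketch inferred only from the script names: build_run_k8s_cluster.sh may already orchestrate the setup steps, and CI likely supplies environment variables or arguments that are omitted here.

    #!/usr/bin/env bash
    # Hypothetical end-to-end flow for the K8s integration-test scripts;
    # ordering inferred from names, not from reading the scripts themselves.
    set -e
    ./setup_k8s_cluster.sh             # stand up a local K3S cluster
    ./setup_druid_operator_on_k8s.sh   # install the druid-operator
    ./setup_druid_on_k8s.sh            # deploy a Druid cluster via the operator
    ./build_run_k8s_cluster.sh         # build images and run the test cluster
    # ... run the integration test suite ...
    ./stop_k8s_cluster.sh              # tear the K3S cluster back down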