Apache Druid: a high performance real-time analytics database.
Vishesh Garg 197c54f673
Auto-Compaction using Multi-Stage Query Engine (#16291)
Description:
Compaction operations issued by the Coordinator currently run using the native query engine.
Since the majority of the advancements we are making in batch ingestion are in MSQ, it is imperative
that we support compaction on MSQ to make compaction more robust and possibly faster.
For instance, we have seen OOM errors in native compaction that MSQ could have handled through its
auto-calculation of tuning parameters.

This commit enables compaction on MSQ to remove the dependency on the native engine.

Main changes:
* `DataSourceCompactionConfig` now has an additional field `engine` that can be one of
`[native, msq]`, with `native` being the default (see the example config below).
* If the engine is MSQ, the `CompactSegments` duty assigns all available compaction task slots to the
launched `CompactionTask` to ensure full capacity is available to MSQ. This avoids stalling, which
could occur if only a fraction of the slots were allotted and they eventually fell short of the number
of tasks required by the MSQ engine to run the compaction.
* `ClientCompactionTaskQuery` has a new field `compactionRunner` with just one `engine` field.
* `CompactionTask` now has a `CompactionRunner` interface instance, with implementations
`NativeCompactionRunner` and `MSQCompactionRunner` in the `druid-multi-stage-query` extension.
The ObjectMapper deserializes the `ClientCompactionRunnerInfo` in `ClientCompactionTaskQuery` to the
`CompactionRunner` implementation mapped to the specified type (`native` or `msq`).
* `CompactionTask` uses the `CompactionRunner` instance it receives to create the indexing tasks.
* `CompactionTask` to `MSQControllerTask` conversion logic checks whether metrics are present in 
the segment schema. If present, the task is created with a native group-by query; if not, the task is
issued with a scan query. The `storeCompactionState` flag is set in the context.
* Each created `MSQControllerTask` is launched in-place and its `TaskStatus` is tracked to determine the
final status of the `CompactionTask`. The ID of each of these tasks is the same as that of the `CompactionTask`,
since otherwise the workers would be unable to determine the controller task's location for communication
(as these tasks are not launched via the Overlord).
2024-07-12 16:40:20 +05:30
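To illustrate the new field, a datasource's auto-compaction config could opt into the MSQ engine roughly as follows. This is a minimal sketch: only `engine` is introduced by this change, while the surrounding fields (`dataSource`, `skipOffsetFromLatest`) and the Coordinator endpoint `/druid/coordinator/v1/config/compaction` are assumed from the existing auto-compaction setup and are shown only for context.

```json
{
  "dataSource": "wikipedia",
  "skipOffsetFromLatest": "PT1H",
  "engine": "msq"
}
```

Omitting `engine`, or setting it to `native`, keeps the current behavior.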
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| .github | Remove index_realtime and index_realtime_appenderator tasks (#16602) | 2024-06-24 20:13:33 -07:00 |
| .idea | Ignore misc.xml (#14362) | 2023-06-02 12:00:52 +05:30 |
| benchmarks | Auto-Compaction using Multi-Stage Query Engine (#16291) | 2024-07-12 16:40:20 +05:30 |
| cloud | Prepare master branch for 31.0.0 release (#16333) | 2024-04-26 09:22:43 +05:30 |
| codestyle | Support sorting on complex columns in MSQ (#16322) | 2024-05-13 15:07:05 +05:30 |
| dev | Update intellij-setup.md (#16655) | 2024-06-26 17:38:37 +05:30 |
| distribution | Remove index_realtime and index_realtime_appenderator tasks (#16602) | 2024-06-24 20:13:33 -07:00 |
| docs | Web console: better sql data loader reset (#16696) | 2024-07-11 14:45:04 -07:00 |
| examples | Update examples/bin/dsql scripts to accept Python 3 (#16677) | 2024-07-03 15:52:57 +08:00 |
| extensions-contrib | Correct aggregators violating names (#16615) | 2024-07-12 09:18:09 +02:00 |
| extensions-core | Auto-Compaction using Multi-Stage Query Engine (#16291) | 2024-07-12 16:40:20 +05:30 |
| hooks | Git hooks should fail on errors; pass args to git hooks (#12322) | 2022-03-10 09:07:50 +09:00 |
| indexing-hadoop | Column name in parse exceptions (#16529) | 2024-06-25 13:42:52 -07:00 |
| indexing-service | Auto-Compaction using Multi-Stage Query Engine (#16291) | 2024-07-12 16:40:20 +05:30 |
| integration-tests | Auto-Compaction using Multi-Stage Query Engine (#16291) | 2024-07-12 16:40:20 +05:30 |
| integration-tests-ex | Revert "Downgrade the version of Apache Curator from 5.5.0 to 5.3.0 to avoid a bug in the new version (#16425)" (#16688) | 2024-07-03 11:18:50 +05:30 |
| licenses | Web console: upgrade axios and follow-redirects (#16087) | 2024-03-11 18:57:00 -07:00 |
| processing | Auto-Compaction using Multi-Stage Query Engine (#16291) | 2024-07-12 16:40:20 +05:30 |
| publications | De-incubation cleanup in code, docs, packaging (#9108) | 2020-01-03 12:33:19 -05:00 |
| server | Auto-Compaction using Multi-Stage Query Engine (#16291) | 2024-07-12 16:40:20 +05:30 |
| services | Refactor `SegmentLoadDropHandler` code (#16685) | 2024-07-08 09:29:55 +05:30 |
| sql | Auto-Compaction using Multi-Stage Query Engine (#16291) | 2024-07-12 16:40:20 +05:30 |
| web-console | Web console: better sql data loader reset (#16696) | 2024-07-11 14:45:04 -07:00 |
| website | docs: add redirect for kafka lookups (#16668) | 2024-06-27 10:56:51 -07:00 |
| .asf.yaml | .asf.yaml: Add required "repository" field. (#14499) | 2023-06-28 15:05:07 -07:00 |
| .backportrc.json | Add 0.18.0 to .backportrc.json to facilitate backport. (#9661) | 2020-04-11 13:49:04 -07:00 |
| .codecov.yml | Use Codecov (#8388) | 2019-08-28 08:49:30 -07:00 |
| .dockerignore | Add docker container for druid (#6896) | 2019-02-08 12:12:28 +00:00 |
| .gitignore | add ignore path (#16429) | 2024-05-11 17:54:52 +08:00 |
| .lgtm.yml | be consistent about referring to the web console by its name (#13118) | 2022-09-19 15:02:17 -07:00 |
| CONTRIBUTING.md | Document our conventions for writing messages (#13916) | 2023-04-03 21:30:20 -07:00 |
| LABELS | Fixing security vulnerability check errors (#13956) | 2023-03-23 11:10:06 +05:30 |
| LICENSE | Adding the PropertyNamingStrategies from jackson for fixing hadoop ingestion (#14671) | 2023-08-01 20:02:43 +05:30 |
| NOTICE | Update notice file. (#15702) | 2024-01-23 15:56:22 +05:30 |
| README.md | Fix workflow labeler parameter to match the correct status img (#16142) | 2024-03-20 11:05:17 +05:30 |
| README.template | De-incubation cleanup in code, docs, packaging (#9108) | 2020-01-03 12:33:19 -05:00 |
| check_test_suite.py | Update Hadoop3 as default build version (#14005) | 2023-04-26 12:52:51 +05:30 |
| check_test_suite_test.py | remove Travis CI (#13789) | 2023-02-10 01:46:56 -08:00 |
| doap_Druid.rdf | Fix the created property in DOAP RDF file (#14971) | 2023-09-13 06:12:35 -07:00 |
| it.sh | Build reliablity fixes (#15048) | 2023-09-28 12:27:52 -07:00 |
| licenses.yaml | Revert "Downgrade the version of Apache Curator from 5.5.0 to 5.3.0 to avoid a bug in the new version (#16425)" (#16688) | 2024-07-03 11:18:50 +05:30 |
| owasp-dependency-check-suppressions.xml | Fix CVE errors (#16147) | 2024-04-05 17:53:09 +05:30 |
| pom.xml | Revert "Downgrade the version of Apache Curator from 5.5.0 to 5.3.0 to avoid a bug in the new version (#16425)" (#16688) | 2024-07-03 11:18:50 +05:30 |
| rewrite.yml | Update Calcite*Test to use junit5 (#16106) | 2024-03-19 04:05:12 -07:00 |
| upload.sh | Adding licenses and enable apache-rat-plugin. (#6215) | 2018-09-18 08:39:26 -07:00 |

README.md

[Badges: Coverage Status, Docker, Helm]

[GitHub workflow status badges: CodeQL Config, CodeQL, Cron Job ITS, Labeler, Reusable Revised ITS, Reusable Standard ITS, Reusable Unit Tests, Revised ITS, Standard ITS, Static Checks, Unit and Integration Tests Unified, Unit Tests]

Website | Twitter | Download | Get Started | Documentation | Community | Build | Contribute | License


Apache Druid

Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.

Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.

Getting started

You can get started with Druid using our local or Docker quickstart.

Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in web console (shown below).
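For example, a Druid SQL query can be issued over HTTP by POSTing a small JSON payload to the `/druid/v2/sql` endpoint on the Router or Broker; the `wikipedia` datasource below is just the quickstart example and stands in for any of your tables:

```json
{
  "query": "SELECT channel, COUNT(*) AS edits FROM wikipedia GROUP BY channel ORDER BY edits DESC LIMIT 10"
}
```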

Load data

[Screenshot: the data loader configured for Kafka ingestion]

Load streaming and batch data using a point-and-click wizard to guide you through ingestion setup. Monitor one-off tasks and ingestion supervisors.

Manage the cluster

[Screenshot: the web console's management view]

Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL system tables, allowing you to see the underlying query for each view.
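Those system tables can also be queried directly. As an illustration (the exact query the console runs may differ), a segments overview in the same spirit could be POSTed to `/druid/v2/sql`:

```json
{
  "query": "SELECT datasource, COUNT(*) AS num_segments, SUM(\"size\") AS total_bytes FROM sys.segments GROUP BY datasource ORDER BY total_bytes DESC"
}
```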

Issue queries

[Screenshot: the query workbench]

Use the built-in query workbench to prototype DruidSQL and native queries or connect one of the many tools that help you make the most out of Druid.
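Native queries are expressed as JSON. A minimal example of the kind of query you might prototype in the workbench is a scan over the quickstart `wikipedia` datasource (the datasource, interval, and columns here are illustrative):

```json
{
  "queryType": "scan",
  "dataSource": "wikipedia",
  "intervals": ["2015-09-12/2015-09-13"],
  "columns": ["__time", "channel", "page"],
  "limit": 10
}
```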

Documentation

See the latest documentation for the current official release. If you need information on a previous release, you can browse the documentation for previous releases.

Make documentation and tutorial updates in /docs using Markdown or extended Markdown (MDX). Then, open a pull request.

To build the site locally, you need Node 16.14 or higher. Install Docusaurus 2 by running `npm install` or `yarn install` in the website directory, then run `npm start` or `yarn start` to launch a local build of the docs.

If you're looking to update non-doc pages like Use Cases, those files are in the druid-website-src repo.

Community

Visit the official project community page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.

  • Druid users can find help in the druid-user mailing list on Google Groups, and have more technical conversations in #troubleshooting on Slack.
  • Druid development discussions take place in the druid-dev mailing list (dev@druid.apache.org). Subscribe by emailing dev-subscribe@druid.apache.org. For live conversations, join the #dev channel on Slack.

Check out the official community page for details of how to join the community Slack channels.

Find articles written by community members and a calendar of upcoming events on the project site - contribute your own events and articles by submitting a PR in the apache/druid-website-src repository.

Building from source

Please note that JDK 8 or JDK 11 is required to build Druid.

See the latest build guide for instructions on building Apache Druid from source.

Contributing

Please follow the community guidelines for contributing.

For instructions on setting up IntelliJ, see dev/intellij-setup.md.

License

Apache License, Version 2.0