Apache Druid: a high performance real-time analytics database.
## Enable Continuous auto kill (#14831)
### Description

This change enables the `KillUnusedSegments` coordinator duty to be scheduled continuously. Two things previously prevented this, or made it difficult:

1. If scheduled at a fast enough rate, the duty would find the same intervals to kill for the same datasources while kill tasks for those same datasources and intervals were already underway, wasting task slots on duplicated work.

2. The task resources used by auto kill were previously unbounded: on each duty run, if unused segments were found for any datasource, a kill task was submitted to kill them.

This PR solves both of these issues:

1. The duty now tracks, in an in-memory map, the end time of the last interval found when killing unused segments for each datasource. When present, that end time is used as the lower bound on the start time when searching for unused intervals in the same datasource. On each run, the duty removes map entries for datasources that no longer exist in the system or in the whitelist, and it also removes a datasource's entry when no unused segments are found for it (that is, when no interval containing unused segments can be found). Removing the entry lets the search for unused segments in that datasource start from the beginning of time once again (see the sketch after this list).

2. The unbounded task resource usage can be mitigated with the coordinator dynamic config added as part of ba957a9b97.
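
For illustration, here is a minimal sketch of the bookkeeping described in point 1. The class and method names are hypothetical, not the actual duty implementation; it uses Joda-Time, which Druid uses for timestamps:

```
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.joda.time.DateTime;

// Hypothetical sketch of the per-datasource bookkeeping described above.
class KillIntervalTracker
{
  // datasource name -> end time of the last interval submitted for killing
  private final Map<String, DateTime> lastKillIntervalEnd = new HashMap<>();

  // Lower bound for the next unused-interval search; null means search
  // from the beginning of time.
  DateTime getSearchStartTime(String dataSource)
  {
    return lastKillIntervalEnd.get(dataSource);
  }

  // Record the end of the interval just submitted to a kill task.
  void recordKilledInterval(String dataSource, DateTime intervalEnd)
  {
    lastKillIntervalEnd.put(dataSource, intervalEnd);
  }

  // Each run: drop datasources that no longer exist or are not eligible.
  void retainOnly(Set<String> eligibleDataSources)
  {
    lastKillIntervalEnd.keySet().retainAll(eligibleDataSources);
  }

  // Forget a datasource with no unused segments, so the next search for it
  // starts from the beginning of time again.
  void reset(String dataSource)
  {
    lastKillIntervalEnd.remove(dataSource);
  }
}
```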


Operators can configure continuous auto kill by providing coordinator runtime properties similar to the following:

```
druid.coordinator.period.indexingPeriod=PT60S
druid.coordinator.kill.period=PT60S
```

Operators should also set sensible limits on kill task usage via the coordinator dynamic config.
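
For example, kill task usage can be capped through the coordinator dynamic config API. The field names below assume the limits added alongside this work; check the coordinator dynamic configuration docs for your version:

```
{
  "killTaskSlotRatio": 0.1,
  "maxKillTaskSlots": 5
}
```

POST this to the coordinator's dynamic config endpoint (`/druid/coordinator/v1/config`) to bound the share and absolute number of task slots that auto kill may consume.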


Apache Druid

Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.

Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.

Getting started

You can get started with Druid using our local or Docker quickstart.
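
As a sketch of the local quickstart (assuming a recent release tarball; the `start-druid` script ships with recent versions, and the quickstart docs for your release are authoritative):

```
# Download and extract a release, then start Druid with the
# automatic single-machine configuration.
tar -xzf apache-druid-<version>-bin.tar.gz
cd apache-druid-<version>
./bin/start-druid
```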

Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in web console (shown below).
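
For instance, here is a minimal JDBC sketch using the Avatica driver against a local quickstart Router on port 8888 (assumes the Avatica JDBC client is on the classpath):

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DruidJdbcExample
{
  public static void main(String[] args) throws Exception
  {
    // Avatica JDBC endpoint exposed by the Router in the quickstart.
    final String url =
        "jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica/";
    try (Connection connection = DriverManager.getConnection(url);
         Statement statement = connection.createStatement();
         ResultSet resultSet = statement.executeQuery(
             "SELECT COUNT(*) AS cnt FROM sys.segments")) {
      while (resultSet.next()) {
        System.out.println("segments: " + resultSet.getLong("cnt"));
      }
    }
  }
}
```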

Load data

data loader Kafka

Load streaming and batch data using a point-and-click wizard to guide you through ingestion setup. Monitor one-off tasks and ingestion supervisors.

Manage the cluster

management

Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL system tables, allowing you to see the underlying query for each view.
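
For example, the segments view is backed by a query against the `sys` schema along these lines (simplified):

```
SELECT "datasource",
       COUNT(*) AS num_segments,
       SUM("size") AS total_bytes
FROM sys.segments
GROUP BY "datasource"
```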

Issue queries

query view combo

Use the built-in query workbench to prototype DruidSQL and native queries or connect one of the many tools that help you make the most out of Druid.
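
For example, a simple native timeseries query looks like the following and can be prototyped in the workbench or POSTed to `/druid/v2/` (the datasource name and interval are placeholders):

```
{
  "queryType": "timeseries",
  "dataSource": "my_datasource",
  "intervals": ["2023-01-01/2024-01-01"],
  "granularity": "all",
  "aggregations": [
    { "type": "count", "name": "row_count" }
  ]
}
```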

Documentation

See the latest documentation for the current official release. If you need information on a previous release, you can browse the documentation for previous releases.

Make documentation and tutorial updates in /docs using Markdown or extended Markdown (MDX). Then, open a pull request.

To build the site locally, you need Node 16.14 or higher. Install Docusaurus 2 with npm install or yarn install in the website directory, then run npm start or yarn start to launch a local build of the docs.
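
Concretely, with npm (substitute yarn if you prefer):

```
cd website
npm install   # install Docusaurus 2 and dependencies
npm start     # launch a local build of the docs
```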

If you're looking to update non-doc pages like Use Cases, those files are in the druid-website-src repo.

Community

Visit the official project community page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.

  • Druid users can find help in the druid-user mailing list on Google Groups, and have more technical conversations in #troubleshooting on Slack.
  • Druid development discussions take place in the druid-dev mailing list (dev@druid.apache.org). Subscribe by emailing dev-subscribe@druid.apache.org. For live conversations, join the #dev channel on Slack.

Check out the official community page for details on how to join the community Slack channels.

Find articles written by community members and a calendar of upcoming events on the project site. Contribute your own events and articles by submitting a PR in the apache/druid-website-src repository.

Building from source

Please note that JDK 8 or JDK 11 is required to build Druid.

See the latest build guide for instructions on building Apache Druid from source.
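
A typical developer build looks roughly like this (a sketch only; the build guide is authoritative, and profiles and flags vary by version):

```
# Requires JDK 8 or 11 and Maven.
# Builds the distribution tarball under distribution/target/.
mvn clean install -Pdist -DskipTests
```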

Contributing

Please follow the community guidelines for contributing.

For instructions on setting up IntelliJ, see dev/intellij-setup.md.

License

Apache License, Version 2.0