Apache Druid: a high performance real-time analytics database.
Latest commit 319f99db05 by Gian Merlino: Always use file sizes when determining batch ingest splits (#13955)
* Always use file sizes when determining batch ingest splits.

Main changes:

1) Update CloudObjectInputSource and its subclasses (S3, GCS,
   Azure, Aliyun OSS) to use SplitHintSpecs in all cases. Previously, they
   were only used for prefixes, not uris or objects.

2) Update ExternalInputSpecSlicer (MSQ) to consider file size. Previously,
   file size was ignored; all files were treated as equal weight when
   determining splits.

A side effect of these changes is that we'll make additional network
calls to find the sizes of objects when users specify URIs or objects
as opposed to prefixes. IMO, this is worth it because it's the only way
to respect the user's split hint and task assignment settings.
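
To illustrate the idea behind size-aware splitting, here is a minimal, self-contained sketch. It is not Druid's actual SplitHintSpec/MaxSizeSplitHintSpec code (in Druid the hint is configured through a splitHintSpec in the parallel indexing tuning config); the class and method names below are made up for this example. It greedily groups files into splits capped by a byte budget, which is the behavior that size information makes possible:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: greedily group files into splits whose combined size
// stays under a per-split byte budget, so a few huge objects no longer land
// in the same split as many tiny ones.
public class SizeAwareSplitter
{
  public static List<List<String>> split(List<String> uris, List<Long> sizes, long maxBytesPerSplit)
  {
    List<List<String>> splits = new ArrayList<>();
    List<String> current = new ArrayList<>();
    long currentBytes = 0;

    for (int i = 0; i < uris.size(); i++) {
      long size = sizes.get(i);
      // Start a new split if adding this file would exceed the budget.
      // A split always holds at least one file, even an oversized one.
      if (!current.isEmpty() && currentBytes + size > maxBytesPerSplit) {
        splits.add(current);
        current = new ArrayList<>();
        currentBytes = 0;
      }
      current.add(uris.get(i));
      currentBytes += size;
    }
    if (!current.isEmpty()) {
      splits.add(current);
    }
    return splits;
  }
}
```

Without file sizes, the equivalent grouping can only count files, which is why all files previously ended up treated as equal weight.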

Secondary changes:

1) S3, Aliyun OSS: Use getObjectMetadata instead of listObjects to get
   metadata for a single object. This is a simpler call that is also
   expected to be less expensive. (Both this and the Azure fix in item 2
   are sketched in the code example after this list.)

2) Azure: Fix a bug where getBlobLength did not populate blob
   reference attributes, and therefore would not actually retrieve the
   blob length.

3) MSQ: Align dynamic slicing logic between ExternalInputSpecSlicer and
   TableInputSpecSlicer.

4) MSQ: Adjust WorkerInputs to ensure there is always at least one
   worker, even if it has a nil slice.
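
For the object-store changes in items 1 and 2, here is a hedged sketch of the underlying SDK calls, assuming the AWS SDK for Java v1 and the older com.microsoft.azure.storage blob client (assumptions about the SDKs in use; the helper class and method names are illustrative, and the caller must construct authenticated clients):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

import java.net.URISyntaxException;

public class ObjectSizeLookups
{
  // S3 (and, in the analogous Aliyun OSS SDK, the same pattern): a
  // single-object metadata call instead of listing a prefix just to read
  // one object's size.
  public static long s3ObjectSize(AmazonS3 s3, String bucket, String key)
  {
    ObjectMetadata metadata = s3.getObjectMetadata(bucket, key);
    return metadata.getContentLength();
  }

  // Azure: attributes must be fetched explicitly before getProperties()
  // carries a meaningful length. Skipping downloadAttributes() is the kind
  // of gap described in item 2 above.
  public static long azureBlobSize(CloudBlobContainer container, String blobPath)
      throws URISyntaxException, StorageException
  {
    CloudBlockBlob blob = container.getBlockBlobReference(blobPath);
    blob.downloadAttributes();
    return blob.getProperties().getLength();
  }
}
```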

* Add msqCompatible to testGroupByWithImpossibleTimeFilter.

* Fix tests.

* Add additional tests.

* Remove unused stuff.

* Remove more unused stuff.

* Adjust thresholds.

* Remove irrelevant test.

* Fix comments.

* Fix bug.

* Updates.
Committed 2023-04-05 08:54:01 -07:00


Website | Twitter | Download | Get Started | Documentation | Community | Build | Contribute | License


Apache Druid

Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.

Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.

Getting started

You can get started with Druid using our local or Docker quickstart.

Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in web console (shown below).
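
As a minimal sketch of the JDBC path, the example below assumes a Router running at localhost:8888 (the quickstart default), the Apache Calcite Avatica JDBC client on the classpath, and the wikipedia datasource from the quickstart tutorial:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DruidJdbcExample
{
  public static void main(String[] args) throws Exception
  {
    // Avatica JDBC endpoint exposed through the Router.
    String url = "jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica/";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT __time, channel FROM wikipedia LIMIT 5")) {
      while (rs.next()) {
        System.out.println(rs.getTimestamp(1) + " " + rs.getString(2));
      }
    }
  }
}
```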

Load data

[Screenshot: data loader configuring Kafka ingestion]

Load streaming and batch data using a point-and-click wizard to guide you through ingestion setup. Monitor one-off tasks and ingestion supervisors.

Manage the cluster

[Screenshot: cluster management view]

Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL system tables, allowing you to see the underlying query for each view.

Issue queries

[Screenshot: query workbench]

Use the built-in query workbench to prototype Druid SQL and native queries, or connect one of the many tools that help you make the most out of Druid.
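
The same kind of query can also be issued programmatically against the HTTP SQL API. This sketch (Java 11+ for java.net.http) assumes a Router at localhost:8888 and the quickstart wikipedia datasource, and posts to the documented /druid/v2/sql endpoint:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DruidHttpSqlExample
{
  public static void main(String[] args) throws Exception
  {
    // JSON request body for the SQL endpoint.
    String body = "{\"query\": \"SELECT channel, COUNT(*) AS edits "
        + "FROM wikipedia GROUP BY channel ORDER BY edits DESC LIMIT 5\"}";
    HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8888/druid/v2/sql"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body()); // JSON array of result rows by default
  }
}
```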

Documentation

See the latest documentation for the current official release. If you need information on a previous release, you can browse previous releases' documentation.

Make documentation and tutorial updates in /docs using Markdown and contribute them using a pull request.

Community

Visit the official project community page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.

  • Druid users can find help in the druid-user mailing list on Google Groups, and have more technical conversations in #troubleshooting on Slack.
  • Druid development discussions take place in the druid-dev mailing list (dev@druid.apache.org). Subscribe by emailing dev-subscribe@druid.apache.org. For live conversations, join the #dev channel on Slack.

Check out the official community page for details of how to join the community Slack channels.

Find articles written by community members and a calendar of upcoming events on the project site. Contribute your own events and articles by submitting a PR in the apache/druid-website-src repository.

Building from source

Please note that JDK 8 or JDK 11 is required to build Druid.

See the latest build guide for instructions on building Apache Druid from source.

Contributing

Please follow the community guidelines for contributing.

For instructions on setting up IntelliJ, see dev/intellij-setup.md.

License

Apache License, Version 2.0