Apache Druid: a high-performance real-time analytics database.
Latest commit 69aac6c8dd by Gian Merlino: Direct UTF-8 access for "in" filters. (#12517)
* Direct UTF-8 access for "in" filters.

Directly related:

1) InDimFilter: Store sorted Strings (in ValuesSet) plus sorted UTF-8
   ByteBuffers (in valuesUtf8). Use valuesUtf8 whenever possible. If
   necessary, the input set is copied into a ValuesSet. Much logic is
   simplified, because we always know what type the values set will be.
   I think that there won't even be an efficiency loss in most cases.
   InDimFilter is most frequently created by deserialization, and this
   patch updates the JsonCreator constructor to deserialize
   directly into a ValuesSet.

2) Add Utf8ValueSetIndex, which InDimFilter uses to avoid UTF-8 decodes
   during index lookups.

3) Add unsigned comparator to ByteBufferUtils and use it in
   GenericIndexed.BYTE_BUFFER_STRATEGY. This is important because UTF-8
   bytes can be compared as bytes if, and only if, the comparison
   is unsigned (see the sketch after this list).

4) Add specialization to GenericIndexed.singleThreaded().indexOf that
   avoids needless ByteBuffer allocations.

5) Clarify that objects returned by ColumnIndexSupplier.as are not
   thread-safe. DictionaryEncodedStringIndexSupplier now calls
   singleThreaded() on all relevant GenericIndexed objects, saving
   a ByteBuffer allocation per access.
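To make the unsigned-comparison point in (3) concrete, here is a small, self-contained sketch. It is not Druid's actual ByteBufferUtils or GenericIndexed code; it only demonstrates that UTF-8 byte order matches code-point order when, and only when, the bytes are compared as unsigned.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Comparator;

// Self-contained demo (not Druid code): UTF-8 bytes sort in code-point order
// only when compared as unsigned values. Java's byte type is signed, so a
// naive byte comparison puts multi-byte sequences (lead bytes 0xC2-0xF4,
// which are negative as signed bytes) before plain ASCII.
public class UnsignedUtf8ComparisonDemo
{
  // Unsigned lexicographic comparator over the remaining bytes of two buffers.
  static final Comparator<ByteBuffer> UNSIGNED_COMPARATOR = (a, b) -> {
    final int commonLength = Math.min(a.remaining(), b.remaining());
    for (int i = 0; i < commonLength; i++) {
      final int cmp = Integer.compare(
          Byte.toUnsignedInt(a.get(a.position() + i)),
          Byte.toUnsignedInt(b.get(b.position() + i))
      );
      if (cmp != 0) {
        return cmp;
      }
    }
    return Integer.compare(a.remaining(), b.remaining());
  };

  public static void main(String[] args)
  {
    final String ascii = "z";         // U+007A, UTF-8: 0x7A
    final String accented = "\u00e9"; // é, U+00E9, UTF-8: 0xC3 0xA9

    final ByteBuffer asciiUtf8 = ByteBuffer.wrap(ascii.getBytes(StandardCharsets.UTF_8));
    final ByteBuffer accentedUtf8 = ByteBuffer.wrap(accented.getBytes(StandardCharsets.UTF_8));

    // Code-point order and unsigned byte order agree: "z" sorts before "é".
    System.out.println(ascii.compareTo(accented) < 0);                            // true
    System.out.println(UNSIGNED_COMPARATOR.compare(asciiUtf8, accentedUtf8) < 0); // true

    // ByteBuffer.compareTo compares bytes as signed values (0xC3 is negative),
    // so it disagrees and would break a UTF-8 dictionary's sort order.
    System.out.println(asciiUtf8.compareTo(accentedUtf8) < 0);                    // false
  }
}
```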

Also:

1) Fix performance regression in LikeFilter: since #12315, it applied
   the suffix matcher to all values in range even for type MATCH_ALL.

2) Add ObjectStrategy.canCompare() method. This fixes LikeFilterBenchmark,
   which was broken due to calls to strategy.compare in
   GenericIndexed.fromIterable. (A hedged sketch of such a capability
   flag follows this list.)
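As a purely hypothetical illustration of the capability flag in (2) above (names and signatures are invented for illustration, not taken from Druid's actual ObjectStrategy), the idea is a default method that tells callers such as GenericIndexed.fromIterable whether compare() is meaningful:

```java
import java.util.Comparator;

// Hypothetical sketch only; this is not Druid's ObjectStrategy API. The idea:
// a strategy advertises whether its compare() is meaningful, so writers can
// skip comparator-based checks for types that have no useful ordering.
interface IllustrativeObjectStrategy<T> extends Comparator<T>
{
  byte[] toBytes(T value);

  T fromBytes(byte[] bytes, int numBytes);

  // Strategies without a real ordering override this to return false;
  // callers then avoid invoking compare() entirely.
  default boolean canCompare()
  {
    return true;
  }
}
```

A writer would then exercise its comparator-based code paths only when canCompare() returns true.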

* Add like-filter implementation tests.

* Add in-filter implementation tests.

* Add tests, fix issues.

* Fix style.

* Adjustments from review.
Committed on 2022-05-20 01:51:28 -07:00
| Path | Latest commit | Committed |
| --- | --- | --- |
| .github | Lock hadoop dependencies to 2.8.5 (#11583) | 2021-08-12 15:16:47 +05:30 |
| .idea | Use ExecutorService variables to assign ExecutorService Instances (#11373) | 2021-06-25 16:56:34 -07:00 |
| benchmarks | Direct UTF-8 access for "in" filters. (#12517) | 2022-05-20 01:51:28 -07:00 |
| cloud | add aws-java-sdk-sts to aws-common classpath (#12482) | 2022-05-03 12:25:51 -07:00 |
| codestyle | GroupBy: Reduce allocations by reusing entry and key holders. (#12474) | 2022-04-28 23:21:13 -07:00 |
| core | Direct UTF-8 access for "in" filters. (#12517) | 2022-05-20 01:51:28 -07:00 |
| dev | Add git hooks that can run multiple scripts (#12300) | 2022-03-09 07:16:47 +09:00 |
| distribution | Bump up the versions (#12480) | 2022-04-27 14:28:20 +05:30 |
| docs | SQL: Add is_active to sys.segments, update examples and docs. (#11550) | 2022-05-19 14:23:28 -07:00 |
| examples | Fix zulu8 set-up Dockerfile for hadoop and hadoop3 in hadoop ingestion tutorial (#12248) | 2022-04-11 20:28:09 +05:30 |
| extendedset | Bump up the versions (#12480) | 2022-04-27 14:28:20 +05:30 |
| extensions-contrib | Free ByteBuffers in tests and fix some bugs. (#12521) | 2022-05-19 07:42:29 -07:00 |
| extensions-core | Free ByteBuffers in tests and fix some bugs. (#12521) | 2022-05-19 07:42:29 -07:00 |
| helm/druid | Bump up the versions (#12480) | 2022-04-27 14:28:20 +05:30 |
| hll | Free ByteBuffers in tests and fix some bugs. (#12521) | 2022-05-19 07:42:29 -07:00 |
| hooks | Git hooks should fail on errors; pass args to git hooks (#12322) | 2022-03-10 09:07:50 +09:00 |
| indexing-hadoop | Add authentication call before cleaning up intermediate files in hadoop ingestions (#12030) | 2022-05-02 08:40:44 -05:00 |
| indexing-service | RemoteTaskRunner: Fix NPE in streamTaskReports. (#12006) | 2022-05-19 14:23:55 -07:00 |
| integration-tests | SQL: Add is_active to sys.segments, update examples and docs. (#11550) | 2022-05-19 14:23:28 -07:00 |
| licenses | Blueprint 4 (#12391) | 2022-04-04 10:34:22 -07:00 |
| processing | Direct UTF-8 access for "in" filters. (#12517) | 2022-05-20 01:51:28 -07:00 |
| publications | De-incubation cleanup in code, docs, packaging (#9108) | 2020-01-03 12:33:19 -05:00 |
| server | Free ByteBuffers in tests and fix some bugs. (#12521) | 2022-05-19 07:42:29 -07:00 |
| services | remake column indexes and query processing of filters (#12388) | 2022-05-11 11:57:08 +05:30 |
| sql | Direct UTF-8 access for "in" filters. (#12517) | 2022-05-20 01:51:28 -07:00 |
| web-console | Web console: fix go to segments not working (#12541) | 2022-05-19 14:34:03 -07:00 |
| website | SQL: Add is_active to sys.segments, update examples and docs. (#11550) | 2022-05-19 14:23:28 -07:00 |
| .asf.yaml | Add .asf.yaml. (#9083) | 2019-12-20 16:45:38 -08:00 |
| .backportrc.json | Add 0.18.0 to .backportrc.json to facilitate backport. (#9661) | 2020-04-11 13:49:04 -07:00 |
| .codecov.yml | Use Codecov (#8388) | 2019-08-28 08:49:30 -07:00 |
| .dockerignore | Add docker container for druid (#6896) | 2019-02-08 12:12:28 +00:00 |
| .gitignore | Refactor ResponseContext (#11828) | 2021-12-06 17:03:12 -08:00 |
| .lgtm.yml | Suppress LGTM warnings about stack trace exposure (#9631) | 2020-04-09 17:31:03 -07:00 |
| .travis.yml | Free ByteBuffers in tests and fix some bugs. (#12521) | 2022-05-19 07:42:29 -07:00 |
| CONTRIBUTING.md | Fix numbered list formatting in markdown. (#9664) | 2020-04-21 20:18:12 -07:00 |
| LABELS | Add plain text README.txt, use relative link from README.md to build.md (#7611) | 2019-05-09 21:29:26 -07:00 |
| LICENSE | support Aliyun OSS service as deep storage (#9898) | 2020-07-01 22:20:53 -07:00 |
| NOTICE | license.yaml fixes for code introduced related to AWS RDS token based password provider in PR #9518 (#10885) | 2021-03-10 12:59:25 -08:00 |
| README.md | Add JDK 11 (#12333) | 2022-03-16 15:03:04 -07:00 |
| README.template | De-incubation cleanup in code, docs, packaging (#9108) | 2020-01-03 12:33:19 -05:00 |
| check_test_suite.py | suppress false positive cve (#11699) | 2021-09-13 20:45:38 -07:00 |
| check_test_suite_test.py | suppress false positive cve (#11699) | 2021-09-13 20:45:38 -07:00 |
| licenses.yaml | upgrade core Apache Kafka dependencies to 3.2.0 (#12538) | 2022-05-19 09:04:52 -07:00 |
| owasp-dependency-check-suppressions.xml | CVE suppression (#12535) | 2022-05-19 11:21:48 +05:30 |
| pom.xml | upgrade core Apache Kafka dependencies to 3.2.0 (#12538) | 2022-05-19 09:04:52 -07:00 |
| upload.sh | Adding licenses and enable apache-rat-plugin. (#6215) | 2018-09-18 08:39:26 -07:00 |

README.md

(Badges: Slack, Build Status, Language grade: Java, Coverage Status, Docker, Helm)


Website | Documentation | Developer Mailing List | User Mailing List | Slack | Twitter | Download


Apache Druid

Druid is a high-performance real-time analytics database. Druid's main value add is to reduce time to insight and action.

Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.

Getting started

You can get started with Druid using our local or Docker quickstart.

Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in console (shown below).
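For instance, here is a hedged sketch of a SQL query issued over HTTP (the SQL endpoint is POST /druid/v2/sql). The Router address (localhost:8888) and the quickstart "wikipedia" datasource are assumptions, not something this README specifies; the example needs Java 11+ for java.net.http.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of Druid's HTTP SQL endpoint. Assumes a Router at
// localhost:8888 and a "wikipedia" datasource loaded via the quickstart.
public class DruidSqlOverHttp
{
  public static void main(String[] args) throws Exception
  {
    final String body =
        "{\"query\": \"SELECT channel, COUNT(*) AS edits"
        + " FROM wikipedia GROUP BY channel ORDER BY COUNT(*) DESC LIMIT 5\"}";

    final HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8888/druid/v2/sql"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    final HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

    // The default result format is a JSON array of row objects.
    System.out.println(response.body());
  }
}
```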

Load data

(Screenshot: the web console data loader, configuring Kafka ingestion.)

Load streaming and batch data using a point-and-click wizard that guides you through ingestion setup. Monitor one-off tasks and ingestion supervisors.

Manage the cluster

(Screenshot: the web console management view.)

Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location, all powered by SQL system tables that let you see the underlying query for each view.
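As an illustration, the system tables behind these views can also be queried directly. The sketch below uses JDBC with the Apache Calcite Avatica client driver; the Router address (localhost:8888) and the presence of the Avatica jar on the classpath are assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

// Hedged sketch: count segments per datasource via the sys.segments system
// table. Assumes a Router at localhost:8888 and the Apache Calcite Avatica
// client driver on the classpath; neither is specified by this README.
public class SysSegmentsOverJdbc
{
  public static void main(String[] args) throws Exception
  {
    final String url = "jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica/";

    try (Connection connection = DriverManager.getConnection(url, new Properties());
         Statement statement = connection.createStatement();
         ResultSet resultSet = statement.executeQuery(
             "SELECT datasource, COUNT(*) AS num_segments FROM sys.segments GROUP BY datasource")) {
      while (resultSet.next()) {
        System.out.println(resultSet.getString("datasource") + ": " + resultSet.getLong("num_segments"));
      }
    }
  }
}
```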

Issue queries

(Screenshot: the query workbench.)

Use the built-in query workbench to prototype Druid SQL and native queries, or connect one of the many tools that help you make the most of Druid.

Documentation

You can find the documentation for the latest Druid release on the project website.

If you would like to contribute documentation, please do so under /docs in this repository and submit a pull request.

Community

Community support is available on the druid-user mailing list, which is hosted at Google Groups.

Development discussions occur on dev@druid.apache.org, which you can subscribe to by emailing dev-subscribe@druid.apache.org.

Chat with Druid committers and users in real-time on the Apache Druid Slack channel. Please use this invitation link to join and invite others.

Building from source

Please note that JDK 8 or JDK 11 is required to build Druid.

For instructions on building Druid from source, see docs/development/build.md.

Contributing

Please follow the community guidelines for contributing.

For instructions on setting up IntelliJ, see dev/intellij-setup.md.

License

Apache License, Version 2.0