Website | Documentation | Developer Mailing List | User Mailing List | Slack | Twitter | Download


Apache Druid

Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.

Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases.

Getting started

You can get started with Druid using our quickstart.

Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in console (shown below).
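
For example, SQL queries can be issued by POSTing a JSON object with a "query" field to the /druid/v2/sql/ endpoint. A minimal sketch in Java, assuming a local quickstart deployment (router on localhost:8888) with the tutorial wikipedia datasource loaded; adjust the host, port, and query for your own setup:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class DruidSqlHttpExample
{
  public static void main(String[] args) throws Exception
  {
    // Druid SQL over HTTP: POST a JSON object with a "query" field to the SQL endpoint.
    // localhost:8888 assumes the quickstart router; adjust host/port for your deployment.
    String body = "{\"query\": \"SELECT COUNT(*) AS cnt FROM wikipedia\"}";

    HttpURLConnection conn =
        (HttpURLConnection) new URL("http://localhost:8888/druid/v2/sql/").openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }

    // The response body is a JSON array of result rows.
    try (InputStream in = conn.getInputStream();
         Scanner scanner = new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
      System.out.println(scanner.hasNext() ? scanner.next() : "");
    }
  }
}
```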

Load data

(Screenshot: the data loader configuring Kafka ingestion)

Load streaming and batch data using a point-and-click wizard that guides you through ingestion setup. Monitor one-off tasks and ingestion supervisors.

Manage the cluster

(Screenshot: the cluster management view)

Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL system tables, allowing you to see the underlying query for each view.
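
These same system tables are available to applications over Druid's SQL interfaces; for example, per-datasource segment counts can be pulled from sys.segments over JDBC. A minimal sketch, assuming a local quickstart router on localhost:8888 and the Avatica JDBC driver (org.apache.calcite.avatica:avatica-core) on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DruidSysTablesExample
{
  public static void main(String[] args) throws Exception
  {
    // Druid speaks JDBC through Apache Calcite Avatica; localhost:8888 assumes the quickstart router.
    String url = "jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica/";

    try (Connection connection = DriverManager.getConnection(url);
         Statement statement = connection.createStatement();
         // sys.segments is one of the SQL system tables that also power the console views.
         ResultSet rs = statement.executeQuery(
             "SELECT \"datasource\", COUNT(*) AS num_segments FROM sys.segments GROUP BY \"datasource\"")) {
      while (rs.next()) {
        System.out.println(rs.getString("datasource") + ": " + rs.getLong("num_segments"));
      }
    }
  }
}
```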

Issue queries

(Screenshot: the query view)

Use the built-in query workbench to prototype Druid SQL and native queries, or connect one of the many tools that help you make the most out of Druid.

Documentation

You can find the documentation for the latest Druid release on the project website.

If you would like to contribute documentation, please do so under /docs in this repository and submit a pull request.

Community

Community support is available on the druid-user mailing list, which is hosted at Google Groups.

Development discussions occur on dev@druid.apache.org, which you can subscribe to by emailing dev-subscribe@druid.apache.org.

Chat with Druid committers and users in real-time on the #druid channel in the Apache Slack team. Please use this invitation link to join the ASF Slack, and once joined, go into the #druid channel.

Building from source

Please note that JDK 8 is required to build Druid.

For instructions on building Druid from source, see docs/development/build.md.

Contributing

Please follow the community guidelines for contributing.

For instructions on setting up IntelliJ, see dev/intellij-setup.md.

License

Apache License, Version 2.0