Apache Druid: a high-performance, real-time analytics database.

# Druid

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments.

Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful computations.
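
To illustrate the kind of aggregate query Druid serves, the sketch below builds a native JSON timeseries query and posts it to a Broker node over HTTP. The data source name (`wikipedia`), the dimension/metric names, and the broker URL are hypothetical placeholders; the query shape follows Druid's native JSON query API.

```python
import json
from urllib import request  # stdlib HTTP client

# A minimal Druid native timeseries query. The dataSource, filter
# dimension/value, and metric names here are hypothetical examples.
query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",
    "granularity": "hour",
    "filter": {"type": "selector", "dimension": "country", "value": "US"},
    "aggregations": [
        {"type": "longSum", "name": "edits", "fieldName": "count"}
    ],
    "intervals": ["2016-06-01/2016-06-02"],
}

def post_query(broker_url, q):
    """POST a query JSON object to a Druid Broker and return the parsed results."""
    req = request.Request(
        broker_url + "/druid/v2/",
        data=json.dumps(q).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (assumes a Broker listening on the default quickstart port):
# results = post_query("http://localhost:8082", query)
```

The Broker fans the query out to the historical and real-time nodes that hold the relevant segments and merges their partial aggregates, which is what makes filtered roll-ups like the one above fast at scale.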

Druid can load both streaming and batch data and integrates with Samza, Kafka, Storm, Spark, and Hadoop.

## License

Apache License, Version 2.0

## More Information

More information about Druid can be found at http://www.druid.io.

## Documentation

You can find the documentation for the latest Druid release on the project website.

If you would like to contribute documentation, please do so under /docs/content in this repository and submit a pull request.

## Getting Started

You can get started with Druid with our quickstart.

## Reporting Issues

If you find any bugs, please file a GitHub issue.

## Community

Community support is available on the druid-user mailing list (druid-user@googlegroups.com).

Development discussions occur on the druid-development mailing list (druid-development@googlegroups.com).

We also have a couple of people hanging out on IRC in #druid-dev on irc.freenode.net.

## Contributing

Please follow the guidelines listed in CONTRIBUTING.md.