Apache Druid: a high-performance, real-time analytics database.



Druid

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments.

Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful computations.
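
For illustration, Druid queries are expressed as JSON. Below is a minimal sketch of a native timeseries query that combines a filter, an exact sum, and an approximate distinct-count aggregator; the datasource, dimension, and column names are hypothetical placeholders, not part of this repository:

```json
{
  "queryType": "timeseries",
  "dataSource": "edits",
  "granularity": "day",
  "intervals": ["2016-01-01/2016-02-01"],
  "filter": {
    "type": "selector",
    "dimension": "country",
    "value": "US"
  },
  "aggregations": [
    { "type": "longSum",     "name": "total_edits",  "fieldName": "edit_count" },
    { "type": "hyperUnique", "name": "unique_users", "fieldName": "user_unique" }
  ]
}
```

A query like this is typically POSTed to a Druid broker node, which fans it out across the cluster and merges the results.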

Druid can load both streaming and batch data and integrates with Samza, Kafka, Storm, and Hadoop.
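
As a rough sketch of batch loading, the following is what a Hadoop-based indexing task spec might look like when submitted to the indexing service; the datasource name, dimensions, metrics, and input path are hypothetical and would need to match your own data:

```json
{
  "type": "index_hadoop",
  "spec": {
    "dataSchema": {
      "dataSource": "edits",
      "parser": {
        "type": "hadoopyString",
        "parseSpec": {
          "format": "json",
          "timestampSpec": { "column": "timestamp", "format": "auto" },
          "dimensionsSpec": { "dimensions": ["country", "page", "user"] }
        }
      },
      "metricsSpec": [
        { "type": "count",   "name": "count" },
        { "type": "longSum", "name": "edit_count", "fieldName": "edit_count" }
      ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "NONE",
        "intervals": ["2016-01-01/2016-01-02"]
      }
    },
    "ioConfig": {
      "type": "hadoop",
      "inputSpec": { "type": "static", "paths": "/data/edits/2016-01-01.json" }
    }
  }
}
```

Streaming ingestion uses a similar dataSchema but consumes events from a source such as Kafka instead of reading static files.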

License

Apache License, Version 2.0

More Information

More information about Druid can be found at http://www.druid.io.

Documentation

You can find the documentation for the latest Druid release on the project website.

If you would like to contribute documentation, please do so under /docs/content in this repository and submit a pull request.

Tutorials

We have a series of tutorials to help you get started with Druid. If you are new to Druid, we suggest working through the first Druid tutorial.

Reporting Issues

If you find any bugs, please file a GitHub issue.

Community

Community support is available on the druid-user mailing list (druid-user@googlegroups.com).

Development discussions occur on the druid-development list (druid-development@googlegroups.com).

We also have a couple of people hanging out on IRC in #druid-dev on irc.freenode.net.