Apache Druid: a high-performance, real-time analytics database.
| Path | Last commit | Date |
| --- | --- | --- |
| .idea | Remove unused code and exception declarations (#5461) | 2018-03-16 22:11:12 +01:00 |
| api | Local indexing from RDBMS (#5441) | 2018-05-22 12:33:01 +09:00 |
| aws-common | Support enablePathStyleAccess, disableChunkedEncoding, and forceGlobalBucketAccessEnabled for aws client (#5702) | 2018-05-02 10:45:38 -07:00 |
| benchmarks | Refactor index merging, replace Rowboats with RowIterators and RowPointers (#5335) | 2018-04-27 17:34:32 -07:00 |
| ci | Add TeamCity instructions (#5379) | 2018-02-10 13:13:33 -08:00 |
| codestyle | Add GenericWhitespace checkstyle check (#5668) | 2018-04-24 01:09:14 +05:30 |
| common | VersionedIntervalTimeline: Optimize construction with heavily populated holders. (#5777) | 2018-05-16 09:16:59 -07:00 |
| distribution | Opentsdb emitter extension (#5380) | 2018-02-13 13:10:22 -08:00 |
| docs | fix freeSpacePercent in segmentCache.locations (#5765) | 2018-05-24 11:15:30 +09:00 |
| examples | fix freeSpacePercent in segmentCache.locations (#5765) | 2018-05-24 11:15:30 +09:00 |
| extendedset | Remove unused code and exception declarations (#5461) | 2018-03-16 22:11:12 +01:00 |
| extensions-contrib | support throw duplcate row during realtime ingestion in RealtimePlumber (#5693) | 2018-05-04 10:12:25 -07:00 |
| extensions-core | Local indexing from RDBMS (#5441) | 2018-05-22 12:33:01 +09:00 |
| hll | Remove unused code and exception declarations (#5461) | 2018-03-16 22:11:12 +01:00 |
| indexing-hadoop | Fix for when Hadoop dataSource inputSpec is specified multiple times. (#5790) | 2018-05-23 03:16:55 +05:30 |
| indexing-service | Local indexing from RDBMS (#5441) | 2018-05-22 12:33:01 +09:00 |
| integration-tests | Support enablePathStyleAccess, disableChunkedEncoding, and forceGlobalBucketAccessEnabled for aws client (#5702) | 2018-05-02 10:45:38 -07:00 |
| java-util | Simple cleanup for ThreadPoolTaskRunner and SetAndVerifyContextQueryRunner / Add ThreadPoolTaskRunnerTest (#5557) | 2018-05-15 22:53:11 +05:30 |
| processing | Simple cleanup for ThreadPoolTaskRunner and SetAndVerifyContextQueryRunner / Add ThreadPoolTaskRunnerTest (#5557) | 2018-05-15 22:53:11 +05:30 |
| publications | Changes to lambda architecture paper required for HICSS (#3382) | 2016-09-06 21:32:21 -07:00 |
| server | fix freeSpacePercent in segmentCache.locations (#5765) | 2018-05-24 11:15:30 +09:00 |
| services | Simple cleanup for ThreadPoolTaskRunner and SetAndVerifyContextQueryRunner / Add ThreadPoolTaskRunnerTest (#5557) | 2018-05-15 22:53:11 +05:30 |
| sql | Kerberos Spnego Authentication Router Issue (#5706) | 2018-05-05 20:33:51 -07:00 |
| .gitignore | git ignore dependency-reduced-pom.xml (#4711) | 2017-08-23 10:10:50 -07:00 |
| .travis.yml | Use the official aws-sdk instead of jet3t (#5382) | 2018-03-21 15:36:54 -07:00 |
| CONTRIBUTING.md | Replace dev list references in docs. (#5723) | 2018-04-30 11:25:45 -07:00 |
| DruidCorporateCLA.pdf | fix CLA email / mailing address | 2014-04-17 15:26:28 -07:00 |
| DruidIndividualCLA.pdf | fix CLA email / mailing address | 2014-04-17 15:26:28 -07:00 |
| INTELLIJ_SETUP.md | Prohibit and remove unused declarations in the processing module (#4930) | 2017-11-09 09:27:27 -08:00 |
| LICENSE | Clean up README and license | 2015-02-18 23:09:28 -08:00 |
| NOTICE | Extension points for authentication/authorization (#4271) | 2017-09-15 23:45:48 -07:00 |
| README.md | Replace dev list references in docs. (#5723) | 2018-04-30 11:25:45 -07:00 |
| druid_intellij_formatting.xml | Make formatting IntelliJ 2016 friendly (#2978) | 2016-05-18 12:42:21 -07:00 |
| eclipse.importorder | Merge pull request #2905 from javasoze/eclipse_formatting | 2016-04-29 18:42:03 -07:00 |
| eclipse_formatting.xml | Merge pull request #2905 from javasoze/eclipse_formatting | 2016-04-29 18:42:03 -07:00 |
| intellij-sdk-config.jpg | Prohibit and remove unused declarations in the processing module (#4930) | 2017-11-09 09:27:27 -08:00 |
| pom.xml | Move to the org.lz4 dependency (#5746) | 2018-05-07 08:16:45 -07:00 |
| upload.sh | upload.sh: Use awscli if s3cmd is not available. (#3114) | 2016-06-08 17:01:46 -07:00 |

README.md


Druid

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments.

Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful computations.
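As a quick illustration, the sketch below issues an aggregate query against Druid's SQL-over-HTTP endpoint. It assumes a broker running at localhost:8082 and a hypothetical datasource named wikipedia; adjust both for your own deployment.

```python
# Minimal sketch: run an aggregate SQL query against a Druid broker.
# "localhost:8082" and the "wikipedia" datasource are placeholders for
# illustration only.
import json
import urllib.request

SQL_ENDPOINT = "http://localhost:8082/druid/v2/sql/"

payload = json.dumps({
    "query": "SELECT channel, COUNT(*) AS edits "
             "FROM wikipedia "
             "GROUP BY channel ORDER BY edits DESC LIMIT 5"
}).encode("utf-8")

request = urllib.request.Request(
    SQL_ENDPOINT,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    # The broker returns a JSON array of result rows.
    for row in json.loads(response.read().decode("utf-8")):
        print(row)
```

Note that, depending on your Druid version, SQL may need to be enabled on the broker (druid.sql.enable); the same query can also be expressed as a native JSON query against the /druid/v2 endpoint.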

Druid can load both streaming and batch data and integrates with Samza, Kafka, Storm, Spark, and Hadoop.

License

Apache License, Version 2.0

More Information

More information about Druid can be found at http://www.druid.io.

Documentation

You can find the documentation for the latest Druid release on the project website.

If you would like to contribute documentation, please do so under /docs/content in this repository and submit a pull request.

Getting Started

You can get started with Druid by following our quickstart.

Reporting Issues

If you find any bugs, please file a GitHub issue.

Community

The Druid community is in the process of migrating to Apache by way of the Apache Incubator. Eventually, as we proceed along this path, our site will move from http://druid.io/ to https://druid.apache.org/.

Community support is available on the druid-user mailing list (druid-user@googlegroups.com), which is hosted at Google Groups.

Development discussions occur on dev@druid.apache.org, which you can subscribe to by emailing dev-subscribe@druid.apache.org.

We also have a couple of people hanging out on IRC in #druid-dev on irc.freenode.net.

Contributing

Please follow the guidelines in CONTRIBUTING.md in this repository.