README.md
Website | Documentation | Developer Mailing List | User Mailing List | Slack | Twitter | Download
Apache Druid
Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.
Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases.
Getting started
You can get started with Druid using our local or Docker quickstart.
Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in console (shown below).
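As a minimal sketch of the HTTP API, the snippet below POSTs a Druid SQL query to the broker's /druid/v2/sql endpoint. The localhost:8082 broker address and the wikipedia datasource are assumptions taken from the quickstart:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class DruidSqlOverHttp
{
  public static void main(String[] args) throws Exception
  {
    // POST a Druid SQL query as JSON to the broker's SQL endpoint.
    URL url = new URL("http://localhost:8082/druid/v2/sql");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);

    String body = "{\"query\":\"SELECT COUNT(*) AS cnt FROM wikipedia\"}";
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body.getBytes(StandardCharsets.UTF_8));
    }

    // The response is a JSON array of result rows, e.g. [{"cnt":...}].
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
```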
Load data
Load streaming and batch data using a point-and-click wizard that guides you through ingestion setup. Monitor one-off tasks and ingestion supervisors.
Manage the cluster
Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location, all powered by SQL system tables that let you see the underlying query for each view.
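As a rough sketch of what backs such a view (the console's actual queries may differ), a JDBC client can summarize segments from the sys.segments system table via the Avatica driver; the broker URL is again a quickstart assumption:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DruidSysTables
{
  public static void main(String[] args) throws Exception
  {
    // Druid SQL over JDBC uses the Apache Calcite Avatica remote driver;
    // the Avatica client jar must be on the classpath.
    String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
    String sql = "SELECT \"datasource\", COUNT(*) AS num_segments, SUM(\"size\") AS total_bytes "
                 + "FROM sys.segments GROUP BY 1";
    try (Connection connection = DriverManager.getConnection(url);
         Statement statement = connection.createStatement();
         ResultSet rs = statement.executeQuery(sql)) {
      while (rs.next()) {
        System.out.printf("%s: %d segments, %d bytes%n",
            rs.getString("datasource"), rs.getLong("num_segments"), rs.getLong("total_bytes"));
      }
    }
  }
}
```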
Issue queries
Use the built-in query workbench to prototype Druid SQL and native queries, or connect one of the many tools that help you make the most out of Druid.
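To give a flavor of the two query languages, here is roughly the same aggregation written both ways, as a sketch; the wikipedia datasource and the date interval are quickstart assumptions. SQL queries go to /druid/v2/sql on the broker, while native queries are JSON objects POSTed to /druid/v2:

```java
// Druid SQL form of a daily row count (sketch; quickstart wikipedia data assumed).
String druidSql =
    "SELECT FLOOR(__time TO DAY) AS \"day\", COUNT(*) AS cnt\n"
    + "FROM wikipedia\n"
    + "WHERE __time >= TIMESTAMP '2016-06-27' AND __time < TIMESTAMP '2016-06-28'\n"
    + "GROUP BY 1";

// Roughly equivalent native timeseries query (a JSON object, shown here as a Java string).
String nativeQuery =
    "{\n"
    + "  \"queryType\": \"timeseries\",\n"
    + "  \"dataSource\": \"wikipedia\",\n"
    + "  \"granularity\": \"day\",\n"
    + "  \"intervals\": [\"2016-06-27/2016-06-28\"],\n"
    + "  \"aggregations\": [{\"type\": \"count\", \"name\": \"cnt\"}]\n"
    + "}";
```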
Documentation
You can find the documentation for the latest Druid release on the project website.
If you would like to contribute documentation, please do so under /docs in this repository and submit a pull request.
Community
Community support is available on the druid-user mailing list, which is hosted at Google Groups.
Development discussions occur on dev@druid.apache.org, which you can subscribe to by emailing dev-subscribe@druid.apache.org.
Chat with Druid committers and users in real time on the #druid channel in the Apache Slack team. Please use this invitation link to join the ASF Slack and, once joined, go into the #druid channel.
Building from source
Please note that JDK 8 is required to build Druid.
For instructions on building Druid from source, see docs/development/build.md.
Contributing
Please follow the community guidelines for contributing.
For instructions on setting up IntelliJ, see dev/intellij-setup.md.