This PR fixes the first and last vector aggregators and improves their readability. The following changes are introduced:
- Folding is broken in the vectorized versions. We check the time before checking whether the object is folded. If the numeric aggregator is passed any other object type for some reason (such as a String), the aggregator treats it as folded even though it shouldn't. Such objects should instead be converted to the desired type and aggregated properly (see the sketch after this list).
- The aggregators must use generics properly. This minimizes the ClassCastException issues that can occur with mixed segment types. The string first/last aggregators are unified with the numeric versions as well.
- The aggregators must aggregate null values (https://github.com/apache/druid/blob/master/processing/src/main/java/org/apache/druid/query/aggregation/first/StringFirstLastUtils.java#L55-L56). An aggregator should ignore only pairs with time == null, not pairs with value == null. Currently, time nullity is ignored when trying to vectorize the data.
- The string versions are initialized with DateTimes.MIN, which equals Long.MIN_VALUE / 2. This can cause incorrect results if the user supplies a custom time column. NOTE: this issue is still present, because fixing it would require a larger refactor across all of the versions.
- Results may differ from what users expect because the code flow has changed (for example, the direction of the for loops). However, this only changes which results are returned, not the contract set by the first/last aggregators: if multiple values have the same timestamp, any of them can get picked.
- If the column is non-existent, users might expect the timestamp to change from DateTime.MAX to Long.MAX, because the code incorrectly used DateTime.MAX to initialize the aggregator; with a custom timestamp column, however, this might not be the case. The SQL layer may prohibit arbitrary Long values, since they require a cast via the timestamp function that can fail, but AFAICT native queries have no such limitation.
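As a rough illustration of the folding and null-handling fixes described above, here is a minimal Java sketch. It assumes a simplified (time, value) pair type; TimedValue and FirstAggregatorSketch are hypothetical names for illustration only, not the actual Druid classes, which work with SerializablePair and column selectors.

```java
/**
 * Minimal sketch of the corrected fold logic for a "first" aggregation.
 * Hypothetical types; not Druid's real aggregator API.
 */
final class TimedValue<T> {
  final Long time;  // a null time means the pair cannot participate
  final T value;    // a null value is legal and must still be aggregated

  TimedValue(Long time, T value) {
    this.time = time;
    this.value = value;
  }
}

final class FirstAggregatorSketch<T> {
  // Initialize with Long.MAX_VALUE rather than a DateTime-derived bound,
  // so custom time columns outside the DateTime range still compare correctly.
  private long firstTime = Long.MAX_VALUE;
  private T firstValue = null;

  /**
   * Decide whether the input is a folded (time, value) pair *before*
   * looking at any time, so that a stray object (e.g. a String) is never
   * misinterpreted as folded; non-pair inputs are aggregated as raw values.
   */
  @SuppressWarnings("unchecked")
  void fold(Object input, long rowTime) {
    if (input instanceof TimedValue) {
      TimedValue<T> pair = (TimedValue<T>) input;
      // Skip only pairs whose time is null; a null value is still kept.
      if (pair.time != null && pair.time < firstTime) {
        firstTime = pair.time;
        firstValue = pair.value;
      }
    } else {
      aggregate(rowTime, (T) input);
    }
  }

  void aggregate(long time, T value) {
    if (time < firstTime) {
      firstTime = time;
      firstValue = value;
    }
  }
}
```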
Apache Druid
Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.
Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.
Getting started
You can get started with Druid with our local or Docker quickstart.
Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in web console (shown below).
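For example, here is a minimal JDBC sketch using the Avatica driver that Druid's SQL endpoint speaks. The broker address localhost:8082 and the wikipedia datasource are placeholder values taken from the quickstart; you would substitute your own.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DruidJdbcExample {
  public static void main(String[] args) throws Exception {
    // Druid serves SQL over JDBC via Apache Calcite Avatica; the Avatica
    // client JAR must be on the classpath. Replace localhost:8082 with
    // your broker's host and port.
    String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         // "wikipedia" is a placeholder datasource from the quickstart.
         ResultSet rs = stmt.executeQuery(
             "SELECT channel, COUNT(*) AS cnt FROM wikipedia "
                 + "GROUP BY channel ORDER BY cnt DESC LIMIT 5")) {
      while (rs.next()) {
        System.out.println(rs.getString("channel") + ": " + rs.getLong("cnt"));
      }
    }
  }
}
```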
Load data
Load streaming and batch data using a point-and-click wizard to guide you through ingestion setup. Monitor one-off tasks and ingestion supervisors.
Manage the cluster
Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL systems tables, allowing you to see the underlying query for each view.
Issue queries
Use the built-in query workbench to prototype DruidSQL and native queries or connect one of the many tools that help you make the most out of Druid.
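The same kind of query can also be issued programmatically against the broker's SQL endpoint. A sketch using Java 11's built-in HttpClient, assuming a broker at the placeholder address localhost:8082:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DruidSqlHttpExample {
  public static void main(String[] args) throws Exception {
    // POST a Druid SQL query as JSON to the broker's SQL endpoint.
    // localhost:8082 and the "wikipedia" datasource are placeholders.
    String body = "{\"query\": \"SELECT COUNT(*) AS cnt FROM wikipedia\"}";
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8082/druid/v2/sql/"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body()); // JSON array of result rows
  }
}
```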
Documentation
See the latest documentation for the current official release. If you need information on a previous release, you can browse the documentation for previous releases.
Make documentation and tutorial updates in /docs using Markdown or extended Markdown (MDX). Then, open a pull request.
To build the site locally, you need Node 16.14 or higher. Install Docusaurus 2 with npm install or yarn install in the website directory, then run npm start or yarn start to launch a local build of the docs.
If you're looking to update non-doc pages like Use Cases, those files are in the druid-website-src repo.
Community
Visit the official project community page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.
- Druid users can find help in the druid-user mailing list on Google Groups, and have more technical conversations in #troubleshooting on Slack.
- Druid development discussions take place in the druid-dev mailing list (dev@druid.apache.org). Subscribe by emailing dev-subscribe@druid.apache.org. For live conversations, join the #dev channel on Slack.
Check out the official community page for details of how to join the community Slack channels.
Find articles written by community members and a calendar of upcoming events on the project site. Contribute your own events and articles by submitting a PR in the apache/druid-website-src repository.
Building from source
Please note that JDK 8 or JDK 11 is required to build Druid.
See the latest build guide for instructions on building Apache Druid from source.
Contributing
Please follow the community guidelines for contributing.
For instructions on setting up IntelliJ, see dev/intellij-setup.md.