Commit Graph

11953 Commits

Author SHA1 Message Date
William Hyun a1c4eab522
Update ORC to 1.7.6 (#12928) 2022-08-23 01:09:38 -07:00
Clint Wylie 0c56b22a39
Update .spelling (#12940) 2022-08-22 18:47:40 -07:00
Clint Wylie 289e43281e
stricter behavior for parse_json, add try_parse_json, remove to_json (#12920) 2022-08-22 18:41:07 -07:00
Petar Petrov 6fec1d4c95
Add useNativeQueryExplain in sql query context documentation (#12924) (#12934)
Co-authored-by: Petar Petrov <petar.petrov@system73.com>
2022-08-22 16:31:15 +05:30
AmatyaAvadhanula 379df5f103
Kinesis docs and logs improvements (#12886)
Going ahead with the merge; CI is failing only because of a code-coverage check triggered by the changed log line.
2022-08-22 14:49:42 +05:30
Rohan Garg a879d91a20
Remove misleading logging on router for JDBC queries (#12925) 2022-08-22 11:58:51 +05:30
Rohan Garg 3c129f6728
Add sql planning time metric (#12923) 2022-08-22 11:09:44 +05:30
Clint Wylie f8097ccfaa
basic docs for nested column query functions (#12922)
* basic docs for nested column query functions
2022-08-19 17:12:19 -07:00
Clint Wylie 69fe1f04e5
document virtualColumns in native query documentation, fix some redirects (#12917)
* document virtualColumns in native query documentation, fix some redirects

* after all that, forgot to run spellcheck locally

* review stuff
2022-08-18 20:49:23 -07:00
Paul Rogers eb902375a2
Light refactor of the heavily refactored statement classes (#12909)
Reflects lessons learned from working with consumers of
the new code.
2022-08-19 02:31:06 +05:30
Clint Wylie 7fb1153bba
add virtual columns to search query cache key (#12907)
* add virtual columns to search query cache key
2022-08-17 20:26:01 -07:00
Ian Roberts 770358dc34
Update tls-support.md (#12916)
Fixing " lists all possible values for the configs belong" in TLS section
2022-08-18 09:46:30 +08:00
imply-cheddar 536415b948
Stop leaking Avro objects from parser (#12828)
The Avro parsing code leaks some "object" representations.
We need to convert them into Maps/Lists so that other code
can understand and work with them; a sketch of that conversion
follows this entry. Previously, these objects were handled with
.toString(), but that is not a good contract for working with objects.
2022-08-18 03:16:20 +05:30
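
A hedged sketch of the Map/List conversion described in #12828 above. The class and method names here are hypothetical illustrations, not the actual Druid parser code; it shows one way to recursively normalize Avro container objects into plain Java collections.

    // Illustrative only: recursively convert Avro objects into Maps/Lists.
    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.util.Utf8;

    public class AvroNormalizer
    {
      static Object normalize(Object value)
      {
        if (value instanceof GenericRecord) {
          GenericRecord record = (GenericRecord) value;
          Map<String, Object> map = new LinkedHashMap<>();
          record.getSchema().getFields().forEach(
              field -> map.put(field.name(), normalize(record.get(field.name())))
          );
          return map;
        } else if (value instanceof Map) {
          Map<Object, Object> map = new LinkedHashMap<>();
          ((Map<?, ?>) value).forEach((k, v) -> map.put(normalize(k), normalize(v)));
          return map;
        } else if (value instanceof List) {
          List<Object> list = new ArrayList<>();
          ((List<?>) value).forEach(v -> list.add(normalize(v)));
          return list;
        } else if (value instanceof Utf8) {
          return value.toString(); // Avro strings arrive as Utf8, not String
        } else if (value instanceof ByteBuffer) {
          return ((ByteBuffer) value).array(); // extract raw bytes
        }
        return value;
      }
    }
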
Xavier Léauté 752e42a312
fix running integration tests on macos aarch64 (#12913)
* add osx-aarch_64 netty-transport-native-kqueue native dependency
* align docker-java dependency versions using bom and update to 3.2.13
2022-08-17 18:03:24 +02:00
Karan Kumar a3a9c5f409
Fixing overlord issuing too many redirects (#12908)
* Fixing race in overlord redirects where the node was redirecting to itself

* Fixing test cases
2022-08-17 18:27:39 +05:30
dependabot[bot] f70f7b4b89
Bump postgresql from 42.3.3 to 42.4.1 (#12871)
* Bump postgresql from 42.3.3 to 42.4.1

Bumps [postgresql](https://github.com/pgjdbc/pgjdbc) from 42.3.3 to 42.4.1.
- [Release notes](https://github.com/pgjdbc/pgjdbc/releases)
- [Changelog](https://github.com/pgjdbc/pgjdbc/blob/master/CHANGELOG.md)
- [Commits](https://github.com/pgjdbc/pgjdbc/compare/REL42.3.3...REL42.4.1)

---
updated-dependencies:
- dependency-name: org.postgresql:postgresql
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* update licenses.yaml

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Xavier Léauté <xvrl@apache.org>
2022-08-16 23:25:39 +02:00
Gian Merlino d3015d0f8e
DruidQuery: Return a copy from withScanSignatureIfNeeded, as promised. (#12906)
The method wasn't following its contract, leading to pollution of the
overall planner context, when really we just want to create a new
context for a specific query.
2022-08-16 13:23:14 -07:00
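
The contract at issue is the common with-style copy idiom: a withX method must return a new instance rather than mutating the receiver. A minimal generic sketch, with hypothetical names rather than the actual DruidQuery code:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    final class PlannerContextSketch
    {
      private final Map<String, Object> params;

      PlannerContextSketch(Map<String, Object> params)
      {
        this.params = Collections.unmodifiableMap(new HashMap<>(params));
      }

      // Returns a copy with one extra entry; never mutates this instance.
      PlannerContextSketch withParam(String key, Object value)
      {
        Map<String, Object> copy = new HashMap<>(params);
        copy.put(key, value);
        return new PlannerContextSketch(copy);
      }
    }
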
Peter Marshall f665a0c077
Docs - Add links to Basic Tuning guide in process pages (#12741)
Added link to the relevant section of the Basic Cluster Tuning page on each process page.

This is in order to improve access to this information, which is not easy to find through search or nav.
2022-08-16 18:42:44 +05:30
Abhishek Agarwal adbebc174a
Fix flaky tests in SeekableStreamSupervisorStateTest (#12875)
* Fix flaky test in SeekableStreamSupervisorStateTest

* Fix for flaky security IT Test

* fix tests

* retry queries if there is some flakiness
2022-08-16 18:38:03 +05:30
Adarsh Sanjeev 3755f30bc4
Add export parameters for Java 11 (#12859)
* Add exports for Java 11 parameters

* Add parameters for data sketches
2022-08-16 13:05:45 +05:30
Lucas Capistrant ec8bdeb9f6
Document missing property - druid.announcer.skipSegmentAnnouncementOnZk (#12891)
* document missing config related to segment announcement

* improve wording

* improve wording

* update docs
2022-08-16 12:32:56 +05:30
Clint Wylie e42e025296
inject @Json ObjectMapper for to_json_string and parse_json expressions (#12900)
* inject @Json ObjectMapper for to_json_string and parse_json expressions

* fix npe

* better
2022-08-15 08:44:24 -07:00
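
For context, Druid distinguishes its JSON and Smile ObjectMapper bindings with Guice binding annotations; a minimal sketch of the injection pattern this commit relies on (the holder class is hypothetical; the @Json annotation lives in org.apache.druid.guice.annotations):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.google.inject.Inject;
    import org.apache.druid.guice.annotations.Json;

    public class JsonExprSupport
    {
      private final ObjectMapper jsonMapper;

      // Guice injects the @Json-annotated ObjectMapper, not an arbitrary one.
      @Inject
      public JsonExprSupport(@Json ObjectMapper jsonMapper)
      {
        this.jsonMapper = jsonMapper;
      }
    }
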
Gian Merlino 28836dfa71
Fix race in TaskQueue.notifyStatus. (#12901)
* Fix race in TaskQueue.notifyStatus.

It was possible for manageInternal to relaunch a task while it was
being cleaned up, due to a race that happens when notifyStatus is
called to clean up a successful task:

1) In a critical section, notifyStatus removes the task from "tasks".
2) Outside a critical section, notifyStatus calls taskRunner.shutdown
   to let the task runner know it can clear out its data structures.
3) In a critical section, syncFromStorage adds the task back to "tasks",
   because it is still present in metadata storage.
4) In a critical section, manageInternalCritical notices that the task
   is in "tasks" and is not running in the taskRunner, so it launches
   it again.
5) In a (different) critical section, notifyStatus updates the metadata
   store to set the task status to SUCCESS.
6) The task continues running even though it should not be.

The possibility for this race was introduced in #12099, which shrunk
the critical section in notifyStatus. Prior to that patch, a single
critical section encompassed (1), (2), and (5), so the ordering above
was not possible.

This patch does the following:

1) Fixes the race by adding a recentlyCompletedTasks set that prevents
   the main management loop from doing anything with tasks that are
   currently being cleaned up.
2) Switches the order of the critical sections in notifyStatus, so
   metadata store updates happen first. This is useful in case of
   server failures: it ensures that if the Overlord fails in the midst
   of notifyStatus, then completed-task statuses are still available in
   ZK or on MMs for the next Overlord. (Those are cleaned up by
   taskRunner.shutdown, which formerly ran first.) This isn't related
   to the race described above, but is fixed opportunistically as part
   of the same patch.
3) Changes the "tasks" list to a map. Many operations require retrieval
   or removal of individual tasks; those are now O(1) instead of O(N)
   in the number of running tasks.
4) Changes various log messages to use task ID instead of full task
   payload, to make the logs more readable.

* Fix format string.

* Update comment.
2022-08-14 23:34:36 -07:00
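
A condensed sketch of the guard described in (1) and the reordering in (2), using hypothetical names; the real TaskQueue is considerably more involved:

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class TaskQueueSketch
    {
      interface Task { String getId(); }

      private final Map<String, Task> tasks = new ConcurrentHashMap<>();
      private final Set<String> recentlyCompletedTasks = ConcurrentHashMap.newKeySet();

      void notifyStatus(Task task)
      {
        recentlyCompletedTasks.add(task.getId());
        try {
          updateMetadataStore(task);   // (2) metadata store updates happen first
          shutdownInTaskRunner(task);  // then the task runner clears its state
          tasks.remove(task.getId());
        } finally {
          recentlyCompletedTasks.remove(task.getId());
        }
      }

      void manageInternal()
      {
        for (Task task : tasks.values()) {
          // (1) skip tasks that are mid-cleanup, so they cannot be relaunched
          if (recentlyCompletedTasks.contains(task.getId())) {
            continue;
          }
          launchIfNeeded(task);
        }
      }

      private void updateMetadataStore(Task task) { /* persist SUCCESS status */ }
      private void shutdownInTaskRunner(Task task) { /* clean up runner state */ }
      private void launchIfNeeded(Task task) { /* start task if not running */ }
    }
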
Gian Merlino 6c5a43106a
SQL: Morph QueryMakerFactory into SqlEngine. (#12897)
* SQL: Morph QueryMakerFactory into SqlEngine.

Groundwork for introducing an indexing-service-task-based SQL engine
under the umbrella of #12262. Also includes some other changes related
to improving error behavior.

Main changes:

1) Elevate the QueryMakerFactory interface (an extension point that allows
   customization of how queries are made) into SqlEngine. SQL engines
   can influence planner behavior through EngineFeatures, and can fully
   control the mechanics of query execution using QueryMakers.

2) Remove the server-wide QueryMakerFactory choice, in favor of the choice
   being made by the SQL entrypoint. The indexing-service-task-based
   SQL engine would be associated with its own entrypoint, like
   /druid/v2/sql/task.

Other changes:

1) Adjust DruidPlanner to try either DRUID or BINDABLE convention based
   on analysis of the planned rels; never try both. In particular, we
   no longer try BINDABLE when DRUID fails. This simplifies the logic
   and improves error messages.

2) Adjust error message "Cannot build plan for query" to omit the SQL
   query text. Useful because the text can be quite long, which makes it
   easy to miss the text about the problem.

3) Add a feature to block context parameters used internally by the SQL
   planner from being supplied by end users.

4) Add a feature to enable adding row signature to the context for
   Scan queries. This is useful in building the task-based engine.

5) Add saffron.properties file that turns off sets and graphviz dumps
   in "cannot plan" errors. Significantly reduces log spam on the Broker.

* Fixes from CI.

* Changes from review.

* Can vectorize, now that join-to-filter is on by default.

* Checkstyle! And variable renames!

* Remove throws from test.
2022-08-14 23:31:19 -07:00
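
A rough sketch of the shape of the new extension point. Method names and signatures here are assumptions for illustration, not the exact Druid interface:

    public interface SqlEngineSketch
    {
      String name();

      // Lets the planner ask whether this engine supports a capability.
      boolean featureAvailable(EngineFeature feature);

      // Fully controls how a validated, planned query is executed.
      QueryMaker buildQueryMaker(PlannerContext plannerContext);

      enum EngineFeature { CAN_SELECT, CAN_INSERT, SCAN_NEEDS_SIGNATURE }

      interface QueryMaker {}

      interface PlannerContext {}
    }
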
Gian Merlino 846345669d
Error handling improvements for frame channels. (#12895)
* Error handling improvements for frame channels.

Two changes:

1) Send errors down in-memory channels (BlockingQueueFrameChannel) on
   failure. This ensures that in situations where a chain of processors
   has been set up on a single machine, all processors see the root
   cause error. In particular, this means the final processor in the
   chain reports the root cause error, which ensures that someone with
   a handle to the final processor will get the proper error.

2) Update FrameFileHttpResponseHandler to expect that the final fetch,
   rather than being simply empty, is also empty with a special header.
   This ensures that the handler is able to tell the difference between
   an empty fetch due to being at EOF, and an empty fetch due to a
   truncated HTTP response (after the 200 OK and headers are sent down,
   but before any content appears).

* Fix tests, imports.

* Checkstyle!
2022-08-15 11:31:55 +05:30
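
A toy model of change (1): send the error itself down the in-memory channel so every downstream reader observes the root cause. Hypothetical API, not the actual BlockingQueueFrameChannel:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ErrorPropagatingChannel<T>
    {
      // Each element is either a value or the error that terminated the stream.
      private static final class Entry<V>
      {
        final V value;
        final Throwable error;
        Entry(V value, Throwable error) { this.value = value; this.error = error; }
      }

      private final BlockingQueue<Entry<T>> queue = new LinkedBlockingQueue<>();

      public void write(T value)
      {
        queue.add(new Entry<>(value, null));
      }

      // On failure, the error is written into the channel like a value.
      public void fail(Throwable t)
      {
        queue.add(new Entry<>(null, t));
      }

      public T read() throws InterruptedException
      {
        Entry<T> entry = queue.take();
        if (entry.error != null) {
          throw new RuntimeException("Upstream failure", entry.error);
        }
        return entry.value;
      }
    }
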
Rohan Garg b26ab678b9
Do not create filters on right-side table columns during join-to-filter conversion (#12899) 2022-08-14 08:35:23 -07:00
Paul Rogers 41712b7a3a
Refactor SqlLifecycle into statement classes (#12845)
* Refactor SqlLifecycle into statement classes

Create direct & prepared statements
Remove redundant exceptions from tests
Tidy up Calcite query tests
Make PlannerConfig more testable

* Build fixes

* Added builder to SqlQueryPlus

* Moved Calcite's system properties to saffron.properties

* Build fix

* Resolve merge conflict

* Fix IntelliJ inspection issue

* Revisions from reviews

Backed out a revision to Calcite tests that didn't work out as planned

* Build fix

* Fixed spelling errors

* Fixed failed test

Prepare now enforces security; before it did not.

* Rebase and fix IntelliJ inspections issue

* Clean up exception handling

* Fix handling of JDBC auth errors

* Build fix

* More tweaks to security messages
2022-08-14 00:44:08 -07:00
vimil-saju 4d65c08576
changes to run examples when CDPATH environment variable is set where cd command returns current dir… (#12877)
* changes to run examples on macos where cd command returns current directory

* Update examples/bin/run-druid

Co-authored-by: Frank Chen <frankchen@apache.org>

* merging

* sending output of cd command to /dev/null

Co-authored-by: Frank Chen <frankchen@apache.org>
2022-08-14 13:15:24 +08:00
Clint Wylie f4e0909e92
fix bug with json_object expression not fully unwrapping inputs (#12893) 2022-08-13 21:15:19 -07:00
Rohan Garg 5394838030
Enable conversion of join to filter by default (#12868) 2022-08-13 20:37:43 +05:30
Rohan Garg af700bba0c
Fix hasBuiltInFilters for joins (#12894) 2022-08-13 16:26:24 +05:30
Gian Merlino 836430019a
Add EXTERNAL resource type. (#12896)
This is used to control access to the EXTERN function, which allows
reading external data in SQL. The EXTERN function is not usable in
production as of today, but it is used by the task-based SQL engine
contemplated in #12262.
2022-08-12 10:57:30 -07:00
Lucas Capistrant 3a3271eddc
Introduce defaultOnDiskStorage config for Group By (#12833)
* Introduce defaultOnDiskStorage config for groupBy

* add debug log to groupby query config

* Apply config change suggestion from review

* Remove accidental new lines

* update default value of new default disk storage config

* update debug log to have more descriptive text

* Make maxOnDiskStorage and defaultOnDiskStorage HumanReadableBytes

* improve test coverage

* Provide default implementation to new default method on advice of reviewer
2022-08-12 09:40:21 -07:00
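
For illustration, the resulting knobs would be set in runtime.properties along these lines. The property names come from the commit; the values are examples only:

    # Hard cap on per-query group-by spill-to-disk (a HumanReadableBytes value).
    druid.query.groupBy.maxOnDiskStorage=1GiB
    # Default applied when a query does not set maxOnDiskStorage in its context.
    druid.query.groupBy.defaultOnDiskStorage=512MiB
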
Karan Kumar 2f2d8ded5a
Introducing Storage connector Interface (#12874)
In the current Druid code base, we have the DataSegmentPusher interface, which allows us to push segments to the appropriate deep storage without the extension having to worry about the semantics of pushing to deep storage.

While working on #12262, part of whose code will ship as an extension, I realized that we do not have an interface that allows us to do basic "write, get, delete, deleteAll" operations on the appropriate deep storage without, say, pulling the s3-storage-extension dependency into the custom extension.

Hence the idea of StorageConnector was born: the storage connector sits inside the Druid core so all extensions have access to it.

Each deep storage implementation, e.g. S3 or GCS, will implement this interface.
With some Jackson magic, we bind the correct deep storage implementation at runtime using a type variable. A sketch of the interface follows this entry.
2022-08-12 16:11:49 +05:30
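
A sketch of what such an interface plus the Jackson type binding could look like. This is an assumed shape for illustration; the real StorageConnector may differ:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import com.fasterxml.jackson.annotation.JsonTypeInfo;

    // The "type" field in the JSON config selects the implementation at runtime.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
    public interface StorageConnector
    {
      OutputStream write(String path) throws IOException;

      InputStream read(String path) throws IOException;

      void delete(String path) throws IOException;

      void deleteAll(String path) throws IOException;
    }

    // Each deep storage extension then registers a named implementation, e.g.:
    //   objectMapper.registerSubtypes(new NamedType(S3StorageConnector.class, "s3"));
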
Gian Merlino 38af5f7b57
NettyHttpClient: Cleaner state transitions for handlers. (#12889)
The Netty pipeline set up by the client can deliver multiple exceptions,
and can deliver chunks even after delivering exceptions. This makes it
difficult to implement HttpResponseHandlers. Looking at existing handler
implementations, I do not see attempts to handle this case, so it's also
a potential source of bugs.

This patch updates the client to track whether an exception was
encountered, and if so, to not call any additional methods on the handler
after exceptionCaught. It also harmonizes exception handling between
exceptionCaught and channelDisconnected.
2022-08-11 09:31:37 -07:00
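
The guard is conceptually simple; a minimal sketch with a hypothetical handler wrapper (Netty delivers events for one connection on a single thread, so a plain flag suffices here):

    public class GuardedResponseHandler<T>
    {
      public interface Handler<V>
      {
        void handleChunk(V chunk);
        void handleException(Throwable t);
      }

      private final Handler<T> delegate;
      private boolean exceptionCaught = false;

      public GuardedResponseHandler(Handler<T> delegate)
      {
        this.delegate = delegate;
      }

      public void onChunk(T chunk)
      {
        // After an exception has been delivered, drop further pipeline events.
        if (!exceptionCaught) {
          delegate.handleChunk(chunk);
        }
      }

      // Called for both exceptionCaught and channelDisconnected-style failures.
      public void onException(Throwable t)
      {
        if (!exceptionCaught) {
          exceptionCaught = true;
          delegate.handleException(t);
        }
      }
    }
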
Abhishek Agarwal b4985ccd5e
Suppress CVEs - Avatica, Postgres (#12884) 2022-08-10 14:18:19 +05:30
Paul Rogers 4706a4c572
Docker build for the revised ITs (#12707)
* Docker build for the revised ITs

* Fix POM versions

* Update comments from review suggestions
2022-08-10 14:17:33 +05:30
Paul Rogers 8ad8582dc8
Refactor DruidSchema & DruidTable (#12835)
Refactors the DruidSchema and DruidTable abstractions to prepare for the Druid Catalog.

As we add the catalog, we’ll want to combine physical segment metadata information with “hints” provided by the catalog. This is best done if we tidy up the existing code to more clearly separate responsibilities.

This PR is purely a refactoring move: no functionality changed. There is no difference to user functionality or external APIs. Functionality changes will come later as we add the catalog itself.

DruidSchema
In the present code, DruidSchema does three tasks:

Holds the segment metadata cache
Interfaces with an external schema manager
Acts as a schema to Calcite
This PR splits those responsibilities.

DruidSchema holds the Calcite schema for the druid namespace, combining information from the segment metadata cache, from the external schema manager and (later) from the catalog.
SegmentMetadataCache holds the segment metadata cache formerly in DruidSchema.
DruidTable
The present DruidTable class is a bit of a kitchen sink: it holds all the various kinds of tables which Druid supports, and uses if-statements to handle behavior that differs between types. Yet, any given DruidTable will handle only one such table type. To more clearly model the actual table types, we split DruidTable into several classes:

DruidTable becomes an abstract base class to hold Druid-specific methods.
DatasourceTable represents a datasource.
ExternalTable represents an external table, such as from EXTERN or (later) from the catalog.
InlineTable represents the internal case in which we attach data directly to a table.
LookupTable represents Druid’s lookup table mechanism.
The new subclasses are more focused: they can be selective about the data they hold and the various predicates since they represent just one table type. This will be important as the catalog information will differ depending on table type and the new structure makes adding that logic cleaner.

DatasourceMetadata
Previously, the DruidSchema segment cache would work with DruidTable objects. With the catalog, we need a layer between the segment metadata and the table as presented to Calcite. To fix this, the new SegmentMetadataCache class uses a new DatasourceMetadata class as its cache entry to hold only the “physical” segment metadata information: it is up to the DruidTable to combine this with the catalog information in a later PR.

More Efficient Table Resolution
Calcite provides a convenient base class for schema objects: AbstractSchema. However, this class is a bit too convenient: all we have to do is provide a map of tables and Calcite does the rest. This means that, to resolve any single datasource, say, foo, we need to cache segment metadata, external schema information, and catalog information for all tables. Just so Calcite can do a map lookup.

There is nothing special about AbstractSchema. We can handle table lookups ourselves. The new AbstractTableSchema does this. In fact, all the rest of Calcite wants is to resolve individual tables by name, and, for commands we don’t use, to provide a list of table names.

DruidSchema now extends AbstractTableSchema. SegmentMetadataCache resolves individual tables (and provides table names.)

DruidSchemaManager
DruidSchemaManager provides a way to specify table schemas externally. In this sense, it is similar to the catalog, but only for datasources. It originally followed the AbstractSchema pattern: it provides a map of tables. This PR provides new optional methods for the table lookup and table names operations. The default implementations work the same way that AbstractSchema works: we get the entire map and pick out the information we need. Extensions that use this API should be revised to support the individual operations instead. Druid code no longer calls the original getTables() method.

The PR has one breaking change: since the DruidSchemaManager map is read-only to the rest of Druid, we should return a Map, not a ConcurrentMap.
2022-08-10 10:24:04 +05:30
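
The table-resolution change boils down to resolving one table at a time instead of materializing a full table map for Calcite. Roughly, with assumed signatures:

    import java.util.Set;

    // Illustrative: resolve a single table on demand rather than exposing a map
    // of every datasource, external schema entry, and catalog entry at once.
    public abstract class AbstractTableSchemaSketch
    {
      public interface Table {}

      // Called by Calcite to resolve one table by name.
      public abstract Table getTable(String name);

      // Only needed for the commands that list table names.
      public abstract Set<String> getTableNames();
    }
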
Clint Wylie ee41cc770f
fix issue with SQL sum aggregator due to bug with DruidTypeSystem and AggregateRemoveRule (#12880)
* fix issue with SQL sum aggregator due to bug with DruidTypeSystem and AggregateRemoveRule

* fix style

* add comment about using custom sum function
2022-08-09 15:17:45 -07:00
David Palmer 2855fb6ff8
Change Kafka Lookup Extractor to not register consumer group (#12842)
* change kafka lookups module to not commit offsets

The current behaviour of the Kafka lookup extractor is to not commit
offsets by assigning a unique ID to the consumer group and setting
auto.offset.reset to earliest. This does the job but also pollutes the
Kafka broker with a bunch of "ghost" consumer groups that will never again be
used.

To fix this, we now set enable.auto.commit to false, which prevents the
ghost consumer groups from being created in the first place.

* update docs to include new enable.auto.commit setting behaviour

* update kafka-lookup-extractor documentation

Provide some additional detail on functionality and configuration.
Hopefully this will make it clearer how the extractor works for
developers who aren't so familiar with Kafka.

* add comments better explaining the logic of the code

* add spelling exceptions for kafka lookup docs
2022-08-09 16:14:22 +05:30
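
The consumer-side change amounts to a property flip. A sketch of the relevant configuration using the standard Kafka client API (the broker address and deserializers are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class LookupConsumerExample
    {
      public static KafkaConsumer<String, String> buildConsumer(String brokers)
      {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        // Start from the beginning of the topic, as before.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // Never commit offsets, so no consumer group state lands on the broker.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        // Use assign() rather than subscribe() so no group coordination occurs.
        return new KafkaConsumer<>(props);
      }
    }
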
絵空事スピリット ebe783dbdc
Correct minor format issue (#12882) 2022-08-09 18:15:41 +08:00
Hamish Ball abd7a9748d
Remove kafka lookup records when a record is tombstoned (#12819)
* remove kafka lookup records from factory when record tombstoned

* update kafka lookup docs to include tombstone behaviour

* change test wait time down to 10ms

Co-authored-by: David Palmer <david.palmer@adscale.co.nz>
2022-08-09 10:42:51 +05:30
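
In Kafka, a tombstone is a record with a non-null key and a null value. The handling described above reduces to something like this hypothetical map-backed store:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import org.apache.kafka.clients.consumer.ConsumerRecord;

    public class LookupMapUpdater
    {
      private final ConcurrentMap<String, String> lookupMap = new ConcurrentHashMap<>();

      public void apply(ConsumerRecord<String, String> record)
      {
        if (record.value() == null) {
          // Tombstone: the key was deleted upstream, so drop it from the lookup.
          lookupMap.remove(record.key());
        } else {
          lookupMap.put(record.key(), record.value());
        }
      }
    }
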
Clint Wylie a7e89de610
fix JsonNode leaking from JSON flattener (#12873)
* fix JsonNode leaking from JSON flattener

* adjustments
2022-08-08 19:51:57 -07:00
David Hergenroeder 533c39f35a
Fix rollup docs bullet formatting (#12876) 2022-08-09 10:10:07 +08:00
Suneet Saldanha 267b32c2e2
Set druid.processing.fifo to true by default (#12571) 2022-08-08 10:18:24 -07:00
Gian Merlino 01d555e47b
Adjust "in" filter null behavior to match "selector". (#12863)
* Adjust "in" filter null behavior to match "selector".

Now, both of them match numeric nulls if constructed with a "null" value.

This is consistent as far as native execution goes, but doesn't match
the behavior of SQL = and IN. So, to address that, this patch also
updates the docs to clarify that the native filters do match nulls.

This patch also updates the SQL docs to describe how Boolean logic is
handled in addition to how NULL values are handled.

Fixes #12856.

* Fix test.
2022-08-08 09:08:36 -07:00
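
Conceptually, the "in" filter now derives a null-matching flag from its value set, the same way "selector" does. A simplified model, not Druid's actual InDimFilter:

    import java.util.HashSet;
    import java.util.Set;

    public class InFilterModel
    {
      private final Set<String> values;
      private final boolean matchesNull;

      // Expects a null-tolerant set implementation, e.g. HashSet.
      public InFilterModel(Set<String> valueSet)
      {
        this.matchesNull = valueSet.contains(null);
        Set<String> nonNull = new HashSet<>(valueSet);
        nonNull.remove(null);
        this.values = nonNull;
      }

      // Null row values match only when the set was constructed with a null.
      public boolean matches(String rowValue)
      {
        return rowValue == null ? matchesNull : values.contains(rowValue);
      }
    }
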
Karan Kumar 607b0b9310
Adding withName implementation to AggregatorFactory (#12862)
* Adding agg factory with name impl

* Adding test cases

* Fixing test case

* Fixing test case

* Updated java docs.
2022-08-08 18:31:56 +05:30
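
The new method follows the with-style copy pattern. A simplified model with a count-style factory as the example, not Druid's actual classes:

    public abstract class AggregatorFactorySketch
    {
      public abstract String getName();

      // Returns an equivalent factory whose output column uses newName.
      public abstract AggregatorFactorySketch withName(String newName);

      static final class Count extends AggregatorFactorySketch
      {
        private final String name;

        Count(String name) { this.name = name; }

        @Override
        public String getName() { return name; }

        @Override
        public AggregatorFactorySketch withName(String newName)
        {
          return new Count(newName); // copy, never mutate
        }
      }
    }
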
Jonathan Wei 2045a1345c
Fix NPE when applying a transform that outputs to __time (#12870) 2022-08-07 19:21:47 +05:30
Herb Brewer 9f8982a9a6
fix(druid-indexing): failed to get shardSpec for interval issue (#12573) 2022-08-05 17:57:36 -07:00
AmatyaAvadhanula d294404924
Kinesis ingestion with empty shards (#12792)
Kinesis ingestion requires all shards to have at least one record at the required position in Druid.
Even if this is satisfied initially, resharding the stream can lead to empty intermediate shards. A significant delay in writing to newly created shards was also problematic.

Kinesis shard sequence numbers are big integers. Introduce two more custom sequence tokens, UNREAD_TRIM_HORIZON and UNREAD_LATEST, to indicate that a shard has not been read from and that it needs to be read from the start or the end, respectively.
These values can be used to avoid the need to read at least one record to obtain a sequence number for ingesting a newly discovered shard.

If a record cannot be obtained immediately, use a marker to obtain the relevant shardIterator and use this shardIterator to obtain a valid sequence number. As long as a valid sequence number is not obtained, continue storing the token as the offset.

These tokens (UNREAD_TRIM_HORIZON and UNREAD_LATEST) are logically ordered to be earlier than any valid sequence number.

However, the ordering requires a few subtle changes to the existing mechanism for record sequence validation:

The sequence availability check ensures that the current offset is before the earliest available sequence in the shard. However, the current token being an UNREAD token indicates that any sequence number in the shard is valid (despite the ordering).

Kinesis sequence numbers are inclusive, i.e., if current sequence == end sequence, there are more records left to read.
However, the equality check is exclusive when dealing with UNREAD tokens.
2022-08-05 22:38:58 +05:30
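
A simplified model of the token ordering described above, assuming tokens are carried as strings and real sequence numbers compare as big integers:

    import java.math.BigInteger;

    public class KinesisSequenceTokenSketch
    {
      static final String UNREAD_TRIM_HORIZON = "UNREAD_TRIM_HORIZON";
      static final String UNREAD_LATEST = "UNREAD_LATEST";

      static boolean isUnread(String token)
      {
        return UNREAD_TRIM_HORIZON.equals(token) || UNREAD_LATEST.equals(token);
      }

      // UNREAD tokens are logically earlier than any valid sequence number.
      static int compare(String a, String b)
      {
        boolean aUnread = isUnread(a);
        boolean bUnread = isUnread(b);
        if (aUnread || bUnread) {
          return Boolean.compare(!aUnread, !bUnread);
        }
        return new BigInteger(a).compareTo(new BigInteger(b));
      }
    }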