Commit Graph

12340 Commits

Author SHA1 Message Date
Abhishek Agarwal 17936e2920
Add an option to enable HSTS in druid services (#13489)
* Add an option to enable HSTS

* Fix code and add docs

* Deduplicate headers

* unused import

* Fix spelling
2023-01-10 22:31:51 +05:30
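As a rough illustration of what enabling HSTS means in practice (the class and registration here are hypothetical and are not the property or code added by #13489), a servlet filter that attaches the `Strict-Transport-Security` response header might look like this:

```java
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

/**
 * Minimal sketch: adds the HSTS header to secure responses so browsers
 * refuse to downgrade subsequent requests to plain HTTP.
 */
public class HstsFilter implements Filter
{
  @Override
  public void init(FilterConfig config) {}

  @Override
  public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
      throws IOException, ServletException
  {
    if (request.isSecure() && response instanceof HttpServletResponse) {
      ((HttpServletResponse) response).setHeader(
          "Strict-Transport-Security",
          "max-age=31536000; includeSubDomains"
      );
    }
    chain.doFilter(request, response);
  }

  @Override
  public void destroy() {}
}
```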
Dongjoon Hyun 2503095296
Publish SBOM artifacts (#13648) 2023-01-10 16:08:10 +05:30
abhagraw 74a76c74b1
Updating dependency check version (#13649) 2023-01-10 14:43:19 +05:30
Abhishek Radhakrishnan 41fdf6eafb
Quote and escape literals in JDBC lookup to allow reserved identifiers. (#13632)
* Quote and escape table, key and column names.

* fix typo.

* More select statements.

* Derby lookup tests create quoted identifiers so it's compatible.

* Use StringUtils.replace() utility.

* quote the filter string.

* Squish double-quote usage into a single function.

* Add parameterized test with reserved identifiers.

* few changes.
2023-01-10 12:11:54 +05:30
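The quoting technique in #13632 above boils down to wrapping identifiers in the dialect's quote character and doubling any embedded quotes. A minimal sketch of the idea, assuming ANSI-style double-quoted identifiers (this helper is illustrative, not the actual Druid code):

```java
/**
 * Sketch of quoting a SQL identifier so reserved words and special
 * characters (including embedded quotes) cannot break the statement.
 */
public final class IdentifierQuoting
{
  private IdentifierQuoting() {}

  /** Wraps the identifier in double quotes and doubles any embedded quotes. */
  public static String quote(String identifier)
  {
    return "\"" + identifier.replace("\"", "\"\"") + "\"";
  }

  public static void main(String[] args)
  {
    // e.g. a table named with a reserved word, and a column name containing a quote
    System.out.println(quote("select"));        // "select"
    System.out.println(quote("we\"ird name"));  // "we""ird name"
  }
}
```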
Maytas Monsereenusorn 62a105ee65
Add context to HadoopIngestionSpec (#13624)
* add context to HadoopIngestionSpec

* fix alert
2023-01-09 14:37:02 -10:00
Victoria Lim a800dae87a
doc: List Protobuf as a supported format (#13640) 2023-01-06 15:09:37 -08:00
317brian 6bbf4266b2
docs: documentation for unnest datasource (#13479)
Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
2023-01-06 11:41:11 -08:00
imply-cheddar f1821a7c18
Add Sort Operator for Window Functions (#13619)
* Addition of NaiveSortMaker and Default implementation

Add the NaiveSortMaker, which makes a sorter
object, along with a default implementation of
the interface.

This also allows us to plan multiple different window 
definitions on the same query.
2023-01-06 00:27:18 -08:00
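To make the shape of the change concrete, a hypothetical sorter-maker interface (the names and methods below are illustrative; the real NaiveSortMaker in #13619 will differ) could look roughly like this: a factory hands out a sorter that buffers rows and yields them in order once all input has arrived.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Hypothetical sketch of a "naive" sorter: buffer everything, sort at the end. */
interface SortMaker<T>
{
  Sorter<T> make(Comparator<T> ordering);

  interface Sorter<T>
  {
    void add(T row);

    List<T> drainSorted();
  }
}

class NaiveSortMakerSketch<T> implements SortMaker<T>
{
  @Override
  public Sorter<T> make(Comparator<T> ordering)
  {
    return new Sorter<T>()
    {
      private final List<T> buffer = new ArrayList<>();

      @Override
      public void add(T row)
      {
        buffer.add(row);
      }

      @Override
      public List<T> drainSorted()
      {
        buffer.sort(ordering);  // naive: sort only once all rows are buffered
        return buffer;
      }
    };
  }
}
```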
Vadim Ogievetsky 4ee4d99b8d
better error reporting (#13636) 2023-01-05 20:00:33 -08:00
Vadim Ogievetsky fdc8aa2833
better show totals when grouping (#13631) 2023-01-05 18:06:32 -08:00
imply-cheddar a8ecc48ffe
Validate response headers and fix exception logging (#13609)
* Validate response headers and fix exception logging

One class of QueryException was throwing away its
cause, making it really hard to determine what's
going wrong when something fails in the SQL
planner specifically. Fix that, and adjust tests
to do more validation of response headers as well.

We allow 404s and 307s to be returned even without
authorization having been validated, but other responses
get converted to 403s.
2023-01-05 14:15:15 -08:00
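The cause-dropping problem described above is the generic anti-pattern of wrapping an exception without chaining the original. A minimal sketch of the pattern and the fix (hypothetical class, not the actual QueryException code):

```java
/** Sketch: keep the cause when re-wrapping, so stack traces stay diagnosable. */
public class PlanningException extends RuntimeException
{
  // Anti-pattern: the cause (and its stack trace) is discarded.
  public static PlanningException lossy(Exception e)
  {
    return new PlanningException(e.getMessage(), null);
  }

  // Fix: chain the cause so callers and logs can see the real failure.
  public static PlanningException chained(Exception e)
  {
    return new PlanningException("Planning failed: " + e.getMessage(), e);
  }

  private PlanningException(String message, Throwable cause)
  {
    super(message, cause);
  }
}
```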
Varachit W 7a7874a952
Update docker-compose to use druid 24.0.1 (#13623) 2023-01-04 18:46:40 +05:30
abhagraw 365474ff1d
New IT Framework - InputSource and InputFormat Tests (#13597)
* New IT Framework - InputSource and InputFormat Tests

* Fixing checkstyle errors

* Updating InputSource setup

* Updating queries to use druid DB

* Making metadata setup queries to be idempotent

* Restore intellij files
2023-01-04 10:40:05 +05:30
Kashif Faraz 200c547d02
Include info about Hadoop 3 artifacts in release guide (#13594)
* Update release guide for hadoop3

* Update instructions to verify checksum and signature

* Update email lists
2023-01-03 13:53:53 +05:30
Kashif Faraz 36e6765596
Fix flaky test (#13603) 2023-01-03 13:52:05 +05:30
imply-cheddar 313d937236
Switch operators to a push-style API (#13600)
* Switch operators to a push-style API

This API generates nice stack-traces of processing
for Operators.
2022-12-22 22:01:55 -08:00
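A push-style operator API, in the abstract, means the producer drives the pipeline by pushing rows into a downstream receiver, which keeps every processing stage on the call stack and therefore visible in stack traces. A hypothetical sketch of that style (these interfaces are illustrative and are not the Operator API from #13600):

```java
import java.util.function.Predicate;

/** Hypothetical push-style pipeline: each stage pushes rows downstream. */
interface Receiver<T>
{
  /** Returns false when the downstream no longer wants rows. */
  boolean push(T row);

  void completed();
}

interface PushOperator<T>
{
  /** Runs the operator, pushing every produced row into the receiver. */
  void run(Receiver<T> receiver);
}

/** Example stage: filters rows before pushing them further downstream. */
class FilterOperator<T> implements PushOperator<T>
{
  private final PushOperator<T> upstream;
  private final Predicate<T> predicate;

  FilterOperator(PushOperator<T> upstream, Predicate<T> predicate)
  {
    this.upstream = upstream;
    this.predicate = predicate;
  }

  @Override
  public void run(Receiver<T> receiver)
  {
    upstream.run(new Receiver<T>()
    {
      @Override
      public boolean push(T row)
      {
        // Only matching rows are pushed on; because the filter stays on the
        // stack, failures downstream show this stage in the trace.
        return predicate.test(row) ? receiver.push(row) : true;
      }

      @Override
      public void completed()
      {
        receiver.completed();
      }
    });
  }
}
```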
Vadim Ogievetsky 8773d619a2
Web console: tidy up stage UI (#13615)
* show the right info

* sort indicator

* nicer marker

* move error icon
2022-12-22 12:52:16 -08:00
Kashif Faraz 78ae0b7533
Upgrade to netty 4.1.86.Final to address CVEs (#13604)
This commit addresses the following CVEs:
- CVE-2021-43797
- CVE-2022-41881
2022-12-23 01:44:01 +05:30
AmatyaAvadhanula af05cfa78c
Fix shutdown in httpRemote task runner (#13558)
* Fix shutdown in httpRemote task runner

* Add UT
2022-12-22 14:50:04 +05:30
Kashif Faraz 0d97e658b2
Docs: Update quickstart instructions (#13611)
Changes:
- Remove specification of a Druid version in the quickstart, because the previous step
instructs downloading the latest version anyway.
- Mention usage of the memory parameter in the quickstart
2022-12-22 11:51:08 +05:30
imply-cheddar 7b92b85168
Unify DummyRequest with MockHttpServletRequest (#13602)
We had two different classes both creating fake
instances of an HttpServletRequest; this change makes
it so that we only have one, in a common location
2022-12-21 20:15:08 -08:00
Clint Wylie fd63e5a514
fix issue with jdbc and query metrics (#13608)
* fix issue with metrics emitting and jdbc results by getting yielder from query processing thread

* more better
2022-12-21 19:32:53 -08:00
Peter Stöckli df55768535
Add CodeQL workflow (#13477)
* workflower: Add CodeQL workflow

* add modified CodeQL build config
2022-12-21 09:24:39 +05:30
Jason Koch 6c44dd8175
perf: core/TextReader for faster json ingestion (#13545)
* perf: provide a custom utf8 specific buffered line iterator (benchmark)

Benchmark                         Mode  Cnt     Score     Error  Units
JsonLineReaderBenchmark.baseline  avgt   15  3459.871 ± 106.175  us/op

* perf: provide a custom utf8 specific buffered line iterator

Benchmark                         Mode  Cnt     Score    Error  Units
JsonLineReaderBenchmark.baseline  avgt   15  3022.053 ± 51.286  us/op

* perf: provide a custom utf8 specific buffered line iterator (more tests)

* perf: provide a custom utf8 specific buffered line iterator (pr feedback)

Ensure field visibility is as limited as possible

Null check for buffer in constructor

* perf: provide a custom utf8 specific buffered line iterator (pr feedback)

Remove additional 'finished' variable.

* perf: provide a custom utf8 specific buffered line iterator (more tests and bugfix)
2022-12-19 23:12:37 -08:00
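The optimization above replaces a general-purpose reader with a line iterator that works directly on UTF-8 bytes: buffer raw bytes, split on '\n', and decode each line once. A simplified sketch of that idea follows; it is not the Druid implementation and skips details such as '\r\n' handling and buffer reuse.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

/**
 * Simplified sketch of a UTF-8 line iterator: scan raw bytes for '\n' and
 * decode each line exactly once instead of going through a Reader.
 */
public class Utf8LineIterator
{
  private final InputStream in;
  private final byte[] buffer = new byte[8192];
  private int position = 0;
  private int limit = 0;
  private boolean eof = false;

  public Utf8LineIterator(InputStream in)
  {
    this.in = in;
  }

  /** Returns the next line (without the trailing '\n'), or null at end of stream. */
  public String nextLine() throws IOException
  {
    ByteArrayOutputStream line = new ByteArrayOutputStream();
    while (true) {
      if (position == limit) {
        if (eof) {
          return line.size() == 0 ? null : new String(line.toByteArray(), StandardCharsets.UTF_8);
        }
        limit = in.read(buffer);
        position = 0;
        if (limit < 0) {
          eof = true;
          limit = 0;
          continue;
        }
      }
      int start = position;
      while (position < limit && buffer[position] != '\n') {
        position++;
      }
      line.write(buffer, start, position - start);
      if (position < limit) {
        position++;  // skip the '\n'
        return new String(line.toByteArray(), StandardCharsets.UTF_8);
      }
    }
  }
}
```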
Kashif Faraz c1e2656644
Fix scope of dependencies in protobuf-extensions pom (#13593) 2022-12-19 13:56:55 +05:30
imply-cheddar 0efd0879a8
Unify the handling of HTTP between SQL and Native (#13564)
* Unify the handling of HTTP between SQL and Native

The SqlResource and QueryResource have been
using independent logic for things like error
handling and response context stuff.  This
became abundantly clear and painful during a
change I was making for Window Functions, so
I unified them into using the same code for
walking the response and serializing it.

Things are still not perfectly unified (it would
be the absolute best if the SqlResource just
took SQL, planned it and then delegated the
query run entirely to the QueryResource), but
this refactor doesn't take that fully on.

The new code leverages async query processing
from our Jetty container; the different
interaction model with the Resource means that
a lot of tests had to be adjusted to align with
the async query model. The semantics of the
tests remain the same with one exception: the
SqlResource used to not log requests that failed
authorization checks; now it does.
2022-12-19 00:25:33 -08:00
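The async query processing mentioned above follows the standard Servlet 3.0 pattern: the container thread starts an AsyncContext and returns immediately, and a worker thread writes the response and completes the context later. A minimal sketch of that pattern (assuming the servlet is registered with async support enabled; this is not the Druid resource code):

```java
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Minimal async servlet: the container thread is released while the query runs. */
public class AsyncQueryServlet extends HttpServlet
{
  private final ExecutorService queryExecutor = Executors.newFixedThreadPool(4);

  @Override
  protected void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException
  {
    final AsyncContext asyncContext = req.startAsync(req, resp);
    queryExecutor.submit(() -> {
      try {
        // Run the (potentially slow) query and stream results on a worker thread.
        HttpServletResponse asyncResp = (HttpServletResponse) asyncContext.getResponse();
        asyncResp.setContentType("application/json");
        asyncResp.getWriter().write("[]");  // placeholder result
      } catch (IOException e) {
        // In real code: log and translate into an error response.
      } finally {
        asyncContext.complete();  // hands the request back to the container
      }
    });
  }
}
```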
Vadim Ogievetsky 07597c687d
Docs: Remove large data file (#13595) 2022-12-19 13:14:22 +05:30
Gian Merlino ee890965f4
LocalInputSource: Serialize File paths without forcing resolution. (#13534)
* LocalInputSource: Serialize File paths without forcing resolution.

Fixes #13359.

* Add one more javadoc.
2022-12-19 11:47:36 +05:30
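The gist of the fix above is that serializing a `File` should preserve the path string as given rather than resolving it against the working directory or the filesystem. A hedged Jackson sketch of that distinction (the serializer below is hypothetical and is not the Druid code):

```java
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import java.io.File;
import java.io.IOException;

/**
 * Hypothetical serializer: writes file.getPath() verbatim instead of
 * file.getCanonicalPath(), which would hit the filesystem and rewrite
 * relative paths and symlinks at serialization time.
 */
public class NonResolvingFileSerializer extends JsonSerializer<File>
{
  @Override
  public void serialize(File value, JsonGenerator gen, SerializerProvider serializers)
      throws IOException
  {
    gen.writeString(value.getPath());            // "data/wikipedia.json" stays relative
    // gen.writeString(value.getCanonicalPath()); // would force resolution
  }
}
```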
Victoria Lim 09d8b16447
Document shouldFinalize for sketches that have the parameter (#13524)
Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
2022-12-17 10:48:06 -08:00
Kashif Faraz e34e56295f
Suppress CVE-2022-1278, CVE-2022-2048, CVE-2022-3509, CVE-2022-40152 (#13590) 2022-12-17 20:09:52 +05:30
Vadim Ogievetsky e23abc710a
Web console: default max workers to cluster capacity and simplify live reports (#13577)
* step

* better capacity

* start with capacity

* more compressed stats display

* better rule editor

* fix SQL data loader also

* update snapshots

* new line

* better formatting
2022-12-16 15:13:32 -08:00
Vadim Ogievetsky 639decdf2e
fix preview dropping out of MSQ mode (#13586) 2022-12-16 15:13:07 -08:00
317brian d9c27d6102
docs: add index page and related stuff for jupyter tutorials (#13342) 2022-12-16 13:33:50 -08:00
Kashif Faraz 1cc9bc9af9
Suppress CVE-2022-45685 and CVE-2022-45693 from jettison-1.3 (#13585) 2022-12-16 22:56:30 +05:30
Rishabh Singh f42722e627
Set monotonically increasing worker capacity in start-druid-main (#13581)
This commit updates the task memory allocation logic.
- min task count is 2 and max task count is the number of CPUs on the machine
- task count increases with the total task memory
- task memory increases from 512m to 2g
2022-12-16 15:34:30 +05:30
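Roughly, the heuristic described above clamps the task count between 2 and the CPU count and grows it with the memory set aside for tasks. An illustrative sketch of that arithmetic (the constants mirror the description above; the actual start-druid logic may differ):

```java
/** Illustrative sketch of a monotonically increasing task-capacity heuristic. */
public class TaskCapacitySketch
{
  private static final long MIN_TASK_MEMORY_BYTES = 512L * 1024 * 1024;   // 512m
  private static final long MAX_TASK_MEMORY_BYTES = 2048L * 1024 * 1024;  // 2g

  /** More memory first buys more tasks (up to the CPU count), then bigger tasks. */
  public static int taskCount(long totalTaskMemoryBytes, int cpuCount)
  {
    int byMemory = (int) (totalTaskMemoryBytes / MIN_TASK_MEMORY_BYTES);
    return Math.max(2, Math.min(cpuCount, byMemory));
  }

  public static long memoryPerTask(long totalTaskMemoryBytes, int cpuCount)
  {
    long perTask = totalTaskMemoryBytes / taskCount(totalTaskMemoryBytes, cpuCount);
    return Math.max(MIN_TASK_MEMORY_BYTES, Math.min(MAX_TASK_MEMORY_BYTES, perTask));
  }

  public static void main(String[] args)
  {
    // e.g. 8 GiB reserved for tasks on a 16-core machine
    long total = 8L * 1024 * 1024 * 1024;
    System.out.println(taskCount(total, 16));                 // 16
    System.out.println(memoryPerTask(total, 16) / (1 << 20)); // 512 (MiB)
  }
}
```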
Clint Wylie d9e5245ff0
allow string dimension indexer to handle byte[] as base64 strings (#13573)
This PR expands `StringDimensionIndexer` to handle conversion of `byte[]` to base64 encoded strings, rather than the current behavior of calling java `toString`. 

This issue was uncovered by a regression of sorts introduced by #13519, which updated the protobuf extension to directly convert stuff to java types, resulting in `bytes` typed values being converted as `byte[]` instead of a base64 string which the previous JSON based conversion created. While outputting `byte[]` is more consistent with other input formats, and preferable when the bytes can be consumed directly (such as complex types serde), when fed to a `StringDimensionIndexer`, it resulted in an ugly java `toString` because `processRowValsToUnsortedEncodedKeyComponent` is fed the output of `row.getRaw(..)`. Converting `byte[]` to a base64 string within `StringDimensionIndexer` is consistent with the behavior of calling `row.getDimension(..)` which does do this coercion (and why many tests on binary types appeared to be doing the expected thing).

I added some protobuf `bytes` tests, but they don't really hit the new `StringDimensionIndexer` behavior because they operate on the `InputRow` directly and call `getDimension` to validate stuff. The parser-based version still uses the old conversion mechanisms, so when not using a flattener it incorrectly calls `toString` on the `ByteString`. I have encoded this behavior in the test for now; if we either update the parser to use the new flattener or just remove parsers, we can remove this test stuff.
2022-12-16 14:50:17 +05:30
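The core of the fix above is the difference between Java's default `toString()` on a byte array and an explicit base64 encoding. A small self-contained sketch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ByteArrayDimensionExample
{
  public static void main(String[] args)
  {
    byte[] bytes = "druid".getBytes(StandardCharsets.UTF_8);

    // What a plain toString() produces: a type tag plus identity hash, e.g. "[B@1b6d3586"
    System.out.println(bytes.toString());

    // What a string dimension should store instead: a stable base64 representation
    System.out.println(Base64.getEncoder().encodeToString(bytes));  // "ZHJ1aWQ="
  }
}
```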
Clint Wylie 9ae7a36ccd
improve nested column storage format for broader compatibility (#13568)
* bump nested column format version
changes:
* nested field files are now named by their position in field paths list, rather than directly by the path itself. this fixes issues with valid json properties with commas and newlines breaking the csv file meta.smoosh
* update StructuredDataProcessor to deal in NestedPathPart to be consistent with other abstract path handling rather than building JQ syntax strings directly
* add v3 format segment and test
2022-12-15 15:39:26 -08:00
Gian Merlino 7f3c117e3a
SQL: Improve docs around casts. (#13466)
Main change: clarify that the "default value" for casts only applies if
druid.generic.useDefaultValueForNull = true.

Secondary change: adjust a bunch of wording from future to present tense.
2022-12-15 15:01:40 -08:00
317brian 668d1fad6b
docs: notebook only for API tutorial (#13345)
* docs: notebook for API tutorial

* Apply suggestions from code review

Co-authored-by: Charles Smith <techdocsmith@gmail.com>

* address the other comments

* typo

* add commentary to outputs

* address feedback from will

* delete unnecessary comment

Co-authored-by: Charles Smith <techdocsmith@gmail.com>
2022-12-15 13:16:07 -08:00
Kashif Faraz d6949b1b79
Track input processedBytes with MSQ ingestion (#13559)
Follow up to #13520

Bytes processed are currently tracked for intermediate stages in MSQ ingestion.
This patch adds the capability to track the bytes processed by an MSQ controller
task while reading from an external input source or a segment source.

Changes:
- Track `processedBytes` for every `InputSource` read in `ExternalInputSliceReader`
- Update `ChannelCounters` with the above obtained `processedBytes` when incrementing
the input file count.
- Update task report structure in docs

The total input processed bytes can be obtained by summing the `processedBytes` as follows:

totalBytes = 0
for every root stage (i.e. a stage which does not have another stage as an input):
    for every worker in that stage:
        for every input channel: (i.e. channels with prefix "input", e.g. "input0", "input1", etc.)
            totalBytes += processedBytes
2022-12-16 02:20:01 +05:30
Kashif Faraz 431a1195ca
Suppress CVE-2022-1471 from snakeyaml (#13557)
* Upgrade kube client to 17.0.0

* Remove snakeyaml CVE suppression

* Update licenses.yaml

* Revert changes and suppress cve
2022-12-15 21:39:14 +05:30
Clint Wylie 49cbfdff83
Fix cool nested column bug caused by not properly validating that global id is present in global dictionary before looking up local id (#13561)
This commit fixes a bug with nested column "value set" indexes caused by not properly
validating that the globalId looked up for value is present in the global dictionary prior to
looking it up in the local dictionary, which when "adjusting" the global ids for value type
can cause incorrect selection of value indexes.

To use an example of a variant typed nested column with 3 values `["1", null, -2]`.
The string dictionary is `[null, "1"]`, the long dictionary is `[-2]` and our local dictionary is `[0, 1, 2]`.

The code for variant typed indexes checks if the value is present in all global dictionaries
and returns indexes for all matches. So in this case, we first lookup "1" in the string dictionary,
find it at global id 1, all is good. Now, we check the long dictionary for `1`, which due to
`-(insertionPoint + 1)` gives us `-(1 + 1) = -2`. Since the global id space is actually stacked
dictionaries, global ids for long and double values must be "adjusted" by the size of string
dictionary, and size of string + size of long for doubles.

Prior to this patch we were not checking that the globalId is 0 or larger, we then immediately
looked up the `localDictionary.indexOf(-2 + adjustLong) = localDictionary.indexOf(-2 + 2) = localDictionary.indexOf(0)` ... which is an actual value contained in the dictionary! The fix is
to skip the longs completely since there were no global matches.

On to doubles, `-(insertionPoint + 1)` gives us `-(0 + 1) = -1`. The double adjust value is '3'
since 2 strings and 1 long, so `localDictionary.indexOf(-1 + 3)` = `localDictionary.indexOf(2)` 
which is also a real value in our local dictionary that is definitely not '1'.

So in this one case, looking for '1' incorrectly ended up matching every row.
2022-12-15 17:00:46 +05:30
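Using the same toy dictionaries as above, the bug and the guard can be sketched as follows: `Arrays.binarySearch` returns `-(insertionPoint + 1)` on a miss, and the fix is simply to stop when the global lookup misses instead of adjusting a negative id. This is an illustration of the described logic, not the actual nested-column code.

```java
import java.util.Arrays;

/** Toy reproduction of the stacked-dictionary lookup described above. */
public class StackedDictionaryLookup
{
  public static void main(String[] args)
  {
    // Global dictionaries (stacked: strings, then longs, then doubles).
    String[] stringDict = {null, "1"};
    long[] longDict = {-2};
    double[] doubleDict = {};

    int adjustLong = stringDict.length;                      // 2
    int adjustDouble = stringDict.length + longDict.length;  // 3

    // Looking for the value "1" / 1 / 1.0 in each global dictionary:
    int longGlobalId = Arrays.binarySearch(longDict, 1L);       // -(1 + 1) = -2, a miss
    int doubleGlobalId = Arrays.binarySearch(doubleDict, 1.0);  // -(0 + 1) = -1, a miss

    // Buggy: adjusting a negative id lands on a real local dictionary entry.
    System.out.println("buggy long local id:   " + (longGlobalId + adjustLong));     // 0
    System.out.println("buggy double local id: " + (doubleGlobalId + adjustDouble)); // 2

    // Fixed: only consult the local dictionary when the global lookup actually hit.
    if (longGlobalId >= 0) {
      System.out.println("long match at local id " + (longGlobalId + adjustLong));
    }
    if (doubleGlobalId >= 0) {
      System.out.println("double match at local id " + (doubleGlobalId + adjustDouble));
    }
  }
}
```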
Rishabh Singh 97bc0220c7
Update task memory computation in start-druid (#13563)
Changes:
* Use 80% of memory specified for running services (versus 50% earlier).
* Tasks now get 512m, 1024m, or 2048m (versus 512m or 2048m earlier).
* Add direct memory for router.
2022-12-15 11:06:16 +05:30
Adarsh Sanjeev 2b605aa9cf
Multiple fixes for the MSQ stats merging piece which (#13463)
* Add validation checks to worker chat handler apis

* Merge things and polishing the error messages.

* Minor error message change

* Fixing race and adding some tests

* Fixing controller fetching stats from wrong workers.
Fixing race
Changing default mode to Parallel
Adding logging.
Fixing exceptions not propagated properly.

* Changing to kernel worker count

* Added a better logic to figure out assigned worker for a stage.

* Nits

* Moving to existing kernel methods

* Adding more coverage

Co-authored-by: cryptoe <karankumar1100@gmail.com>
2022-12-15 09:35:11 +05:30
imply-cheddar 089d8da561
Support Framing for Window Aggregations (#13514)
* Support Framing for Window Aggregations

This adds support for framing over ROWS
for window aggregations.

Not yet implemented:
1. RANGE frames
2. Multiple different frames in the same query
3. Frames on last/first functions
2022-12-14 18:04:39 -08:00
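For intuition, framing over ROWS means each output row aggregates a window of physically adjacent input rows, e.g. `ROWS BETWEEN 1 PRECEDING AND CURRENT ROW`. A hand-rolled sketch of those semantics (not the Druid operator, just the frame arithmetic):

```java
/** Sketch: running sum over a ROWS BETWEEN n PRECEDING AND CURRENT ROW frame. */
public class RowsFrameExample
{
  public static long[] sumOverFrame(long[] values, int precedingRows)
  {
    long[] result = new long[values.length];
    for (int i = 0; i < values.length; i++) {
      long sum = 0;
      // The frame is the current row plus up to `precedingRows` rows before it.
      for (int j = Math.max(0, i - precedingRows); j <= i; j++) {
        sum += values[j];
      }
      result[i] = sum;
    }
    return result;
  }

  public static void main(String[] args)
  {
    // Input 1, 2, 3, 4 with a 1-PRECEDING frame yields 1, 3, 5, 7.
    System.out.println(java.util.Arrays.toString(sumOverFrame(new long[]{1, 2, 3, 4}, 1)));
  }
}
```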
Vadim Ogievetsky 2729e25295
Link to java docs (#13478)
* add link to page about selecting a JRE

* add link to script also

* simplify text
2022-12-14 11:45:23 -08:00
Rohan Garg 35c983a351
Use template file for adding table functions grammar (#13553) 2022-12-14 21:52:09 +05:30
Kashif Faraz 58a3acc2c4
Add InputStats to track bytes processed by a task (#13520)
This commit adds a new class `InputStats` to track the total bytes processed by a task.
The field `processedBytes` is published in task reports along with other row stats.

Major changes:
- Add class `InputStats` to track processed bytes
- Add method `InputSourceReader.read(InputStats)` to read input rows while counting bytes.
> Since we need to count the bytes, we could not just have a wrapper around `InputSourceReader` or `InputEntityReader` (the way `CountableInputSourceReader` does) because the `InputSourceReader` only deals with `InputRow`s and the byte information is already lost.
- Classic batch: Use the new `InputSourceReader.read(inputStats)` in `AbstractBatchIndexTask`
- Streaming: Increment `processedBytes` in `StreamChunkParser`. This does not use the new `InputSourceReader.read(inputStats)` method.
- Extend `InputStats` with `RowIngestionMeters` so that bytes can be exposed in task reports

Other changes:
- Update tests to verify the value of `processedBytes`
- Rename `MutableRowIngestionMeters` to `SimpleRowIngestionMeters` and remove duplicate class
- Replace `CacheTestSegmentCacheManager` with `NoopSegmentCacheManager`
- Refactor `KafkaIndexTaskTest` and `KinesisIndexTaskTest`
2022-12-13 18:54:42 +05:30
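The mechanism above is conceptually a byte counter threaded through the read path. A hypothetical sketch in that spirit (the class shapes and names below are illustrative, not the actual Druid `InputStats` or reader signatures): a counter plus an `InputStream` wrapper that updates it as bytes are consumed.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical byte counter, in the spirit of the InputStats described above. */
class ByteCountingStats
{
  private final AtomicLong processedBytes = new AtomicLong();

  void addProcessedBytes(long bytes)
  {
    processedBytes.addAndGet(bytes);
  }

  long getProcessedBytes()
  {
    return processedBytes.get();
  }
}

/** Wraps an InputStream so every byte read is reflected in the stats. */
class CountingInputStream extends FilterInputStream
{
  private final ByteCountingStats stats;

  CountingInputStream(InputStream in, ByteCountingStats stats)
  {
    super(in);
    this.stats = stats;
  }

  @Override
  public int read() throws IOException
  {
    int b = super.read();
    if (b >= 0) {
      stats.addProcessedBytes(1);
    }
    return b;
  }

  @Override
  public int read(byte[] buffer, int offset, int length) throws IOException
  {
    int n = super.read(buffer, offset, length);
    if (n > 0) {
      stats.addProcessedBytes(n);
    }
    return n;
  }
}
```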
somu-imply 7682b0b6b1
Analysis refactor (#13501)
Refactor DataSource to have a getAnalysis() method

This removes various parts of the code where while loops and instanceof
checks were being used to walk through the structure of DataSource objects
in order to build a DataSourceAnalysis.  Instead we just ask the DataSource
for its analysis and allow the stack to rebuild whatever structure existed.
2022-12-12 17:35:44 -08:00
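In the abstract, the refactor above trades external type inspection for polymorphism: instead of instanceof-walking a DataSource tree to build an analysis, each node reports its own. A hypothetical before/after sketch (these toy types are illustrative, not the real Druid interfaces):

```java
/** Hypothetical data-source hierarchy used only to illustrate the refactor. */
interface DataSourceLike
{
  /** After: each implementation knows how to describe itself. */
  AnalysisLike getAnalysis();
}

class AnalysisLike
{
  final String baseName;

  AnalysisLike(String baseName)
  {
    this.baseName = baseName;
  }
}

class TableLike implements DataSourceLike
{
  final String name;

  TableLike(String name)
  {
    this.name = name;
  }

  @Override
  public AnalysisLike getAnalysis()
  {
    return new AnalysisLike(name);
  }
}

class QueryLike implements DataSourceLike
{
  final DataSourceLike inner;

  QueryLike(DataSourceLike inner)
  {
    this.inner = inner;
  }

  @Override
  public AnalysisLike getAnalysis()
  {
    // Delegation replaces the old while/instanceof unwrapping loop.
    return inner.getAnalysis();
  }
}

class BeforeAfter
{
  /** Before: callers walked the structure themselves. */
  static AnalysisLike analyzeByWalking(DataSourceLike ds)
  {
    DataSourceLike current = ds;
    while (current instanceof QueryLike) {
      current = ((QueryLike) current).inner;
    }
    return ((TableLike) current).getAnalysis();
  }

  /** After: just ask the data source. */
  static AnalysisLike analyze(DataSourceLike ds)
  {
    return ds.getAnalysis();
  }
}
```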
Gian Merlino de5a4bafcb
Zero-copy local deep storage. (#13394)
* Zero-copy local deep storage.

This is useful for local deep storage, since it reduces disk usage and
makes Historicals able to load segments instantaneously.

Two changes:

1) Introduce "druid.storage.zip" parameter for local storage, which defaults
   to false. This changes default behavior from writing an index.zip to writing
   a regular directory. This is safe to do even during a rolling update, because
   the older code actually already handled unzipped directories being present
   on local deep storage.

2) In LocalDataSegmentPuller and LocalDataSegmentPusher, use hard links
   instead of copies when possible. (Generally this is possible when the
   source and destination directory are on the same filesystem.)
2022-12-12 17:28:24 -08:00
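The hard-link optimization in the second point can be expressed with standard NIO: try `Files.createLink` and fall back to a copy when linking is not possible (different filesystem, unsupported OS, and so on). A minimal sketch under those assumptions, not the LocalDataSegmentPusher code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Sketch: prefer a zero-copy hard link, fall back to copying bytes. */
public final class LinkOrCopy
{
  private LinkOrCopy() {}

  public static void linkOrCopy(Path source, Path destination) throws IOException
  {
    final Path parent = destination.getParent();
    if (parent != null) {
      Files.createDirectories(parent);
    }
    try {
      // Zero-copy path: the file is shared between the source and deep storage.
      Files.createLink(destination, source);
    } catch (IOException | UnsupportedOperationException e) {
      // Cross-filesystem or unsupported: fall back to a normal copy.
      Files.copy(source, destination, StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
```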