* vectorize remaining math expressions
* fixes
* remove cannotVectorize() where no longer true
* disable vectorized groupby for numeric columns with nulls
* fixes
* Store hash partition function in dataSegment and allow segment pruning only when hash partition function is provided
* query context
* fix tests; add more test
* javadoc
* docs and more tests
* remove default and hadoop tests
* consistent name and fix javadoc
* spelling and field name
* default function for partitionsSpec
* other comments
* address comments
* fix tests and spelling
* test
* doc
* push down ValueType to ExprType conversion, tidy up
* determine expr output type for given input types
* revert unintended name change
* add nullable
* tidy up
* fixup
* more better
* fix signatures
* naming things is hard
* fix inspection
* javadoc
* make default implementation of Expr.getOutputType that returns null
* rename method
* more test
* add output for contains expr macro, split operation and function auto conversion
* Add support for all partitioning schemes for auto compaction
* annotate last compaction state for multi-phase parallel indexing
* fix build and tests
* test
* better home
* better type tracking: add typed postaggs, finalized types for agg factories
* more javadoc
* adjustments
* transition to getTypeName to be used exclusively for complex types
* remove unused fn
* adjust
* more better
* rename getTypeName to getComplexTypeName
* set up expression post agg for type inference
* more javadocs
* fixup
* oops
* more test
* more test
* more comments/javadoc
* nulls
* explicitly handle only numeric and complex aggregators for incremental index
* checkstyle
* more tests
* adjust
* more tests to showcase difference in behavior
* timeseries longsum array
* remove DruidLeaderClient.goAsync(..), which does not follow redirects.
Replace its usage with DruidLeaderClient.go(..) and
InputStreamFullResponseHandler
* remove ByteArrayResponseHolder dependency from JsonParserIterator
* add UT to cover lines in InputStreamFullResponseHandler
* refactor SystemSchema to reduce branches
* further reduce branches
* Revert "add UT to cover lines in InputStreamFullResponseHandler"
This reverts commit 330aba3dd9.
* UTs for InputStreamFullResponseHandler
* remove unused imports
* Add "offset" parameter to the Scan query.
It works by doing the query as normal and then throwing away the first
"offset" number of rows on the broker.
* Fix constructor call.
* Fix up JSONs.
* Fix call to ScanQuery.
* Doc update.
* Fix javadocs.
* Spotbugs, LGTM suppressions.
* Javadocs.
* Fix suppression.
* Stabilize Scan query result order, add tests.
* Update LGTM comment.
* Fixup.
* Test different batch sizes too.
* Nicer tests.
* Fix comment.
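For illustration, a minimal native Scan query using the new parameter might look like the following sketch; the datasource, interval, and columns are hypothetical:

```json
{
  "queryType": "scan",
  "dataSource": "wikipedia",
  "intervals": ["2020-01-01/2020-02-01"],
  "columns": ["__time", "page"],
  "offset": 10,
  "limit": 100
}
```

Because trimming happens on the broker, data servers still produce the skipped rows, so the cost of a query grows with offset + limit.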
* Add "offset" parameter to GroupBy query.
It works by doing the query as normal and then throwing away the first
"offset" number of rows on the broker.
* Stabilize GroupBy sorts.
* Fix inspections.
* Fix suppression.
* Fixups.
* Move TopNSequence to druid-core.
* Addl comments.
* NumberedElement equals verification.
* Changes from review.
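A sketch of the GroupBy counterpart. Per the commits, the offset is applied on the broker after sorting; the placement beside "limit" in the default limitSpec is assumed from the implementation, and the datasource and columns are hypothetical:

```json
{
  "queryType": "groupBy",
  "dataSource": "wikipedia",
  "granularity": "all",
  "intervals": ["2020-01-01/2020-02-01"],
  "dimensions": ["page"],
  "aggregations": [{ "type": "count", "name": "rows" }],
  "limitSpec": {
    "type": "default",
    "columns": [{ "dimension": "rows", "direction": "descending" }],
    "limit": 100,
    "offset": 10
  }
}
```

The "Stabilize GroupBy sorts" commit matters here: paging with offset is only meaningful when the underlying sort order is deterministic.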
* support unit suffix on byte-related properties (example configuration follows this change's commits)
* add doc
* change default value of byte-related properties in example files
* fix coding style
* fix doc
* fix CI
* suppress spelling errors
* improve code according to comments
* rename Bytes to HumanReadableBytes
* add getBytesInInt to get value safely
* improve doc
* fix problem reported by CI
* fix problem reported by CI
* resolve code review comments
* improve error message
* improve code & doc according to comments
* fix CI problem
* improve doc
* suppress spelling check errors
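A hypothetical runtime.properties fragment showing the feature. The property names are standard Druid settings, but whether a given property accepts a suffix depends on it having been migrated to the new HumanReadableBytes type, and the exact suffix forms shown are assumptions based on the commit messages:

```properties
# Binary (1024-based) suffix form: 512 * 1024 * 1024 bytes
druid.processing.buffer.sizeBytes=512MiB

# Single-letter suffix form (assumed): 300 gigabytes
druid.server.maxSize=300g
```

The getBytesInInt accessor mentioned in the commits presumably guards the narrowing conversion, failing loudly rather than silently overflowing when the configured value exceeds Integer.MAX_VALUE.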
* Add segment pruning for hash-based partitioning
* Update doc
* Add additional test
* Address comments
* Fix unit test failure
Co-authored-by: Jian Wang <jwang@pinterest.com>
* Add validation for authorizer name
* fix deps
* add javadocs
* Do not use resource filters
* Fix BasicAuthenticatorResource as well
* Add integration tests
* fix test
* fix
* Update Jetty to 9.4.30.v20200611.
This is the latest version currently available in the 9.4.x line.
* Various adjustments.
* Class name fixes.
* Remove unused HttpClientModule code.
* Add coverage suppressions.
* Another coverage suppression.
* Fix wildcards.
* Enabling Static Imports for Unit Testing DSLs (#331)
Co-authored-by: mohammadshoaib <mohammadshoaib@miqdigital.com>
* Feature 8885 - Enabling Static Imports for Unit Testing DSLs (#435)
* Enabling Static Imports for Unit Testing DSLs
* Using suppressions checkstyle to allow static imports only in the UTs
Co-authored-by: mohammadshoaib <mohammadshoaib@miqdigital.com>
* Removing the checkstyle changes because they are not needed
Co-authored-by: mohammadshoaib <mohammadshoaib@miqdigital.com>
* ensure close for all closeables of ParallelMergeCombiningSequence
* revert unneeded change
* consolidate methods
* catch throwable instead of exception
* Fill in the core partition set size properly for batch ingestion with
dynamic partitioning
* incomplete javadoc
* Address comments
* fix tests
* fix json serde, add tests
* checkstyle
* Set core partition set size for hash-partitioned segments properly in
batch ingestion
* test for both parallel and single-threaded task
* unused variables
* fix test
* unused imports
* add hash/range buckets
* some test adjustment and missing json serde
* centralized partition id allocation in parallel and simple tasks
* remove string partition chunk
* revive string partition chunk
* fill numCorePartitions for hadoop
* clean up hash stuff
* resolved todos
* javadocs
* Fix tests
* add more tests
* doc
* unused imports
* Allow append to existing datasources when dynamic partitioning is used
* fix test
* checkstyle
* checkstyle
* fix test
* fix test
* fix other tests..
* checkstyle
* handle unknown core partition set size in overlord segment allocation
* fail to append when numCorePartitions is unknown
* log
* fix comment; rename to be more intuitive
* double append test
* cleanup complete(); add tests
* fix build
* add tests
* address comments
* checkstyle
* ROUND and having comparators correctly handle doubles
Double.NaN, Double.POSITIVE_INFINITY, and Double.NEGATIVE_INFINITY are not real
numbers. Because of this, they cannot be converted to BigDecimal and instead
throw a NumberFormatException.
This change adds support for calculations that produce these numbers, whether
in the `ROUND` function or in HavingSpecMetricComparator, by not attempting to
convert the number to a BigDecimal.
The bug in ROUND was first introduced in #7224, where we added the ability to
round to any decimal place. This PR changes the behavior back to using
`Math.round` if we recognize a number that cannot be converted to a
BigDecimal (a sketch of the guard follows this change's commits).
* Add tests and fix spellcheck
* update error message in ExpressionsTest
* Address comments
* fix up round for infinity
* round non numeric doubles returns a double
* fix spotbugs
* Update docs/misc/math-expr.md
* Update docs/querying/sql.md
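A minimal sketch of the guard described above, not Druid's actual code; the class and method names are hypothetical:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SafeRound
{
  /**
   * Round "value" to "scale" decimal places. NaN and +/-Infinity cannot be
   * represented as BigDecimal (BigDecimal.valueOf throws NumberFormatException
   * for them), so fall back to Math.round as the pre-#7224 code did, and
   * return a double either way.
   */
  public static double round(double value, int scale)
  {
    if (Double.isNaN(value) || Double.isInfinite(value)) {
      // Math.round(NaN) == 0; Math.round(+/-Infinity) clamps to Long.MAX_VALUE
      // or Long.MIN_VALUE. The follow-up commit "fix up round for infinity"
      // refines this path further.
      return (double) Math.round(value);
    }
    return BigDecimal.valueOf(value).setScale(scale, RoundingMode.HALF_UP).doubleValue();
  }
}
```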
* lpad and rpad functions deal with empty pad
Return null if the pad string used by the `lpad` and `rpad` functions is
an empty string (illustrated after this change's commits)
* Fix rpad
* Match PostgreSQL behavior in SQL compliant null handling mode
* Match PostgreSQL behavior for negative pad lengths
* address review comments
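The described behavior, illustrated in SQL; results assume SQL-compliant null handling mode, per the commits:

```sql
-- Normal padding:
SELECT LPAD('druid', 8, 'x');  -- 'xxxdruid'

-- Per the change above, an empty pad string yields NULL:
SELECT LPAD('druid', 8, '');   -- NULL

-- Negative target lengths likewise follow PostgreSQL's behavior.
```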
* An empty partitionDimension yields less rollup than when it is explicitly specified
* Adding a unit test for the empty partitionDimension scenario; fixing another test that was failing
* Fixing CI Build Inspection Issue
* Addressing all review comments
* Updating the javadocs for the hash method in HashBasedNumberedShardSpec
This change removes ListenableFutures.transformAsync in favor of the
existing Guava Futures.transform implementation. Our own implementation
had a bug: it did not fail the future if the applied function threw an
exception, resulting in the future never completing.
An attempt was made to fix this bug; however, when run against Guava's own
tests, our version failed another half dozen of them, so it was decided not
to continue down that path and to scrap our own implementation.
Explanation of how this bug manifested itself:
An exception thrown in BaseAppenderatorDriver.publishInBackground when
invoked via transformAsync in StreamAppenderatorDriver.publish will
cause the resulting future to never complete.
This explains why, when encountering https://github.com/apache/druid/issues/9845,
the task never completes: it waits forever for the publishFuture to
register the handoff. As a result, the corresponding "Error while
publishing segments ..." message only gets logged once the index task
times out and is forcefully shut down, when the future is force-cancelled
by the executor (a minimal reproduction sketch follows).
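The fixed behavior is easy to demonstrate against Guava's own API. This standalone sketch (not Druid code) shows that with Futures.transform, a throwing function fails the output future instead of leaving it forever incomplete:

```java
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.common.util.concurrent.SettableFuture;
import java.util.concurrent.ExecutionException;

public class TransformFailureDemo
{
  public static void main(String[] args) throws InterruptedException
  {
    SettableFuture<String> input = SettableFuture.create();

    // Guava's Futures.transform propagates exceptions thrown by the function
    // into the returned future; the removed transformAsync did not.
    ListenableFuture<Integer> output = Futures.transform(
        input,
        value -> { throw new RuntimeException("publish failed for " + value); },
        MoreExecutors.directExecutor()
    );

    input.set("segment");
    try {
      output.get();
    }
    catch (ExecutionException e) {
      // The failure propagates, so callers blocked on the future are released
      // instead of waiting until the task is forcefully shut down.
      System.out.println("future failed as expected: " + e.getCause().getMessage());
    }
  }
}
```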
* - GroupByQueryEngineV2: Fix leak of the intermediate processing buffer when
exceptions are thrown before the result sequence is created (the shared
fix pattern is sketched after this change's commits).
- PooledTopNAlgorithm: Fix leak of the intermediate processing buffer when
exceptions are thrown before the PooledTopNParams object is created.
- BlockingPool: Remove unused "take" methods.
* Add tests to verify that buffers have been returned.
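Both leaks follow the same shape: an intermediate pooled buffer is acquired, an exception fires before ownership transfers to the object responsible for returning it, and nothing closes the holder. A generic sketch of the fix with hypothetical names; Druid's actual buffers come from its BlockingPool as closeable holders:

```java
import java.io.Closeable;

public class PooledBufferGuard
{
  // Hypothetical builder that takes ownership of the holder on success.
  interface ResultBuilder<T>
  {
    T build(Closeable bufferHolder);
  }

  public static <T> T buildWithBuffer(Closeable bufferHolder, ResultBuilder<T> builder)
  {
    try {
      // On success, the result takes ownership and closes the holder later.
      return builder.build(bufferHolder);
    }
    catch (Throwable t) {
      // On failure, return the buffer to the pool before propagating.
      try {
        bufferHolder.close();
      }
      catch (Exception e) {
        t.addSuppressed(e);
      }
      throw t;
    }
  }
}
```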
* Fix various Yielder leaks.
- CombiningSequence leaked the input yielder from "toYielder" if it ran
into an exception while accumulating the last value from the input
yielder.
- MergeSequence leaked input yielders from "toYielder" if it ran into
an exception while building the initial priority queue.
- ScanQueryRunnerFactory leaked the input yielder in its
"priorityQueueSortAndLimit" strategy if it ran into an exception
while scanning and sorting.
- YieldingSequenceBase.accumulate swallowed IOExceptions thrown in
"accumulate" during yielder closing (the close-on-exception pattern is
sketched after this change's commits).
* Add tests.
* Fix braces.
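All of these fixes apply the same close-on-exception discipline to yielders. A minimal sketch using a stand-in interface that mirrors the Yielder contract (a closeable cursor), not the actual sequence code:

```java
import java.io.Closeable;
import java.io.IOException;

public class YielderGuard
{
  // Stand-in for Druid's Yielder: a closeable cursor over results.
  interface Yielder<T> extends Closeable
  {
    T get();
    boolean isDone();
  }

  /**
   * Consume the current value from a yielder. If the work throws, close the
   * input yielder before propagating rather than leaking it; if close itself
   * throws, attach that as a suppressed exception instead of swallowing
   * either error.
   */
  public static <T> T consume(Yielder<T> yielder) throws IOException
  {
    try {
      return yielder.get(); // accumulation work that may throw goes here
    }
    catch (Throwable t) {
      try {
        yielder.close();
      }
      catch (IOException e) {
        t.addSuppressed(e);
      }
      throw t;
    }
  }
}
```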
* Number-based columns representing time in a custom format cannot be used as the timestamp column in Druid.
Prior to this fix, if an integer column in Parquet stored a dateint in yyyyMMdd format, it could not be used as the timestamp column in Druid: the timestamp parser interpreted it as a number storing UTC time instead of treating it as a number representing time in yyyyMMdd format. Data formats like TSV or CSV don't suffer from this problem because the timestamp is passed in as a string, which the timestamp parser is able to parse correctly (an example spec follows).
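After the fix, an ingestion spec fragment like this (the column name is hypothetical) works even when the column is numeric in the Parquet file, where previously it only worked for string columns:

```json
"timestampSpec": {
  "column": "date_col",
  "format": "yyyyMMdd"
}
```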
* refactor SeekableStreamSupervisor usage of RecordSupplier to reduce contention between background threads and the main thread; refactor KinesisRecordSupplier; refactor Kinesis lag metric collection and emission
* fix style and test
* cleanup, refactor, javadocs, test
* fixes
* keep collecting current offsets and lag if unhealthy in background reporting thread
* review stuff
* add comment