* Link up row-based datasources to serving layer.
- Add SegmentWrangler interface that allows linking of DataSources to Segments.
- Add LocalQuerySegmentWalker that uses SegmentWranglers to compute queries on
data that is available locally.
- Modify ClientQuerySegmentWalker to use LocalQuerySegmentWalker when the base
datasource is concrete and not a table.
- Add SegmentWranglerModule to the Broker so it has them available and can
properly instantiate LocalQuerySegmentWalkers.
- Set InlineDataSource and LookupDataSource to concrete, since they can be
directly queried now.
* Fix tests.
* Ability to directly query row-based datasources.
Includes:
- Foundational classes RowBasedSegment, RowBasedStorageAdapter,
RowBasedCursor provide a queryable interface on top of a
RowBasedColumnSelectorFactory.
- Add LookupSegment: A RowBasedSegment that is built on lookup data.
- Improve capability reporting in RowBasedColumnSelectorFactory.
* Fix import.
* Remove unthrown IOException.
* Harmonization and bug-fixing for selector and filter behavior on unknown types.
- Migrate ValueMatcherColumnSelectorStrategy to newer ColumnProcessorFactory
system, and set defaultType COMPLEX so unknown types can be dynamically matched.
- Remove ValueGetters in favor of ColumnComparisonFilter doing its own thing.
- Switch various methods to use convertObjectToX when casting to numbers, rather
than ad-hoc and inconsistent logic.
- Fix bug in RowBasedExpressionColumnValueSelector: isBindingArray should return
true even for 0- or 1-element arrays.
- Adjust various javadocs.
* Add throwParseExceptions option to Rows.objectToNumber, switch back to that.
* Update tests.
* Adjust moment sketch tests.
* Add OnHeapMemorySegmentWriteOutMediumFactory
Add a factory for OnHeapMemorySegmentWriteOutMedium to support direct writing via Spark.
* Register OnHeapMemorySegmentWriteOutMediumFactory.
Register OnHeapMemorySegmentWriteOutMediumFactory with SegmentWriteOutMediumFactory.
* Remove unnecessary throws
The base `makeSegmentWriteOutMedium` throws an IOException, but the particular implementation of OnHeapMemorySegmentWriteOutMediumFactory does not throw a checked exception.
* Update SegmentWriteOutMedium docs to include onHeapMemory
Update the SegmentWriteOutMedium section of the indexing docs to include a description of the new OnHeapMemorySegmentWriteOutMedium option.
* BufferArrayGrouper: Fix potential overflow in requiredBufferCapacity.
If cardinality was high, the computation could overflow an int. There
were tests for this, but the tests were wrong.
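A minimal sketch of the overflow-safe computation, assuming a hypothetical slot layout of one "used" flag byte plus a fixed-size aggregator record (the real BufferArrayGrouper layout may differ):
```java
// Do the sizing math in long arithmetic so a large cardinality cannot
// silently wrap around int, then validate before narrowing back down.
static int requiredBufferCapacity(int cardinality, int recordSize)
{
  // cardinality + 1 slots (assumed extra slot for the null grouping key).
  final long capacity = ((long) cardinality + 1) * (1L + recordSize);
  if (capacity > Integer.MAX_VALUE) {
    throw new IllegalArgumentException("Cardinality too large for an int-addressed buffer");
  }
  return (int) capacity;
}
```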
* Nicer.
* Add SQL GROUPING SETS support.
Built on top of the subtotalsSpec feature in the groupBy query. This also involves
two changes to subtotalsSpec:
- Alter behavior so limitSpec is applied after subtotalsSpec, rather than applied to
each grouping set. This is more in line with SQL standard behavior. I think it is okay
to make this change, since the old behavior was not documented, so users should
hopefully not be depending on it.
- Fix a bug where virtual columns were included in the subtotal queries, but they
should not have been.
Also fixes two bugs in query equality checking:
- BaseQuery: Use getDuration() instead of "duration" in equals and hashCode, since the
latter is lazily initialized and might be null in one query but not the other.
- GroupByQuery: Include subtotalsSpec in equals and hashCode.
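A self-contained illustration of the lazily-initialized-field pitfall fixed in BaseQuery (class and field are simplified stand-ins):
```java
import java.util.Objects;

class LazyDurationExample
{
  private Long duration;  // lazily initialized: null until getDuration() is first called

  long getDuration()
  {
    if (duration == null) {
      duration = computeDuration();
    }
    return duration;
  }

  @Override
  public boolean equals(Object o)
  {
    if (!(o instanceof LazyDurationExample)) {
      return false;
    }
    // Comparing the raw "duration" fields can see null on one side and a computed
    // value on the other for logically equal queries; compare via the getter.
    return Objects.equals(getDuration(), ((LazyDurationExample) o).getDuration());
  }

  @Override
  public int hashCode()
  {
    return Objects.hash(getDuration());
  }

  private long computeDuration()
  {
    return 0L;  // stand-in for the interval-based computation
  }
}
```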
* Fix bugs.
* Fix tests.
* PR updates.
* Grouping class hygiene.
* add Expr.stringify which produces parseable expression strings, parser support for null values in arrays, and parser support for empty numeric arrays
* oops, macros are expressions too
* style
* spotbugs
* qualified type arrays
* review stuffs
* simplify grammar
* more permissive array parsing
* reuse expr joiner
* fix it
* Run IntelliJ inspections on Travis
Running IntelliJ inspections currently takes about 90 minutes, but they
can be run in about 30 minutes on Travis.
* Restore assert statements
* Fix timestamp extract fn to match postgres
Update the timestamp extract function so that it matches the PostgreSQL docs.
Examples from the PostgreSQL docs were added as tests for DECADE, CENTURY
and MILLENNIUM extraction.
There were bugs in CENTURY and MILLENNIUM that were spotted because of IntelliJ
inspections - 'Integer division in floating point context'
* Update CalciteQueryTest
* remove useless round
* mark integer division as an error
When the time extraction Top N algorithm is looking for aggregators, it makes
two calls to hashCode on the key. Use Map#computeIfAbsent instead so that the
hashCode is calculated only once.
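A self-contained rendering of the pattern (the map contents are illustrative, not the actual Top N aggregator store):
```java
import java.util.HashMap;
import java.util.Map;

class AggregatorCache
{
  private final Map<String, long[]> aggregatorsMap = new HashMap<>();

  long[] getAggregators(String key)
  {
    // One hash/equality pass on the key, versus the two passes that a
    // get()-then-put() sequence performs when the key is absent.
    return aggregatorsMap.computeIfAbsent(key, k -> new long[8]);
  }
}
```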
* Codestyle - use java style array declaration
Replaced C-style array declarations with Java-style declarations and marked
the IntelliJ inspection as an error
* cleanup test code
* Forbid easily misused HashSet and HashMap constructors
* Add two LinkedHashMap constructors to forbidden-apis and create utility method as replacement for them
* Fix visibility of constant in CollectionUtils.java
* Make an exception for an instance of LinkedHashMap#<init>(int) because proper sizing is used
* revert changes to sql module tests that should be in separate PR
* Finish reverting changes to sql module tests that were flagged in checkstyle during CI
* Add netty dependency resulting from SuppressForbidden
* Add HashVectorGrouper based on MemoryOpenHashTable.
Additional supporting changes:
1) Modifies VectorGrouper interface to use Memory instead of ByteBuffers.
2) Modifies BufferArrayGrouper to match the new VectorGrouper interface.
3) Removes "implements VectorGrouper" from BufferHashGrouper.
* Fix comment.
* Fix another comment.
* Remove unused stuff.
* Include hoisted bounds checks.
* Checks against too-large keySpaces.
* Add MemoryOpenHashTable, a table similar to ByteBufferHashTable.
With some key differences to improve speed and design simplicity:
1) Uses Memory rather than ByteBuffer for its backing storage.
2) Uses faster hashing and comparison routines (see HashTableUtils).
3) Capacity is always a power of two, allowing simpler design and more
efficient implementation of findBucket.
4) Does not implement growability; instead, leaves that to its callers.
The idea is this removes the need for subclasses, while still giving
callers flexibility in how to handle table-full scenarios.
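A sketch of why the power-of-two capacity simplifies findBucket; the probing callback here is a hypothetical stand-in for the table's real bucket inspection:
```java
import java.util.function.IntPredicate;

// With capacity a power of two, (hash % capacity) reduces to a bit mask,
// and linear probing wraps around with the same mask.
static int findBucket(int keyHash, int capacity, IntPredicate bucketMatchesOrIsEmpty)
{
  final int mask = capacity - 1;  // valid only because capacity is a power of two
  int bucket = keyHash & mask;
  while (!bucketMatchesOrIsEmpty.test(bucket)) {
    bucket = (bucket + 1) & mask;  // wrap to bucket 0 after the last bucket
  }
  return bucket;
}
```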
* Fix LGTM warnings.
* Adjust dependencies.
* Remove easymock from druid-benchmarks.
* Adjustments from review.
* Fix datasketches unit tests.
* Fix checkstyle.
* Speed up joins on indexed tables with string keys
When joining on indexed tables with string keys, caching the computation
of row id to row numbers improves performance on the
JoinAndLookupBenchmark.joinIndexTableStringKey* benchmarks by about 10%
if the column cache is enabled and by about 100% if the column cache is
disabled.
* Faster cache impl and handle unknown cardinality
* Remove unused dependency
* Hoist cardinality check outside of hot loop
* Fix dummy DimensionSelector for tests
* Guicify druid sql module
Break up the SQLModule into smaller modules and provide a binding that
modules can use to register schemas with druid sql.
* fix some tests
* address code review
* tests compile
* Working tests
* Add all the tests
* fix up licenses and dependencies
* add calcite dependency to druid-benchmarks
* tests pass
* rename the schemas
* SQL join support for lookups.
1) Add LookupSchema to SQL, so lookups show up in the catalog.
2) Add join-related rels and rules to SQL, allowing joins to be planned into
native Druid queries.
* Add two missing LookupSchema calls in tests.
* Fix tests.
* Fix typo.
* Add LookupJoinableFactory.
Enables joins where the right-hand side is a lookup. Includes an
integration test.
Also, includes changes to LookupExtractorFactoryContainerProvider:
1) Add "getAllLookupNames", which will be needed to eventually connect
lookups to Druid's SQL catalog.
2) Convert "get" from nullable to Optional return.
3) Swap out most usages of LookupReferencesManager in favor of the
simpler LookupExtractorFactoryContainerProvider interface.
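The provider interface after these changes, sketched (method names come from the commit; exact signatures are assumptions):
```java
import java.util.Optional;
import java.util.Set;

public interface LookupExtractorFactoryContainerProvider
{
  // New: lets callers (eventually the SQL catalog) enumerate all lookups.
  Set<String> getAllLookupNames();

  // Changed: previously returned a @Nullable container.
  Optional<LookupExtractorFactoryContainer> get(String lookupName);
}
```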
* Fixes for tests.
* Fix another test.
* Java 11 message fix.
* Fixups.
* Fixup benchmark class.
* Add getRightColumns to JoinConditionAnalysis
This change allows other implementations of JoinableFactory to ask the analysis
for the right key columns instead of having to calculate them themselves.
* Address some review comments
* more code review stuff
Add microbenchmark for joins. Enabling the column cache improves
performance by ~70% for the benchmarks for joins with string keys.
Adjusting LookupJoinMatcher.matchCondition() to have fewer branches,
improves performance by ~10% for the benchmarks for joins with lookups.
* IntelliJ inspections cleanup
- remove redundant escapes
- performance warnings
- access static member via instance reference
- static method declared final
- inner class may be static
Most of these changes are aesthetic; however, they will allow inspections to
be enabled as part of CI checks going forward.
The valuable changes in this delta are:
- using StringBuilder instead of string addition in a loop
indexing-hadoop/.../Utils.java
processing/.../ByteBufferMinMaxOffsetHeap.java
- Use class variables instead of static variables for parameterized test
processing/src/.../ScanQueryLimitRowIteratorTest.java
* Add intelliJ inspection warnings as errors to druid profile
* one more static inner class
* Make JoinableFactory an extension point
This change makes it so that extensions can register a JoinableFactory that
should be used for a DataSource.
Extensions can provide the factories via DruidBinders#joinableFactoryBinder
Known DataSources, like InlineDataSource, are provided in the
JoinableFactoryModule. This module installs a FactoryWarehouse that is
used to decide which factory should be used to generate the Joinable for
the provided DataSource.
The ExtensionPoint is marked as Beta since it is not yet clear if this
needs to remain available to other extensions or if the best way to
register a factory is by using the datasource class.
* Add module test
* remove useless bindings in test
* remove ExtensionPoint annotation
* Make LifecycleLock not final to help with testing
* Add JoinableFactory interface and use it in the query stack.
Also includes InlineJoinableFactory, which enables joining against
inline datasources. This is the first patch where a basic join query
actually works. It includes integration tests.
* Fix test issues.
* Adjustments from code review.
Builds on #9235, using the datasource analysis functionality to replace various ad-hoc
approaches. The most interesting changes are in ClientQuerySegmentWalker (brokers),
ServerManager (historicals), and SinkQuerySegmentWalker (indexing tasks).
Other changes related to improving how we analyze queries:
1) Changes TimelineServerView to return an Optional timeline, which I thought made
the analysis changes cleaner to implement.
2) Added QueryToolChest#canPerformSubquery, which is now used by query entry points to
determine whether it is safe to pass a subquery dataSource to the query toolchest.
Fixes an issue introduced in #5471 where subqueries under non-groupBy-typed queries
were silently ignored, since neither the query entry point nor the toolchest did
anything special with them.
3) Removes the QueryPlus.withQuerySegmentSpec method, which was mostly being used in
error-prone ways (ignoring any potential subqueries, and not verifying that the
underlying data source is actually a table). Replaces with a new function,
Queries.withSpecificSegments, that includes sanity checks.
* Add join-related DataSource types, and analysis functionality.
Builds on #9111 and implements the datasource analysis mentioned in #8728. Still can't
handle join datasources, but we're a step closer.
Join-related DataSource types:
1) Add "join", "lookup", and "inline" datasources.
2) Add "getChildren" and "withChildren" methods to DataSource, which will be used
in the future for query rewriting (e.g. inlining of subqueries).
DataSource analysis functionality:
1) Add DataSourceAnalysis class, which breaks down datasources into three components:
outer queries, a base datasource (left-most of the highest level left-leaning join
tree), and other joined-in leaf datasources (the right-hand branches of the
left-leaning join tree).
2) Add "isConcrete", "isGlobal", and "isCacheable" methods to DataSource in order to
support analysis.
Other notes:
1) Renamed DataSource#getNames to DataSource#getTableNames, which I think is clearer.
Also, made it a Set, so implementations don't need to worry about duplicates.
2) The addition of "isCacheable" should work around #8713, since UnionDataSource now
returns false for cacheability.
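The DataSource surface implied by this commit, sketched for orientation (method names are from the text above; the grouping into one listing is mine):
```java
import java.util.List;
import java.util.Set;

public interface DataSource
{
  Set<String> getTableNames();  // renamed from getNames; a Set avoids duplicate handling

  // Tree access, to be used for future query rewriting (e.g., subquery inlining).
  List<DataSource> getChildren();
  DataSource withChildren(List<DataSource> children);

  // Analysis support.
  boolean isConcrete();
  boolean isGlobal();
  boolean isCacheable();  // UnionDataSource returns false, working around #8713
}
```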
* Remove javadoc comment.
* Updates reflecting code review.
* Add comments.
* Add more comments.
* Optimize JoinCondition matching
The LookupJoinMatcher needs to check if a condition is always true or false
multiple times. This can be pre-computed to speed up the match checking
This change reduces the time it takes to perform a join on a long key
from ~36 ms/op to ~23 ms/op.
* Rename variables
* fix typo
* null handling for numeric first/last aggregators, refactor to not extend nullable numeric agg since they are complex typed aggs
* initially null or not based on config
* review stuff, make string first/last consistent with null handling of numeric columns, more tests
* docs
* handle nil selectors, revert to primitive first/last types so groupby v1 works...
* Speed up String first/last aggregators when folding isn't needed.
Examines the value column, and disables fold checking via a needsFoldCheck
flag if that column can't possibly contain SerializablePairLongStrings. This
is helpful because it avoids calling getObject on the value selector when
unnecessary; say, because the time selector didn't yield an earlier or later
value.
* PR comments.
* Move fastLooseChop to StringUtils.
* Add HashJoinSegment, a virtual segment for joins.
An initial step towards #8728. This patch adds enough functionality to implement a joining
cursor on top of a normal datasource. It does not include enough to actually do a query. For
that, future patches will need to wire this low-level functionality into the query language.
* Fixups.
* Fix missing format argument.
* Various tests and minor improvements.
* Changes.
* Remove or add tests for unused stuff.
* Fix up package locations.
* Fix double-checked locking in predicate suppliers in BoundDimFilter
* Fix double-checked locking in predicate suppliers in BoundDimFilter
* 1. Use Suppliers.memoize() to initialize and publish singleton.
2. Fix coding style.
* Fix coding style
* Fix double-checked locking bug for predicate suppliers in InDimFilter
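A minimal sketch of the Suppliers.memoize() approach used in these fixes (class and predicate are illustrative, not the actual BoundDimFilter/InDimFilter code):
```java
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import java.util.function.Predicate;

class PredicateHolder
{
  // Lazy initialization with safe publication, replacing the broken
  // hand-rolled double-checked locking.
  private final Supplier<Predicate<String>> predicateSupplier =
      Suppliers.memoize(PredicateHolder::makePredicate);

  Predicate<String> getPredicate()
  {
    return predicateSupplier.get();
  }

  private static Predicate<String> makePredicate()
  {
    return s -> s != null && !s.isEmpty();  // stand-in for the real predicate
  }
}
```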
* Add FileUtils.createTempDir() and enforce its usage.
The purpose of this is to improve error messages. Previously, the error
message on a nonexistent or unwritable temp directory would be
"Failed to create directory within 10,000 attempts".
* Further updates.
* Another update.
* Remove commons-io from benchmark.
* Fix tests.
* Refactor parallel indexing perfect rollup partitioning
Refactoring to make it easier to later add range partitioning for
perfect rollup parallel indexing. This is accomplished by adding several
new base classes (e.g., PerfectRollupWorkerTask) and new classes for
encapsulating logic that needs to be changed for different partitioning
strategies (e.g., IndexTaskInputRowIteratorBuilder).
The code is functionally equivalent to before except for the following
small behavior changes:
1) PartialSegmentMergeTask: Previously, this task had a priority of
DEFAULT_TASK_PRIORITY. It now has a priority of
DEFAULT_BATCH_INDEX_TASK_PRIORITY (via the new PerfectRollupWorkerTask
base class), since it is a batch index task.
2) ParallelIndexPhaseRunner: A decorator was added to
subTaskSpecIterator to ensure the subtasks are generated with unique
ids. Previously, only tests (i.e., MultiPhaseParallelIndexingTest)
would have this decorator, but this behavior is desired for non-test
code as well.
* Fix forbidden apis and pmd warnings
* Fix analyze dependencies warnings
* Fix IndexTask json and add IT diags
* Fix parallel index supervisor<->worker serde
* Fix TeamCity inspection errors/warnings
* Fix TeamCity inspection errors/warnings again
* Integrate changes with those from #8823
* Address review comments
* Address more review comments
* Fix forbidden apis
* Address more review comments
* Tidy up lifecycle, query, and ingestion logging.
The goal of this patch is to improve the clarity and usefulness of
Druid's logging for cluster operators. For more information, see
https://twitter.com/cowtowncoder/status/1195469299814555648.
Concretely, this patch does the following:
- Changes a lot of INFO logs to DEBUG, and DEBUG to TRACE, with the
goal of reducing redundancy and improving clarity by avoiding
showing rarely-useful log messages. This includes most "starting"
and "stopping" messages, and most messages related to individual
columns.
- Adds new log4j2 templates that show operators how to enable DEBUG
logging for certain important packages.
- Eliminates stack traces for query errors, unless the log level is DEBUG
or more verbose. This is useful because query errors often indicate user
error rather than system error, but dumping a stack trace often gave
operators the impression that there was a system failure.
- Adds task id to Appenderator, AppenderatorDriver thread names. In
the default log4j2 configuration, this will put them in log lines
as well. It's very useful if a user is using the Indexer, where
multiple tasks run in the same JVM.
- More consistent terminology when it comes to "sequences" (sets of
segments that are handed-off together by Kafka ingestion) and
"offsets" (cursors in partitions). These terms had been confused in
some log messages due to the fact that Kinesis calls offsets
"sequence numbers".
- Replaces some ugly toString calls with either JSONification or
something more operator-accessible (like a URL or segment identifier,
instead of JSON object representing the same).
* Adjustments.
* Adjust integration test.
* transformSpec + array expressions
changes:
* added array expression support to transformSpec
* removed ParseSpec.verify, since its only use (as far as I can tell) was preventing transform expressions that did not replace their input from functioning
* hijacked index task test to test changes
* remove docs about being unsupported
* re-arrange test assert
* unused imports
* imports
* fix tests
* preserve types
* suppress warning, fixes, add test
* formatting
* cleanup
* better list to array type conversion and tests
* fix oops
* use peekable iterator for numeric column selector null checking instead of bitmap.get for those sweet sweet nanoseconds
* remove unused method
* slight optimization i think
* remove clone from wrappers since we do not use and is confusing
* fixes and tests
* int instead of Integer
* fix it
* fixes, more tests
* fix
* SQL: EARLIEST, LATEST aggregators.
I chose these names instead of FIRST, LAST because those are already
reserved functions in Calcite that mean something different. I think
these are also better names anyway.
* Finalify.
* SQL updates.
* Adjust aggregator calls.
* Validations, test updates.
* Review docs.
There is a class of bugs due to the fact that BaseObjectColumnValueSelector
has both "getObject" and "isNull" methods, but in most selector implementations
and most call sites, it is clear that the intent of "isNull" is only to apply
to the primitive getters, not the object getter. This makes sense, because the
purpose of isNull is to enable detection of nulls in otherwise-primitive columns.
Imagine a string column with a numeric selector built on top of it. You would
want it to return isNull = true, so numeric aggregators don't treat it as
all zeroes.
Sometimes this design leads people to accidentally guard non-primitive get
methods with "selector.isNull" checks, which is improper.
This patch has three goals:
1) Fix null-handling bugs that already exist in this class.
2) Make interface and doc changes that reduce the probability of future bugs.
3) Fix other, unrelated bugs I noticed in the stringFirst and stringLast
aggregators while fixing null-handling bugs. I thought about splitting this
into its own patch, but it ended up being tough to split from the
null-handling fixes.
For (1) the fixes are,
- Fix StringFirst and StringLastAggregatorFactory to stop guarding getObject
calls on isNull, by no longer extending NullableAggregatorFactory. Now uses
-1 as a sigil value for null, to differentiate nulls and empty strings.
- Fix ExpressionFilter to stop guarding getObject calls on isNull. Also, use
eval.asBoolean() to avoid calling getLong on the selector after already
calling getObject.
- Fix ObjectBloomFilterAggregator to stop guarding DimensionSelector calls
on isNull. Also, refactored slightly to avoid the overhead of calling
getObject followed by another getter (see BloomFilterAggregatorFactory for
part of this).
For (2) the main changes are,
- Remove the "isNull" method from BaseObjectColumnValueSelector.
- Clarify "isNull" doc on BaseNullableColumnValueSelector.
- Rename NullableAggregatorFactory -> NullableNumericAggregatorFactory to emphasize
that it only works on aggregators that take numbers as input.
- Similar naming changes to the Aggregator, BufferAggregator, and AggregateCombiner.
- Similar naming changes to helper methods for groupBy, ValueMatchers, etc.
For (3) the other fixes for StringFirst and StringLastAggregatorFactory are,
- Fixed buffer overrun in the buffer aggregators when some characters in the string
encode into more than one byte (the old code used "substring" to apply a byte limit,
which is bad). I did this by introducing a new StringUtils.toUtf8WithLimit method.
- Fixed weird IncrementalIndex logic that led to reading nulls for the timestamp.
- Adjusted weird StringFirst/Last logic that worked around the weird IncrementalIndex
behavior.
- Refactored to share code between the four aggregators.
- Improved test coverage.
- Made the base stringFirst, stringLast aggregators adaptive, and streamlined the
xFold versions into aliases. The adaptiveness is similar to how other aggregators
like hyperUnique work.
* sketch of broker parallel merges done in small batches on fork join pool
* fix non-terminating sequences, auto compute parallelism
* adjust benches
* adjust benchmarks
* now hella more faster, fixed dumb
* fix
* remove comments
* log.info for debug
* javadoc
* safer block for sequence to yielder conversion
* refactor LifecycleForkJoinPool into LifecycleForkJoinPoolProvider which wraps a ForkJoinPool
* smooth yield rate adjustment, more logs to help tune
* cleanup, less logs
* error handling, bug fixes, on by default, more parallel, more tests
* remove unused var
* comments
* timeboundary mergeFn
* simplify, more javadoc
* formatting
* pushdown config
* use nanos consistently, move logs back to debug level, bit more javadoc
* static terminal result batch
* javadoc for nullability of createMergeFn
* cleanup
* oops
* fix race, add docs
* spelling, remove todo, add unhandled exception log
* cleanup, revert unintended change
* another unintended change
* review stuff
* add ParallelMergeCombiningSequenceBenchmark, fixes
* hyper-threading is the enemy
* fix initial start delay, lol
* parallelism computer now balances partition sizes to partition counts using sqrt of sequence count instead of sequence count by 2
* fix those important style issues with the benchmarks code
* lazy sequence creation for benchmarks
* more benchmark comments
* stable sequence generation time
* update defaults to use 100ms target time, 4096 batch size, 16384 initial yield, also update user docs
* add jmh thread based benchmarks, cleanup some stuff
* oops
* style
* add spread to jmh thread benchmark start range, more comments to benchmarks parameters and purpose
* retool benchmark to allow modeling more typical heterogeneous heavy workloads
* spelling
* fix
* refactor benchmarks
* formatting
* docs
* add maxThreadStartDelay parameter to threaded benchmark
* why does catch need to be on its own line but else doesnt
* remove select query
* thanks teamcity
* oops
* oops
* add back a SelectQuery class that throws RuntimeExceptions linking to docs
* adjust text
* update docs per review
* deprecated
* Stateful auto compaction
* javadoc
* add removed test back
* fix test
* adding indexSpec to compactionState
* fix build
* add lastCompactionState
* address comments
* extract CompactionState
* fix doc
* fix build and test
* Add a task context to store compaction state; add javadoc
* fix it test
* groupBy query: optional limit push down to segment scan
* make segment level limit push down configurable
* fix teamcity errors
* fix segment limit pushdown flag handling on query level config override
* use equals for comparator check
* fix sql and null handling
* fix unused imports
* handle null offset in NullableValueGroupByColumnSelectorStrategy for buffer comparator similar to RowBasedGrouperHelper.NullableRowBasedKeySerdeHelper
* add timeout support for JsonParserIterator init future
* add queryId
* should be less than 1
* fix
* fix npe
* fix lgtm
* adjust exception, nullable
* fix test
* refactor
* revert queryId change
* add log.warn to tie exception to json parser iterator
* update DimensionDictionarySelector.getValueCardinality() javadoc
* unknown cardinality in StringDictionaryEncodedColumn dim selector
* revert StringDictionaryEncodedColumn change as that fails GroupBy-v1 execution for many working queries
* fix/add more comments
* string column handling for long min/max/sum aggregators
* add apache license to new files
* use 'L' as suffix for long literal instead of 'l'
* return null in ParallelCombiner.SettableColumnSelectorFactory.getColumnCapabilities(String) as required by the contract of the ColumnSelectorFactory interface
* fix more tests
* LoggingEmitter: print event as json
* use DefaultRequestLogEventBuilderFactory in emitting request logger by default
* print context in query metric as json
* removed unused jsonMapper from DefaultQueryMetrics
* add comment
* remove change to DefaultRequestLogEventBuilderFactory.java
* Fallback to parsing classpath for hadoop task in Java 9+
In Java 9 and above we cannot assume that the system classloader is an
instance of URLClassLoader. This change adds a fallback method to parse
the system classpath in that case, and adds a unit test to validate it matches
what JDK8 would do.
Note: This has not been tested in an actual hadoop setup, so this is mostly
to help us pass unit tests.
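The fallback, sketched (the property-parsing detail is an assumption about the approach, not the exact code):
```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ClasspathSketch
{
  static List<URL> systemClasspathUrls() throws MalformedURLException
  {
    final ClassLoader loader = ClassLoader.getSystemClassLoader();
    if (loader instanceof URLClassLoader) {
      // Java 8: the system classloader exposes its URLs directly.
      return Arrays.asList(((URLClassLoader) loader).getURLs());
    }
    // Java 9+: reconstruct the classpath from the system property instead.
    final List<URL> urls = new ArrayList<>();
    for (String entry : System.getProperty("java.class.path").split(File.pathSeparator)) {
      urls.add(new File(entry).toURI().toURL());
    }
    return urls;
  }
}
```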
* Remove granularity test of dubious value
One of our granularity tests relies on the system classloader being a URLClassLoader to
catch a bug related to class initialization and static initializers using a subclass (see
#2979).
This test was added to catch a potential regression, but it assumes we would add back
the same type of static initializers to this specific class, so it seems to be of dubious value
as a unit test and mostly serves to illustrate the bug.
relates to #5589
When building column/dimension selectors, calling computeIfAbsent can
cause the applied function to modify the same cache through virtual
column references. The JDK11 map implementation detects this change and
will throw an exception.
This fix, while not as elegant, breaks the single call into two
steps to avoid this problem.
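Sketched with hypothetical names (selectorCache, makeSelector):
```java
import java.util.Map;
import java.util.function.Function;

class TwoStepLookup
{
  static Object getOrMakeSelector(
      Map<String, Object> selectorCache,
      String columnName,
      Function<String, Object> makeSelector
  )
  {
    Object selector = selectorCache.get(columnName);
    if (selector == null) {
      // The factory may re-enter selectorCache through virtual column references;
      // doing that inside computeIfAbsent's mapping function triggers a
      // ConcurrentModificationException on JDK 9+ map implementations.
      selector = makeSelector.apply(columnName);
      selectorCache.put(columnName, selector);
    }
    return selector;
  }
}
```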
* checkstyle for constant field name
* merging with upstream
* review-1
* unknown changes
* review-2
* merging with master
* review-2 1 changes
* review changes-2 2
* bug fix
* use Number instead of long for response context to be forgiving of json serde to int or long
* test that encounters issue without fix
* now with more test
* is ints
* make double sum/min/max agg work on string columns
* style and compilation fixes
* fix tests
* address review comments
* add comment on SimpleDoubleAggregatorFactory
* make checkstyle happy
* Refactored ResponseContext and aggregated its keys into Enum
* Added unit tests for ResponseContext and refactored the serialization
* Removed unused methods
* Fixed code style
* Made SerializationResult static
* Updated according to the PR discussion:
Renamed an argument
Updated comparator
Replaced Pair usage with Map.Entry
Added a comment about quadratic complexity
Removed boolean field with an expression
Renamed SerializationResult field
Renamed the method merge to add and renamed several context keys
Renamed field and method related to scanRowsLimit
Updated a comment
Simplified a block of code
Renamed a variable
* Added JsonProperty annotation to renamed ScanQuery field
* Extension-friendly context key implementation
* Refactored ResponseContext: updated delegate type, comments and exceptions
Reducing serialized context length by removing some of its
collection elements
* Fixed tests
* Simplified response context truncation during serialization
* Extracted a method of removing elements from a response context and
added some comments
* Fixed typos and updated comments
* Add IPv4 druid expressions
New druid expressions for filtering IPv4 addresses:
- ipv4address_match: Check if IP address belongs to a subnet
- ipv4address_parse: Convert string IP address to long
- ipv4address_stringify: Convert long IP address to string
These expressions operate on IP addresses represented as either strings
or longs, so that they can be applied to dimensions with mixed
representation of IP addresses. The filtering is more efficient when
operating on IP addresses as longs. In other words, the intended use
case is:
1) Use ipv4address_parse to convert to long at ingestion time
2) Use ipv4address_match to filter (on longs) at query time
3) Use ipv4address_stringify to convert to (readable) string at query
time
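Illustrative expression strings for that pipeline (the column names ip and ip_long, and the exact argument forms, are assumptions):
```java
// At ingestion time: store the address as a long.
String parseAtIngest = "ipv4address_parse(ip)";                        // string -> long
// At query time: filter on the long representation.
String matchAtQuery = "ipv4address_match(ip_long, '192.168.0.0/16')"; // subnet membership
// At query time: render a readable address.
String stringify = "ipv4address_stringify(ip_long)";                  // long -> string
```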
* Fix licenses and null handling
* Simplify IPv4 expressions
* Fix tests
* Fix check for valid ipv4 address string
* GroupBy array-based result rows.
Fixes #8118; see that proposal for details.
Other than the GroupBy changes, the main other "interesting" classes are:
- ResultRow: The array-based result type.
- BaseQuery: T is no longer required to be Comparable.
- QueryToolChest: Adds "decorateObjectMapper" to enable query-aware serialization
and deserialization of result rows (necessary due to their positional nature).
- QueryResource: Uses the new decoration functionality.
- DirectDruidClient: Also uses the new decoration functionality.
- QueryMaker (in Druid SQL): Modifications to read ResultRows.
These classes weren't changed, but got some new javadocs:
- BySegmentQueryRunner
- FinalizeResultsQueryRunner
- Query
* Adjustments for TC stuff.
* remove unnecessary lock in ForegroundCachePopulator leading to a lot of contention
* mutableboolean, javadocs, document some cache configs that were missing
* more doc stuff
* adjustments
* remove background documentation
* 1. Added TimestampExtractExprMacro.Unit for MILLISECOND 2. expr eval for MILLISECOND 3. Added a test case to test extracting millisecond from expression. #7935
* 1. Adding DATASOURCE4 in tests. 2. Adding test TimeExtractWithMilliseconds
* Fixing testInformationSchemaTables test
* Fixing failing tests in DruidAvaticaHandlerTest
* Adding cannotVectorize() call before the test
* Extract time function - Adding support for MICROSECOND, ISODOW, ISOYEAR and CENTURY time units, documentation changes.
* Adding MILLISECOND in test case
* Adding support DECADE and MILLENNIUM, updating test case and documentation
* Fixing expression eval for DECADE and MILLENNIUM
* add CachingClusteredClient benchmark, refactor some stuff
* revert WeightedServerSelectorStrategy to ConnectionCountServerSelectorStrategy and remove getWeight since felt artificial, default mergeResults in toolchest implementation for topn, search, select
* adjust javadoc
* adjustments
* oops
* use it
* use BinaryOperator, remove CombiningFunction, use Comparator instead of Ordering, other review adjustments
* rename createComparator to createResultComparator, fix typo, firstNonNull nullable parameters
* doc updates and changes to use the CollectionUtils.mapValues utility method
* Add Structural Search patterns to intelliJ
* refactoring from PR comments
* put -> putIfAbsent
* do single key lookup
* Benchmarks: New SqlBenchmark, add caching & vectorization to some others.
- Introduce a new SqlBenchmark geared towards benchmarking a wide
variety of SQL queries. Rename the old SqlBenchmark to
SqlVsNativeBenchmark.
- Add (optional) caching to SegmentGenerator to enable easier
benchmarking of larger segments.
- Add vectorization to FilteredAggregatorBenchmark and GroupByBenchmark.
* Query vectorization.
This patch includes vectorized timeseries and groupBy engines, as well
as some analogs of your favorite Druid classes:
- VectorCursor is like Cursor. (It comes from StorageAdapter.makeVectorCursor.)
- VectorColumnSelectorFactory is like ColumnSelectorFactory, and it has
methods to create analogs of the column selectors you know and love.
- VectorOffset and ReadableVectorOffset are like Offset and ReadableOffset.
- VectorAggregator is like BufferAggregator.
- VectorValueMatcher is like ValueMatcher.
There are some noticeable differences between vectorized and regular
execution:
- Unlike regular cursors, vector cursors do not understand time
granularity. They expect query engines to handle this on their own,
which a new VectorCursorGranularizer class helps with. This is to
avoid too much batch-splitting and to respect the fact that vector
selectors are somewhat more heavyweight than regular selectors.
- Unlike FilteredOffset, FilteredVectorOffset does not leverage indexes
for filters that might partially support them (like an OR of one
filter that supports indexing and another that doesn't). I'm not sure
that this behavior is desirable anyway (it is potentially too eager)
but, at any rate, it'd be better to harmonize it between the two
classes. Potentially they should both do some different thing that
is smarter than what either of them is doing right now.
- When vector cursors are created by QueryableIndexCursorSequenceBuilder,
they use a morphing binary-then-linear search to find their start and
end rows, rather than linear search.
Limitations in this patch are:
- Only timeseries and groupBy have vectorized engines.
- GroupBy doesn't handle multi-value dimensions yet.
- Vector cursors cannot handle virtual columns or descending order.
- Only some filters have vectorized matchers: "selector", "bound", "in",
"like", "regex", "search", "and", "or", and "not".
- Only some aggregators have vectorized implementations: "count",
"doubleSum", "floatSum", "longSum", "hyperUnique", and "filtered".
- Dimension specs other than "default" don't work yet (no extraction
functions or filtered dimension specs).
Currently, the testing strategy includes adding vectorization-enabled
tests to TimeseriesQueryRunnerTest, GroupByQueryRunnerTest,
GroupByTimeseriesQueryRunnerTest, CalciteQueryTest, and all of the
filtering tests that extend BaseFilterTest. In all of those classes,
there are some test cases that don't support vectorization. They are
marked by special function calls like "cannotVectorize" or "skipVectorize"
that tell the test harness to either expect an exception or to skip the
test case.
Testing should be expanded in the future -- a project in and of itself.
Related to #3011.
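For orientation, a minimal engine-side loop over a vector cursor; the class names mirror those listed above, but the exact method signatures here are assumptions:
```java
// Sums a hypothetical long column "x" one batch at a time.
static long sumColumn(StorageAdapter adapter, int vectorSize)
{
  long sum = 0;
  try (VectorCursor cursor =
           adapter.makeVectorCursor(null, adapter.getInterval(), VirtualColumns.EMPTY, false, vectorSize, null)) {
    VectorValueSelector selector = cursor.getColumnSelectorFactory().makeValueSelector("x");
    while (!cursor.isDone()) {
      long[] values = selector.getLongVector();
      int size = cursor.getCurrentVectorSize();  // the last batch may be partial
      for (int i = 0; i < size; i++) {
        sum += values[i];  // an entire batch is processed per loop iteration
      }
      cursor.advance();
    }
  }
  return sum;
}
```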
* WIP
* Adjustments for unused things.
* Adjust javadocs.
* DimensionDictionarySelector adjustments.
* Add "clone" to BatchIteratorAdapter.
* ValueMatcher javadocs.
* Fix benchmark.
* Fixups post-merge.
* Expect exception on testGroupByWithStringVirtualColumn for IncrementalIndex.
* BloomDimFilterSqlTest: Tag two non-vectorizable tests.
* Minor adjustments.
* Update surefire, bump up Xmx in Travis.
* Some more adjustments.
* Javadoc adjustments
* AggregatorAdapters adjustments.
* Additional comments.
* Remove switching search.
* Only missiles.
Make static imports forbidden in tests and remove all occurrences to be
consistent with the non-test code.
Also, various changes to files affected by above:
- Reformat to adhere to druid style guide
- Fix various IntelliJ warnings
- Fix various SonarLint warnings (e.g., the expected/actual args to
Assert.assertEquals() were flipped)
* GroupBy: Fix improper uses of StorageAdapter#getColumnCapabilities.
1) A usage in "isArrayAggregateApplicable" that would potentially incorrectly use
array-based aggregation on a virtual column that shadows a real column.
2) A usage in "process" that would potentially use the more expensive multi-value
aggregation path on a singly-valued virtual column. (No correctness issue, but
a performance issue.)
* Add addl javadoc.
* ExpressionVirtualColumn: Set multi-value flag.
* more sql support for expression array functions
* prepend/slice
* doc fixes
* fix imports
* fix tests
* add null numeric expr for proper conversions between ExprEval and Expr and back to ExprEval
* re-arrange
* imports :(
* add append/prepend test
* array support for expression language for multi-value string columns
* fix tests?
* fixes
* more tests
* fixes
* cleanup
* more better, more test
* ignore inspection
* license
* license fix
* inspection
* remove dumb import
* more better
* some comments
* add expr rewrite for arrayfn args for more magic, tests
* test stuff
* more tests
* fix test
* fix test
* castfunc can deal with arrays
* needs more empty array
* more tests, make cast to long array more forgiving
* refactor
* simplify ExprMacro Expr implementations with base classes in core
* oops
* more test
* use Shuttle for Parser.flatten, javadoc, cleanup
* fixes and more tests
* unused import
* fixes
* javadocs, cleanup, refactors
* fix imports
* more javadoc
* more javadoc
* more
* more javadocs, nonnullbydefault, minor refactor
* markdown fix
* adjustments
* more doc
* move initial filter out
* docs
* map empty arg lambda, apply function argument validation
* check function args at parse time instead of eval time
* more immutable
* more more immutable
* clarify grammar
* fix docs
* empty array is string test, we need a way to make arrays better maybe in the future, or define empty arrays as other types..
* AggregatorUtil should cache parsed expression to avoid memory problem (OOM/FGC) when Expression is used in metricsSpec
* remove debug log check in Parser.parse
* remove cache and use Suppliers.memoize
* Add checkstyle for "Local variable names shouldn't start with capital"
* Adjust some local variables to constants
* Replace StringUtils.LINE_SEPARATOR with System.lineSeparator()
* VirtualColumn updates for exploiting base column internal structure
* unit tests for virtual column interface updates
* groupBy needs to use VirtualizedColumnSelectorFactory if outer query in
nested groupBy has virtual columns.
* fix strict compile checks
* fix teamcity build errors
* add comment explaining useVirtualizedColumnSelectorFactory flag in RowBasedGrouperHelper.createGrouperAccumulatorPair(..)
* make ComplexColumn an interface and ExtensionPoint
* incorporate review comments
* make ColumnValueSelector @ExtensionPoint
* more java docs
* add close() method to ComplexColumn interface
* Bump Checkstyle to 8.20
Moderate severity vulnerability that affects:
com.puppycrawl.tools:checkstyle
Checkstyle prior to 8.18 loads external DTDs by default,
which can potentially lead to denial of service attacks
or the leaking of confidential information.
Affected versions: < 8.18
* Oops, missed one
* Oops, missed a few
* Set direct memory if unable to detect JVM config
Java 9 and above prevent us from detecting the maximum available direct
memory.
This change adds a fallback method to use at most 25% of maximum heap
size, which should be a reasonable default.
Unless -XX:MaxDirectMemorySize is set, recent JVMs will default maximum
direct memory to match the maximum heap size, so this should work out of
the box in most cases. For completeness we print instructions in the log
to explain how to adjust settings if necessary.
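The policy in sketch form; the detection probe is left abstract since the real lookup is JVM-internal and version-dependent:
```java
// detectedMaxDirectMemory is null when the JVM setting cannot be read (Java 9+).
static long directMemorySizeBytes(Long detectedMaxDirectMemory)
{
  if (detectedMaxDirectMemory != null) {
    return detectedMaxDirectMemory;
  }
  // Fallback: at most 25% of the maximum heap size.
  return Runtime.getRuntime().maxMemory() / 4;
}
```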
* skip test rather than succeeding
* reword log message
Co-Authored-By: Himanshu <g.himanshu@gmail.com>
* First set of changes for tDigest histogram
* Add license
* Address code review comments
* Add a doc page for new T-Digest sketch aggregators. Minor code cleanup and comments.
* Remove synchronization from BufferAggregators. Address code review comments
* Fix typo
* Fix exception when using complex aggs with result level caching
* Add test comments
* checkstyle
* Add helper function for getting aggs from cache
* Move method to CacheStrategy
* Revert QueryToolChest changes
* Update test comments
* update easymock / powermock to 4.0.2 / 2.0.2 for JDK11 support
* update tests to use new easymock interfaces
* fix tests failing due to easymock fixes
* remove dependency on jmockit
* fix race condition in ResourcePoolTest
* Java 9 compatible specialized class compilation
We currently use Unsafe.defineClass to compile specialized classes,
which has been removed in Java 9 and above. This change switches to
MethodHandles.Lookup.defineClass at runtime, which provides similar
functionality in newer JDK versions.
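The replacement call in isolation (available since Java 9; note it defines the class in the lookup's own class loader and package, a constraint Unsafe.defineClass did not impose):
```java
import java.lang.invoke.MethodHandles;

class DefineClassSketch
{
  static Class<?> defineSpecializedClass(byte[] classBytes) throws IllegalAccessException
  {
    return MethodHandles.lookup().defineClass(classBytes);
  }
}
```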
* add comments
* fix incorrect comment
* add unsafe utility class
* make comments java-doc style
* fix checkstyle errors
* rename unsafe -> unsafeutil
* move defineClass method to utility class
* rename unsafeutil -> unsafeutils to match other utility class names
* remove extra lookup method
* add utiliy class docs
* more comments
* minor comments and formatting
* Initial commit
* Added test for int to long conversion
* Add appenderator test for realtime scan query
* get rid of todo
* Fix forbidden apis
* Jon's recommendations
* Formatting
* Make JavaScript and XML errors non-TeamCity errors; Update JavaScript language level to ES6 in IntelliJ settings
* Add license comment to assembly-2.0.0.xsd
* Add .idea/README.md with comments
* Add SegmentDescriptor interval to the hash while calculating ETag
* Add computeResultLevelCacheKey to CacheStrategy
Make HavingSpec cacheable and implement getCacheKey for subclasses
Add unit tests for computeResultLevelCacheKey
* Add more tests
* Use CacheKeyBuilder for HavingSpec's getCacheKey
* Initialize aggregators map to avoid NPE
* adjust cachekey builder for HavingSpec to ignore aggregators
* unused import
* PR comments
* Update scan query runner factory to accept SpecificSegmentSpec
* nit
* Sorry travis
* Improve logging and fix doc
* Bug fix
* Friendlier error msgs and tests to cover bug
* Address Gian's comments
* Fix doc
* Added tests for empty and null column list
* Style
* Fix checking wrong order (looking at query param when it should be
looking at the null-handled order)
* Add test case for null order
* Fix ScanQueryRunnerTest
* Forbidden APIs fixed
* refactor lookups to be more chill to router
* remove accidental change
* fix and combine LookupIntrospectionResourceTest
* fix inspection
* rename RouterLookupModule to LookupSerdeModule and RouterLookupExtractorFactoryContainerProvider to NoopLookupExtractorFactoryContainerProvider
* make comment generic
* use ConfigResourceFilter instead of StateResourceFilter
* fix indentation
* unused import
* another unused import
* refactor some stuff into processing module, split up LookupModule.java classes into their own files
* Moved Scan Builder to Druids class and started on Scan Benchmark setup
* Need to form queries
* It runs.
* Stuff for time-ordered scan query
* Move ScanResultValue timestamp comparator to a separate class for testing
* Licensing stuff
* Change benchmark
* Remove todos
* Added TimestampComparator tests
* Change number of benchmark iterations
* Added time ordering to the scan benchmark
* Changed benchmark params
* More param changes
* Benchmark param change
* Made Jon's changes and removed TODOs
* Broke some long lines into two lines
* nit
* Decrease segment size for less memory usage
* Wrote tests for heapsort scan result values and fixed bug where iterator
wasn't returning elements in correct order
* Wrote more tests for scan result value sort
* Committing a param change to kick teamcity
* Fixed codestyle and forbidden API errors
* .
* Improved conciseness
* nit
* Created an error message for when someone tries to time order a result
set > threshold limit
* Set to spaces over tabs
* Fixing tests WIP
* Fixed failing calcite tests
* Kicking travis with change to benchmark param
* added all query types to scan benchmark
* Fixed benchmark queries
* Renamed sort function
* Added javadoc on ScanResultValueTimestampComparator
* Unused import
* Added more javadoc
* improved doc
* Removed unused import to satisfy PMD check
* Small changes
* Changes based on Gian's comments
* Fixed failing test due to null resultFormat
* Added config and get # of segments
* Set up time ordering strategy decision tree
* Refactor and pQueue works
* Cleanup
* Ordering is correct on n-way merge -> still need to batch events into
ScanResultValues
* WIP
* Sequence stuff is so dirty :(
* Fixed bug introduced by replacing deque with list
* Wrote docs
* Multi-historical setup works
* WIP
* Change so batching only occurs on broker for time-ordered scans
Restricted batching to broker for time-ordered queries and adjusted
tests
Formatting
Cleanup
* Fixed mistakes in merge
* Fixed failing tests
* Reset config
* Wrote tests and added Javadoc
* Nit-change on javadoc
* Checkstyle fix
* Improved test and appeased TeamCity
* Sorry, checkstyle
* Applied Jon's recommended changes
* Checkstyle fix
* Optimization
* Fixed tests
* Updated error message
* Added error message for UOE
* Renaming
* Finish rename
* Smarter limiting for pQueue method
* Optimized n-way merge strategy
* Rename segment limit -> segment partitions limit
* Added a bit of docs
* More comments
* Fix checkstyle and test
* Nit comment
* Fixed failing tests -> allow usage of all types of segment spec
* Fixed failing tests -> allow usage of all types of segment spec
* Revert "Fixed failing tests -> allow usage of all types of segment spec"
This reverts commit ec470288c7.
* Revert "Merge branch '6088-Time-Ordering-On-Scans-N-Way-Merge' of github.com:justinborromeo/incubator-druid into 6088-Time-Ordering-On-Scans-N-Way-Merge"
This reverts commit 57033f36df, reversing
changes made to 8f01d8dd16.
* Check type of segment spec before using for time ordering
* Fix bug in numRowsScanned
* Fix bug messing up count of rows
* Fix docs and flipped boolean in ScanQueryLimitRowIterator
* Refactor n-way merge
* Added test for n-way merge
* Refixed regression
* Checkstyle and doc update
* Modified sequence limit to accept longs and added test for long limits
* doc fix
* Implemented Clint's recommendations
* Throw caught exception.
* Throw caught exceptions.
* Related checkstyle rule is added to prevent further bugs.
* RuntimeException() is used instead of Throwables.propagate().
* Missing import is added.
* Throwables are propagated if possible.
* Checkstyle definition is improved.
* Throwables.propagate() usages are removed.
* Checkstyle pattern is changed for only scanning "Throwables.propagate(" instead of checking lookbehind.
* Throwable is kept before firing a Runtime Exception.
* Fix unused assignments.
* Locale problem that failed tests is fixed.
* Forbidden apis definition is improved to prevent using com.ibm.icu.text.SimpleDateFormat and com.ibm.icu.text.DateFormatSymbols without specifying a Locale.
* Error message is improved.
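The propagation pattern that replaces Throwables.propagate(), sketched with a generic task (the Callable wrapper is illustrative):
```java
import java.util.concurrent.Callable;

class PropagateSketch
{
  static void run(Callable<Void> task)
  {
    try {
      task.call();
    }
    catch (RuntimeException e) {
      throw e;                        // unchecked exceptions propagate as-is
    }
    catch (Exception e) {
      throw new RuntimeException(e);  // keep the throwable as the cause before firing
    }
  }
}
```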
Similar to other bugs fixed in #6220, but this one was missed. This bug would
cause "extraction" dimensionSpecs on the "__time" column with non-STRING
outputTypes to potentially be output as STRING sometimes instead of LONG,
causing incompletely merged results.
For selectors with internal caches (like SingleScanTimeDimensionSelector,
SingleLongInputCachingExpressionColumnValueSelector, etc) we can get a perf
boost and memory usage decrease by sharing selectors.
* Added checkstyle for "Methods starting with Capital Letters" and changed the method names violating this.
* Un-abbreviate the method names in the calcite tests
* Fixed checkstyle errors
* Changed asserts position in the code
* Prohibit assigning concurrent maps into Map-types variables and fields; Fix a race condition in CoordinatorRuleManager; improve logic in DirectDruidClient and ResourcePool
* Enforce that if compute(), computeIfAbsent(), computeIfPresent() or merge() is called on a ConcurrentHashMap, it's stored in a ConcurrentHashMap-typed variable, not ConcurrentMap; add comments explaining get()-before-computeIfAbsent() optimization; refactor Counters; fix a race condition in Initialization.java
* Remove unnecessary comment
* Checkstyle
* Fix getFromExtensions()
* Add a reference to the comment about guarded computeIfAbsent() optimization; IdentityHashMap optimization
* Fix UriCacheGeneratorTest
* Workaround issue with MaterializedViewQueryQueryToolChest
* Strengthen Appenderator's contract regarding concurrency
* fix issue with SingleLongInputCachingExpressionColumnValueSelector when sql compatible null handling enabled
* add test with doubles to show same behavior for floats/doubles that lack the optimization of longs
* simplify
* fix import
* blooming aggs
* partially address review
* fix docs
* minor test refactor after rebase
* use copied bloomkfilter
* add ByteBuffer methods to BloomKFilter to allow agg to use in place, simplify some things, more tests
* add methods to BloomKFilter to get number of set bits, use in comparator, fixes
* more docs
* fix
* fix style
* simplify bloomfilter bytebuffer merge, change methods to allow passing buffer offsets
* oof, more fixes
* more sane docs example
* fix it
* do the right thing in the right place
* formatting
* fix
* avoid conflict
* typo fixes, faster comparator, docs for comparator behavior
* unused imports
* use buffer comparator instead of deserializing
* striped readwrite lock for buffer agg, null handling comparator, other review changes
* style fixes
* style
* remove sync for now
* oops
* consistency
* inspect runtime shape of selector instead of selector plus, static comparator, add inner exception on serde exception
* CardinalityBufferAggregator inspect selectors instead of selectorPluses
* fix style
* refactor away from using ColumnSelectorPlus and ColumnSelectorStrategyFactory to instead use specialized aggregators for each supported column type, other review comments
* adjustment
* fix teamcity error?
* rename nil aggs to empty, change empty agg constructor signature, add comments
* use stringutils base64 stuff to be chill with master
* add aggregate combiner, comment
* Add a few methods about base64 into StringUtils
* Use `java.util.Base64` instead of others
* Add org.apache.commons.codec.binary.Base64 & com.google.common.io.BaseEncoding into druid-forbidden-apis
* Rename encodeBase64String & decodeBase64String
* Update druid-forbidden-apis
* Replacing Math.random() with ThreadLocalRandom.current().nextDouble()
* Added java.lang.Math#random() in forbidden-apis.txt
* Minor change in the message - druid-forbidden-apis.txt
* use SqlLifecycle to manage sql execution, add sqlId
* add sql request logger
* fix UT
* rename sqlId to sqlQueryId, sql/time to sqlQuery/time, etc
* add docs and more sql request logger impls
* add UT for http and jdbc
* fix forbidden use of com.google.common.base.Charsets
* fix UT in QuantileSqlAggregatorTest, suppressed unused warning of getSqlQueryId
* do not use default method in QueryMetrics interface
* capitalize 'sql' everywhere in the non-property parts of the docs
* use RequestLogger interface to log sql query
* minor bugfixes and add switching request logger
* add filePattern configs for FileRequestLogger
* address review comments, adjust sql request log format
* fix inspection error
* try SuppressWarnings("RedundantThrows") to fix inspection error on ComposingRequestLoggerProvider