* Fix equalsAndHashCode in ClientCompactQueryTuningConfig
This change introduces a dependency on EqualsVerifier for the test scope.
The dependency is licensed under Apache 2. The library makes it trivial
to add equals and hashCode checks, preventing bugs like this from
happening in the future.
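For reference, the resulting test is essentially a one-liner; a minimal
sketch, assuming default verifier settings (the exact options used may
differ):

    import nl.jqno.equalsverifier.EqualsVerifier;
    import org.junit.Test;

    public class ClientCompactQueryTuningConfigTest
    {
      @Test
      public void testEqualsContract()
      {
        // Verifies the full equals/hashCode contract, including that
        // every field participates, so a newly added field cannot be
        // silently left out again.
        EqualsVerifier.forClass(ClientCompactQueryTuningConfig.class)
                      .usingGetClass()
                      .verify();
      }
    }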
* fix checkstyle
* fix test name
* Parallel indexing single dim partitions
Implements single dimension range partitioning for native parallel batch
indexing as described in #8769. This initial version requires the
druid-datasketches extension to be loaded.
The algorithm has 5 phases that are orchestrated by the supervisor in
`ParallelIndexSupervisorTask#runRangePartitionMultiPhaseParallel()`.
These phases and the main classes involved are described below:
1) In parallel, determine the distribution of dimension values for each
input source split.
`PartialDimensionDistributionTask` uses `StringSketch` to generate
the approximate distribution of dimension values for each input
source split. If the rows are ungrouped,
`PartialDimensionDistributionTask.UngroupedRowDimensionValueFilter`
uses a Bloom filter to skip rows that would be grouped. The final
distribution is sent back to the supervisor via
`DimensionDistributionReport`.
2) The range partitions are determined.
In `ParallelIndexSupervisorTask#determineAllRangePartitions()`, the
supervisor uses `StringSketchMerger` to merge the individual
`StringSketch`es created in the preceding phase. The merged sketch is
then used to create the range partitions.
3) In parallel, generate partial range-partitioned segments.
`PartialRangeSegmentGenerateTask` uses the range partitions
determined in the preceding phase and
`RangePartitionCachingLocalSegmentAllocator` to generate
`SingleDimensionShardSpec`s. The partition information is sent back
to the supervisor via `GeneratedGenericPartitionsReport`.
4) The partial range segments are grouped.
In `ParallelIndexSupervisorTask#groupGenericPartitionLocationsPerPartition()`,
the supervisor creates the `PartialGenericSegmentMergeIOConfig`s
necessary for the next phase.
5) In parallel, merge partial range-partitioned segments.
`PartialGenericSegmentMergeTask` uses `GenericPartitionLocation` to
retrieve the partial range-partitioned segments generated earlier and
then merges and publishes them.
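To make phase 2 concrete, here is a toy sketch of turning a merged
distribution into partition boundaries; it stands in for the real
StringSketch logic and is not Druid's code:

    import java.util.ArrayList;
    import java.util.List;

    final class ToyRangePartitioner
    {
      // Given a sorted sample of dimension values (the merged "sketch"),
      // pick partitionCount - 1 evenly spaced values as the boundaries
      // between range partitions.
      static List<String> computeBoundaries(List<String> sortedSample, int partitionCount)
      {
        final List<String> boundaries = new ArrayList<>();
        for (int i = 1; i < partitionCount; i++) {
          boundaries.add(sortedSample.get(sortedSample.size() * i / partitionCount));
        }
        return boundaries;  // partition k covers [boundary k-1, boundary k)
      }
    }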
* Fix dependencies & forbidden apis
* Fixes for integration test
* Address review comments
* Fix docs, strict compile, sketch check, rollup check
* Fix first shard spec, partition serde, single subtask
* Fix first partition check in test
* Misc rewording/refactoring to address code review
* Fix doc link
* Split batch index integration test
* Do not run parallel-batch-index twice
* Adjust last partition
* Split ITParallelIndexTest to reduce runtime
* Rename test class
* Allow null values in range partitions
* Indicate which phase failed
* Improve asserts in tests
* Address security vulnerabilities CVSS >= 7
Update dependencies to address security vulnerabilities with CVSS scores
of 7 or higher. A new Travis CI job is added to prevent new
high/critical security vulnerabilities from being introduced.
Updated dependencies:
- api-util 1.0.0 -> 1.0.3
- jackson 2.9.10 -> 2.10.1
- kafka 2.1.0 -> 2.1.1
- libthrift 0.10.0 -> 0.13.0
- protobuf 3.2.0 -> 3.11.0
The following high/critical security vulnerabilities are currently
suppressed (so that the new Travis CI job can be added now) and are left
as future work to fix:
- hibernate-validator:5.2.5
- jackson-mapper-asl:1.9.13
- libthrift:0.6.1
- netty:3.10.6
- nimbus-jose-jwt:4.41.1
* Rename EDL1 license file
* Fix inspection errors
* first steps
* clean licenses
* fix capabilities
* fix specs
* more tests
* new web console on coordinator and overlord; remove setup for old consoles and old configs
* better message
* update licenses
* sync license files
* more button
* fix tslint issue
* jetty-rewrite dependency to add redirects for old console paths
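For context, a jetty-rewrite redirect rule looks roughly like the
following; the paths shown are hypothetical, not the exact console paths
added:

    import org.eclipse.jetty.rewrite.handler.RedirectPatternRule;
    import org.eclipse.jetty.rewrite.handler.RewriteHandler;
    import org.eclipse.jetty.server.Server;

    final class OldConsoleRedirects
    {
      static void install(Server server)
      {
        RewriteHandler rewrite = new RewriteHandler();
        RedirectPatternRule rule = new RedirectPatternRule();
        rule.setPattern("/old-console/*");         // hypothetical old path
        rule.setLocation("/unified-console.html"); // hypothetical new path
        rewrite.addRule(rule);
        rewrite.setHandler(server.getHandler());   // keep existing handlers
        server.setHandler(rewrite);
      }
    }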
* put dependency in the right place
* fix overlord detection
* fix notices, dedupe licenses
* make segment timeline work in no SQL mode
* update license
* revert hard coded coordinator mode from testing
* update restricted mode copy
* IndexerSQLMetadataStorageCoordinator.getTimelineForIntervalsWithHandle(): don't fetch abutting intervals; simplify getUsedSegmentsForIntervals()
* Add VersionedIntervalTimeline.findNonOvershadowedObjectsInInterval() method;
propagate the decision about whether only visible segments or visible and
overshadowed segments should be returned from IndexerMetadataStorageCoordinator's
methods to the user logic; rename SegmentListUsedAction to
RetrieveUsedSegmentsAction, SegmentListUnusedAction to
RetrieveUnusedSegmentsAction, and UsedSegmentLister to UsedSegmentsRetriever
* Fix tests
* More fixes
* Add javadoc notes about returning Collection instead of Set. Add JacksonUtils.readValue() to reduce boilerplate code
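The helper mentioned above might look roughly like this; the signature
is an assumption, not necessarily Druid's actual JacksonUtils API:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.io.IOException;

    public final class JacksonUtilsSketch
    {
      // Wraps ObjectMapper.readValue so call sites don't each need a
      // try/catch for the checked IOException.
      public static <T> T readValue(ObjectMapper mapper, byte[] bytes, Class<T> clazz)
      {
        try {
          return mapper.readValue(bytes, clazz);
        }
        catch (IOException e) {
          throw new RuntimeException(e);
        }
      }
    }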
* Fix KinesisIndexTaskTest, factor out common parts from KinesisIndexTaskTest and KafkaIndexTaskTest into SeekableStreamIndexTaskTestBase
* More test fixes
* More test fixes
* Add a comment to VersionedIntervalTimelineTestBase
* Fix tests
* Set DataSegment.size(0) in more tests
* Specify DataSegment.size(0) in more places in tests
* Fix more tests
* Fix DruidSchemaTest
* Set DataSegment's size in more tests and benchmarks
* Fix HdfsDataSegmentPusherTest
* Doc changes addressing comments
* Extended doc for visibility
* Typo
* Typo 2
* Address comment
* Fix missing jackson jars for hadoop ingestion
* PR comments
* pom ordering
* New approach
* Remove all jackson-core/mapper-asl exclusions from hdfs storage
* Implementing dropwizard emitter for druid
making metric manager and alert emitters optional
* Refactor and make things work
more improvements
improve docs
refactorings
* Fix teamcity inspections
* review comments
* more review comments
* add limit to max number of gauges
* update pom version
* fix pom
* review comments
* review comment
* review comments
* fix broken doc link
review comments
review comments
* review comments
* fix checkstyle
* more spell check fixes
* fix travis failures
* Fix dependency analyze warnings
Update the maven dependency plugin to the latest version and fix all
warnings for unused declared and used undeclared dependencies in the
compile scope. Add a new Travis job to run the check in CI. Also fix
some source code files to use the correct packages for their imports and
update druid-forbidden-apis to prevent regressions.
* Address review comments
* Adjust scope for org.glassfish.jaxb:jaxb-runtime
* Fix dependencies for hdfs-storage
* Consolidate netty4 versions
* Exit JVM on curator unhandled errors
If an unhandled error occurs when Curator is talking to ZooKeeper, exit
the JVM in addition to stopping the lifecycle to prevent the process
from being left in a zombie state. With this change,
BoundedExponentialBackoffRetryWithQuit is no longer needed: when
Curator exceeds the configured retries, it triggers its unhandled error
listeners. A new "connectionTimeoutMs" CuratorConfig setting is added
mostly to facilitate testing Curator unhandled errors, but it may be
useful for users as well.
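A minimal sketch of the wiring described above, using Curator's
unhandled-error listener API (Druid's actual implementation stops the
lifecycle before exiting):

    import org.apache.curator.framework.CuratorFramework;

    public final class ExitOnCuratorError
    {
      public static void register(CuratorFramework client)
      {
        client.getUnhandledErrorListenable().addListener((message, e) -> {
          // Stopping the lifecycle first (as Druid does) is omitted here
          // for brevity; the point is that the process must not linger
          // in a zombie state.
          System.err.println("Unhandled Curator error: " + message);
          e.printStackTrace();
          System.exit(1);
        });
      }
    }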
* Address review comments
* enable unit tests with JDK11
This enables unit tests with openjdk11, splitting the build into
stages so that it fails faster.
The integration test docker image still uses openjdk8, so there is
little reason to run those tests with JDK11 yet.
* remove stages
* Use Codecov
Upload coverage reports to Codecov. For now, having Codecov comment on
PRs and enforcing a minimum coverage threshold are both disabled until
the Codecov coverage reports look reliable:
https://codecov.io/gh/apache/incubator-druid
* Split bash and curl into separate lines
* Zookeeper version is updated.
* Zookeeper version is updated in licenses.yaml
* licenses.yaml is updated and dependencies are fixed to make the project build successfully.
* Zookeeper versions are fixed in licenses.yaml
* Disable coveralls
Coveralls reports inaccurate code coverage for our parallel builds.
Disable it until it can be fixed or a better alternative can be found.
* Enable code coverage
Code coverage was disabled via
https://github.com/apache/incubator-druid/pull/3122 due to an issue with
cobertura in Travis CI. Switch the code coverage tool from cobertura to
jacoco to avoid the issue and re-enable coveralls for Travis CI.
* Exclude non-production code
* Exclude benchmark generated code
* Exclude DruidTestRunnerFactory
* Reorganize Travis CI jobs
After enabling parallel builds for "mvn install", the sigar dependency
would sometimes resolve to the incorrect artifact repo for some of the
maven modules. This issue seems to be fixed by moving the definition of
the sigar dependency's artifact repo to the root POM.
Also, depending on network speeds, "mvn -q install" may take longer than
the default 10-minute timeout to print any output. Use travis_wait to
extend the timeout to 15 minutes.
Reorganize Travis CI jobs into smaller, faster (and more) jobs. Add
various maven options to skip unnecessary work, and refactor Travis CI
job definitions to follow DRY.
Detailed changes:
.travis.yml
- Refactor build logic to get rid of copy-and-paste logic
- Skip static checks and enable parallelism for maven install
- Split static analysis into different jobs to ease triage
- Use "name" attribute instead of NAME environment variable
- Split "indexing" and "web console" out of "other modules test"
- Split 2 integration test jobs into multiple smaller jobs
build.sh
- Enable parallelism
- Disable more static checks
travis_script_integration.sh
travis_script_integration_part2.sh
integration-tests/README.md
- Use TestNG groups instead of shell scripts and move definition of jobs
into Travis CI yaml
integration-tests/pom.xml
- Show elapsed time of individual tests to aid in future rebalancing of
Travis CI integration test jobs run time
TestNGGroup.java
- Use TestNG groups to make it easy to have multiple Travis CI
integration test jobs (see the sketch after this list). TestNG groups
also make it easier to have an "other" integration test group and make
it less likely a test will accidentally not be included in a CI job.
IT*Test.java
AbstractITBatchIndexTest.java
AbstractKafkaIndexerTest.java
- Add TestNG group
- Fix various IntelliJ inspection warnings
- Reduce scope of helper methods since the TestNG group annotation on
the class makes TestNG consider all public methods as test methods
pom.xml
- Allow enforce plugin to be run from command-line
- Bump resources plugin version so that "[debug] execute contextualize"
output is correctly suppressed by "mvn -q"
- Bump exec plugin version so that skip property is renamed from "skip"
to "exec.skip"
web-console/pom.xml
- Add property to allow disabling javascript-related work. This property
is overridden in Travis CI to speed up the jobs.
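As referenced in the TestNGGroup.java notes above, tagging a test class
with a group is a one-line annotation; a minimal sketch (the group name
and class are illustrative):

    import org.testng.annotations.Test;

    // A class-level group annotation makes TestNG treat all public
    // methods as test methods in that group, which is why helper
    // methods were reduced in scope.
    @Test(groups = "batch-index")
    public class ITExampleBatchIndexTest
    {
      public void testBatchIngestion()
      {
        // Runs only when the CI job selects the "batch-index" group via
        // the surefire/TestNG configuration.
      }
    }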
* Add IPv4 druid expressions
New druid expressions for filtering IPv4 addresses:
- ipv4address_match: Check if IP address belongs to a subnet
- ipv4address_parse: Convert string IP address to long
- ipv4address_stringify: Convert long IP address to string
These expressions operate on IP addresses represented as either strings
or longs, so that they can be applied to dimensions with mixed
representation of IP addresses. The filtering is more efficient when
operating on IP addresses as longs. In other words, the intended use
case is:
1) Use ipv4address_parse to convert to long at ingestion time
2) Use ipv4address_match to filter (on longs) at query time
3) Use ipv4address_stringify to convert to (readable) string at query
time
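To make the long representation concrete, a toy sketch of the three
operations (illustrative only, not Druid's implementation):

    public final class Ipv4Toy
    {
      // Convert dotted-quad "a.b.c.d" to an unsigned 32-bit value in a long.
      static long parse(String address)
      {
        long result = 0;
        for (String octet : address.split("\\.")) {
          result = (result << 8) | Integer.parseInt(octet);
        }
        return result;
      }

      // Convert the long form back to a readable dotted-quad string.
      static String stringify(long ip)
      {
        return String.format("%d.%d.%d.%d", (ip >> 24) & 0xff, (ip >> 16) & 0xff, (ip >> 8) & 0xff, ip & 0xff);
      }

      // Check membership in a CIDR subnet, e.g. match(parse("10.1.2.3"), "10.0.0.0/8").
      static boolean match(long ip, String subnet)
      {
        String[] parts = subnet.split("/");
        int prefixLength = Integer.parseInt(parts[1]);
        long mask = prefixLength == 0 ? 0 : (0xffffffffL << (32 - prefixLength)) & 0xffffffffL;
        return (ip & mask) == (parse(parts[0]) & mask);
      }
    }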
* Fix licenses and null handling
* Simplify IPv4 expressions
* Fix tests
* Fix check for valid ipv4 address string
* Fix dependency analyze warnings
Update the maven dependency plugin to the latest version and fix all
warnings for unused declared and used undeclared dependencies in the
compile scope. Add a new Travis job to run the check in CI. Also fix
some source code files to use the correct packages for their imports.
* Fix licenses and dependencies
* Fix licenses and dependencies again
* Fix integration test dependency
* Address review comments
* Fix unit test dependencies
* Fix integration test dependency
* Fix integration test dependency again
* Fix integration test dependency third time
* Fix integration test dependency fourth time
* Fix compile error
* Fix assert package
* Benchmarks: New SqlBenchmark, add caching & vectorization to some others.
- Introduce a new SqlBenchmark geared towards benchmarking a wide
variety of SQL queries. Rename the old SqlBenchmark to
SqlVsNativeBenchmark.
- Add (optional) caching to SegmentGenerator to enable easier
benchmarking of larger segments.
- Add vectorization to FilteredAggregatorBenchmark and GroupByBenchmark.
* Query vectorization.
This patch includes vectorized timeseries and groupBy engines, as well
as some analogs of your favorite Druid classes:
- VectorCursor is like Cursor. (It comes from StorageAdapter.makeVectorCursor.)
- VectorColumnSelectorFactory is like ColumnSelectorFactory, and it has
methods to create analogs of the column selectors you know and love.
- VectorOffset and ReadableVectorOffset are like Offset and ReadableOffset.
- VectorAggregator is like BufferAggregator.
- VectorValueMatcher is like ValueMatcher.
There are some noticeable differences between vectorized and regular
execution:
- Unlike regular cursors, vector cursors do not understand time
granularity. They expect query engines to handle this on their own,
which a new VectorCursorGranularizer class helps with. This is to
avoid too much batch-splitting and to respect the fact that vector
selectors are somewhat more heavyweight than regular selectors.
- Unlike FilteredOffset, FilteredVectorOffset does not leverage indexes
for filters that might partially support them (like an OR of one
filter that supports indexing and another that doesn't). I'm not sure
that this behavior is desirable anyway (it is potentially too eager)
but, at any rate, it'd be better to harmonize it between the two
classes. Potentially they should both do some different thing that
is smarter than what either of them is doing right now.
- When vector cursors are created by QueryableIndexCursorSequenceBuilder,
they use a morphing binary-then-linear search to find their start and
end rows, rather than linear search.
Limitations in this patch are:
- Only timeseries and groupBy have vectorized engines.
- GroupBy doesn't handle multi-value dimensions yet.
- Vector cursors cannot handle virtual columns or descending order.
- Only some filters have vectorized matchers: "selector", "bound", "in",
"like", "regex", "search", "and", "or", and "not".
- Only some aggregators have vectorized implementations: "count",
"doubleSum", "floatSum", "longSum", "hyperUnique", and "filtered".
- Dimension specs other than "default" don't work yet (no extraction
functions or filtered dimension specs).
Currently, the testing strategy includes adding vectorization-enabled
tests to TimeseriesQueryRunnerTest, GroupByQueryRunnerTest,
GroupByTimeseriesQueryRunnerTest, CalciteQueryTest, and all of the
filtering tests that extend BaseFilterTest. In all of those classes,
there are some test cases that don't support vectorization. They are
marked by special function calls like "cannotVectorize" or "skipVectorize"
that tell the test harness to either expect an exception or to skip the
test case.
Testing should be expanded in the future -- a project in and of itself.
Related to #3011.
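To make the batch-at-a-time model above concrete, here is a toy sketch
of vectorized, filtered aggregation; it is illustrative only and not
Druid's VectorAggregator or VectorValueMatcher API:

    // A "matcher" fills a selection vector with the row offsets in the
    // current batch that pass a filter, and an "aggregator" then consumes
    // the whole selection, rather than the engine calling selector and
    // aggregator methods once per row.
    final class ToyVectorizedSum
    {
      static final int VECTOR_SIZE = 512;

      static long sumWhereNonNegative(long[] column)
      {
        final int[] selection = new int[VECTOR_SIZE];
        long total = 0;
        for (int start = 0; start < column.length; start += VECTOR_SIZE) {
          final int end = Math.min(start + VECTOR_SIZE, column.length);
          // "Matcher" step: collect offsets of matching rows in this batch.
          int selectionSize = 0;
          for (int i = start; i < end; i++) {
            if (column[i] >= 0) {
              selection[selectionSize++] = i;
            }
          }
          // "Aggregator" step: one call handles the whole batch.
          for (int i = 0; i < selectionSize; i++) {
            total += column[selection[i]];
          }
        }
        return total;
      }
    }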
* WIP
* Adjustments for unused things.
* Adjust javadocs.
* DimensionDictionarySelector adjustments.
* Add "clone" to BatchIteratorAdapter.
* ValueMatcher javadocs.
* Fix benchmark.
* Fixups post-merge.
* Expect exception on testGroupByWithStringVirtualColumn for IncrementalIndex.
* BloomDimFilterSqlTest: Tag two non-vectorizable tests.
* Minor adjustments.
* Update surefire, bump up Xmx in Travis.
* Some more adjustments.
* Javadoc adjustments
* AggregatorAdapters adjustments.
* Additional comments.
* Remove switching search.
* Only missiles.
* Add the pull-request template
* Rewording
* Replaced checklist link, added Rat exclusion
* Update the PR template. Add Concurrency Checklist to the repository
* Merge Description and Design sections. Softer language. Removed requirement to test in production environment. Added a committer's instruction to justify addition of meta tags.
* Rephrase item about comments
* Add license header
* Add item to concurrency checklist