* Remove references to Docker Machine
Removing a broken link to an obsolete repo.
While at it, removing references to Docker Machine, which became obsolete as of Docker v1.12 (released 2016), the version that introduced Docker as native macOS and Windows apps.
* Update README.md
Wording nit.
* Fix timestamp extract fn to match postgres
Update the timestamp extract function so that it matches the PostgreSQL docs.
Examples from the PostgreSQL docs were added as tests for DECADE, CENTURY,
and MILLENNIUM extraction.
There were bugs in CENTURY and MILLENNIUM extraction that were spotted thanks to the IntelliJ
inspection 'Integer division in floating point context'; a sketch of the bug follows the bullets below.
* Update CalciteQueryTest
* remove useless round
* mark integer division as an error
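A minimal sketch of the bug class, for illustration only (not Druid's actual extraction code): with integer division the truncation happens before Math.ceil runs, so CENTURY of year 2001 comes out as 20 instead of PostgreSQL's 21.

```java
public class CenturyExtractSketch
{
  // Buggy: year / 100 is integer division, so the quotient is truncated
  // before Math.ceil ever sees it; centuryBuggy(2001) returns 20.
  static long centuryBuggy(long year)
  {
    return (long) Math.ceil(year / 100);
  }

  // Fixed: dividing in floating point matches PostgreSQL, where
  // EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16') returns 21.
  static long centuryFixed(long year)
  {
    return (long) Math.ceil(year / 100.0);
  }

  public static void main(String[] args)
  {
    System.out.println(centuryBuggy(2001)); // 20 (wrong)
    System.out.println(centuryFixed(2001)); // 21 (correct)
  }
}
```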
* Use ExecutorService instead of ScheduledExecutorService where necessary - #9286
* Added inspection rule to prohibit ScheduledExecutorService assignment to ExecutorService
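For illustration, the shape of code the new rule flags (variable names hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ExecutorAssignmentSketch
{
  public static void main(String[] args)
  {
    // Flagged: assigning a ScheduledExecutorService to an ExecutorService
    // variable hides the scheduling capability and suggests a plain
    // executor was probably intended.
    ExecutorService hidden = Executors.newSingleThreadScheduledExecutor();

    // Preferred: declare the variable with the type that matches the need.
    ExecutorService plain = Executors.newSingleThreadExecutor();
    ScheduledExecutorService scheduled = Executors.newSingleThreadScheduledExecutor();

    hidden.shutdown();
    plain.shutdown();
    scheduled.shutdown();
  }
}
```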
* IMPLY-1946: Improve code quality and unit test coverage of the Azure extension
* Update unit tests to increase test coverage for the extension
* Clean up any messy code
* Enforce code coverage as part of tests.
* Update azure extension pom to remove unnecessary things
* update jacoco thresholds
* upgrade the azure-storage library to the most up-to-date version
* implement Azure InputSource reader and deprecate Azure FireHose
* implement azure InputSource reader
* deprecate Azure FireHose implementation
* exclude common libraries that are included from druid core
* Implement more of Azure input source.
* Add tests
* Add more tests
* deprecate azure firehose
* added more tests
* roll back fix for google cloud batch ingestion bug; will be fixed in another PR
* Added javadocs for all azure related classes
* Addressed review comments
* Remove dependency on org.apache.commons:commons-collections4
* Fix LGTM warnings
* Add com.google.inject.extensions:guice-assistedinject to licenses
* rename classes as suggested in review comments
* Address review comments
* Change security vulnerability scan to cron job
Previously, when new CVEs were reported, the security vulnerability scan
would unfortunately block PRs that did not modify any dependencies. To
prevent this issue, the security scan is now run as a Travis cron job
that runs on master and notifies the druid dev list if it fails. The
security scan has also been added to the "apache-release" maven profile,
to ensure that it passes before a release.
Also adjusted some Travis CI job failure help messages to not be folded
in the Travis CI job logs.
* Dedup plugin configuration definition
More functional tests to cover handling of input data that has a
partition dimension that contains:
1) Null values: should go into the first partition
2) Multi values: should cause the superbatch task to abort
When the time extraction Top N algorithm looks for aggregators, it makes
two calls to hashCode on the key. Use Map#computeIfAbsent instead so that the
hashCode is calculated only once, as in the sketch below.
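A minimal sketch of the pattern with hypothetical key and aggregator types (not the actual Top N code):

```java
import java.util.HashMap;
import java.util.Map;

public class ComputeIfAbsentSketch
{
  public static void main(String[] args)
  {
    Map<String, long[]> aggregators = new HashMap<>();
    String key = "2020-01-01T00";

    // Before: two lookups, each computing key.hashCode().
    long[] agg = aggregators.get(key);
    if (agg == null) {
      agg = new long[1];
      aggregators.put(key, agg); // second hashCode computation
    }

    // After: a single lookup computes the hashCode once.
    long[] agg2 = aggregators.computeIfAbsent(key, k -> new long[1]);
  }
}
```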
* Codestyle - use java style array declaration
Replaced C-style array declarations with Java-style declarations and marked
the IntelliJ inspection as an error (example below).
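The change in miniature (illustrative declarations only):

```java
public class ArrayDeclarationStyle
{
  public static void main(String[] args)
  {
    int cStyle[] = new int[8];    // C-style suffix syntax: now flagged as an error
    int[] javaStyle = new int[8]; // Java-style: the accepted form
  }
}
```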
* cleanup test code
* Refactoring code around ingestion:
- Parallel index task and simple task now use the same segment allocator implementation, which is reusable for future implementations as well.
- Added PartitionAnalysis to store the analysis of the partitioning
- Move some util methods to SegmentLockHelper and rename it to TaskLockHelper
* fix build
* fix SingleDimensionShardSpecFactory
* optimize SingleDimensionShardSpecFactory
* fix test
* shard spec builder
* import order
* shardSpecBuilder -> partialShardSpec
* build -> complete
* fix comment; add unit tests for partitionBoundaries
* add more tests and fix javadoc
* fix toString(); add serde tests for HashBasedNumberedPartialShardSpec and SegmentAllocateAction
* fix test
* add equality test for hash and range partial shard specs
* Forbid easily misused HashSet and HashMap constructors
* Add two LinkedHashMap constructors to forbidden-apis and create a utility method as a replacement for them (sketched after the bullets below)
* Fix visibility of constant in CollectionUtils.java
* Make an exception for an instance of LinkedHashMap#<init>(int) because proper sizing is used
* revert changes to sql module tests that should be in separate PR
* Finish reverting changes to sql module tests that were flagged in checkstyle during CI
* Add netty dependency resulting from SuppressForbidden
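A sketch of what such a utility might look like; the name and placement are illustrative, and Druid's actual CollectionUtils may differ. The point is that LinkedHashMap(int) takes a raw initial capacity, not an expected element count, so naive callers undersize the map and trigger rehashing:

```java
import java.util.LinkedHashMap;

public final class MapSizingSketch
{
  private MapSizingSketch()
  {
  }

  // Returns a LinkedHashMap sized so that expectedSize entries fit
  // without rehashing at the default 0.75 load factor.
  public static <K, V> LinkedHashMap<K, V> newLinkedHashMapWithExpectedSize(int expectedSize)
  {
    return new LinkedHashMap<>((int) Math.ceil(expectedSize / 0.75));
  }
}
```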
* Add HashVectorGrouper based on MemoryOpenHashTable.
Additional supporting changes:
1) Modifies VectorGrouper interface to use Memory instead of ByteBuffers.
2) Modifies BufferArrayGrouper to match the new VectorGrouper interface.
3) Removes "implements VectorGrouper" from BufferHashGrouper.
* Fix comment.
* Fix another comment.
* Remove unused stuff.
* Include hoisted bounds checks.
* Checks against too-large keySpaces.
* Create new dynamic config to pause coordinator helpers when needed
* Fix spelling mistakes flagged in Travis build
* Add an integration test for coordinator pause dynamic config
* Improve documentation for new dynamic coordinator config and remove unneeded info logs in favor of debug
* address naming convention of 'deep store' vs 'deep storage' in new configs doc line
* Fix newline at end of configuration index.md
* Last try to resolve newline issue in configuration readme
* fix spell checks from travis build
* Fix another flagged spelling error from Travis
* Remove EasyMock dependency from CalciteTests.
Useful because CalciteTests is used by other modules (e.g. druid-benchmarks)
and we don't want them to have to pull in EasyMock.
* CalciteTests no longer needs curator-x-discovery either.
* Add MemoryOpenHashTable, a table similar to ByteBufferHashTable.
With some key differences to improve speed and design simplicity:
1) Uses Memory rather than ByteBuffer for its backing storage.
2) Uses faster hashing and comparison routines (see HashTableUtils).
3) Capacity is always a power of two, allowing simpler design and a more
efficient implementation of findBucket (sketched after the bullets below).
4) Does not implement growability; instead, leaves that to its callers.
The idea is this removes the need for subclasses, while still giving
callers flexibility in how to handle table-full scenarios.
* Fix LGTM warnings.
* Adjust dependencies.
* Remove easymock from druid-benchmarks.
* Adjustments from review.
* Fix datasketches unit tests.
* Fix checkstyle.
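A simplified sketch of point 3 using plain Java arrays instead of Memory (everything here is illustrative, not the real class): with a power-of-two capacity, mapping a hash to a bucket is a single bit-mask instead of a modulo.

```java
public class PowerOfTwoTableSketch
{
  private final long[] keys;
  private final boolean[] used;
  private final int mask; // capacity - 1

  PowerOfTwoTableSketch(int capacity)
  {
    if (Integer.bitCount(capacity) != 1) {
      throw new IllegalArgumentException("capacity must be a power of two");
    }
    this.keys = new long[capacity];
    this.used = new boolean[capacity];
    this.mask = capacity - 1;
  }

  // Linear probing: returns the bucket holding 'key', or the first empty
  // bucket where it could be inserted. Growth is left to the caller,
  // mirroring point 4 above.
  int findBucket(long key, int keyHash)
  {
    int bucket = keyHash & mask; // cheap replacement for keyHash % capacity
    while (used[bucket] && keys[bucket] != key) {
      bucket = (bucket + 1) & mask;
    }
    return bucket;
  }
}
```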
* Speed up joins on indexed tables with string keys
When joining on indexed tables with string keys, caching the computation
of row id to row number improves performance on the
JoinAndLookupBenchmark.joinIndexTableStringKey* benchmarks by about 10%
if the column cache is enabled and by about 100% if the column cache is
disabled. A sketch of the idea follows the bullets below.
* Faster cache impl and handle unknown cardinality
* Remove unused dependency
* Hoist cardinality check outside of hot loop
* Fix dummy DimensionSelector for tests
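A hedged sketch of the caching idea, with hypothetical types: when the join key is a dictionary-encoded string column, the mapping from dictionary id to table row number can be memoized in an array so each distinct key value is resolved at most once:

```java
import java.util.Arrays;
import java.util.function.IntUnaryOperator;

public class JoinRowIdCacheSketch
{
  private static final int UNCOMPUTED = -2;

  private final int[] cache;             // dictionary id -> table row number (or -1 for no match)
  private final IntUnaryOperator lookup; // the expensive id -> row computation

  JoinRowIdCacheSketch(int dictionaryCardinality, IntUnaryOperator lookup)
  {
    this.cache = new int[dictionaryCardinality];
    Arrays.fill(this.cache, UNCOMPUTED);
    this.lookup = lookup;
  }

  int rowFor(int dictionaryId)
  {
    int row = cache[dictionaryId];
    if (row == UNCOMPUTED) {
      row = lookup.applyAsInt(dictionaryId);
      cache[dictionaryId] = row;
    }
    return row;
  }
}
```

Handling unknown cardinality (the commit above) would then amount to falling back to a growable structure when the dictionary size is not known up front.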
* implement shell for greatest sql aggregator with hardcoded long values
* implement functional long greatest aggregator for direct access columns
* implement greatest & least sql aggregators for long & double types using an abstract base class (sketched after the bullets below)
* add javadocs, unit tests & handling for floats for greatest/least postaggregations
* minor checkstyle fix
* improve naming for the test cases
* make inner class static
* remove blank lines to retest travis build
* change trivial text to rerun travis build
* implement suggested updates for greatest/least sql aggs & fix checkstyle issues
* fix stale comments in greatest/least sql aggs abstract base
* Update sql.md
* improve sql function definitions for greatest/least sql aggs
* add more tests for greatest/least sql aggs
* add tests to cover invalid greatest/least sql expressions
* rename & reorder greatest least sql tests
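A minimal sketch (not Druid's actual classes) of the abstract-base-class pattern referenced above: GREATEST and LEAST differ only in which operand a binary chooser keeps, so the shared fold lives in the base class.

```java
abstract class GreatestLeastSketch
{
  abstract double choose(double a, double b);

  double fold(double[] values)
  {
    double acc = values[0];
    for (int i = 1; i < values.length; i++) {
      acc = choose(acc, values[i]);
    }
    return acc;
  }
}

class GreatestSketch extends GreatestLeastSketch
{
  @Override
  double choose(double a, double b)
  {
    return Math.max(a, b);
  }
}

class LeastSketch extends GreatestLeastSketch
{
  @Override
  double choose(double a, double b)
  {
    return Math.min(a, b);
  }
}
```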
By default, native batch ingestion was only getting a batch of 10
files at a time when used with google cloud. The default for other
cloud providers is 1024, and it should be similar for google cloud.
The low batch size was caused by a typo. This change updates the
batch size to 1024 when using google cloud.
* Guicify druid sql module
Break up the SQLModule into smaller modules and provide a binding that
modules can use to register schemas with druid sql. A sketch of such a
binding follows the bullets below.
* fix some tests
* address code review
* tests compile
* Working tests
* Add all the tests
* fix up licenses and dependencies
* add calcite dependency to druid-benchmarks
* tests pass
* rename the schemas
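A hedged sketch of what registering a schema through such a binding can look like, using Guice's Multibinder; the NamedSchema interface and class names here are illustrative stand-ins, not necessarily Druid's actual API:

```java
import com.google.inject.Binder;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Key;
import com.google.inject.Module;
import com.google.inject.TypeLiteral;
import com.google.inject.multibindings.Multibinder;
import java.util.Set;

public class SchemaBindingSketch
{
  // Illustrative stand-in for the schema interface exposed by the binding.
  public interface NamedSchema
  {
    String getSchemaName();
  }

  public static class LookupSchema implements NamedSchema
  {
    @Override
    public String getSchemaName()
    {
      return "lookup";
    }
  }

  // An extension module contributes its schema to the shared set binding.
  public static class LookupSchemaModule implements Module
  {
    @Override
    public void configure(Binder binder)
    {
      Multibinder.newSetBinder(binder, NamedSchema.class)
                 .addBinding()
                 .to(LookupSchema.class);
    }
  }

  public static void main(String[] args)
  {
    Injector injector = Guice.createInjector(new LookupSchemaModule());
    Set<NamedSchema> schemas =
        injector.getInstance(Key.get(new TypeLiteral<Set<NamedSchema>>() {}));
    schemas.forEach(s -> System.out.println(s.getSchemaName()));
  }
}
```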
* IMPLY-1946: Improve code quality and unit test coverage of the Azure extension
* Update unit tests to increase test coverage for the extension
* Clean up any messy code
* Enforce code coverage as part of tests.
* Update azure extension pom to remove unnecessary things
* update jacoco thresholds
* upgrade the azure-storage library to the most up-to-date version
* exclude common libraries that are included from druid core
* address review comments
* SQL join support for lookups.
1) Add LookupSchema to SQL, so lookups show up in the catalog.
2) Add join-related rels and rules to SQL, allowing joins to be planned into
native Druid queries.
* Add two missing LookupSchema calls in tests.
* Fix tests.
* Fix typo.
This is important because if a user has the hdfs extension loaded, but is not
using hdfs deep storage, then they will not have storageDirectory set and will
get the following error:
IllegalArgumentException: Can not create a Path from an empty string
at io.druid.storage.hdfs.HdfsDataSegmentKiller.<init>(HdfsDataSegmentKiller.java:47)
This scenario is realistic: it comes up when someone has the hdfs extension
loaded because they want to use HdfsInputSource, but don't want to use hdfs for
deep storage.
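A hedged sketch of the deferral pattern that avoids the failure (class and method names illustrative, not the extension's actual code): construct the Path lazily, so merely loading the extension cannot throw.

```java
import org.apache.hadoop.fs.Path;

public class LazyStoragePathSketch
{
  private final String storageDirectory; // may be empty when hdfs deep storage is unused
  private Path resolved;

  public LazyStoragePathSketch(String storageDirectory)
  {
    // No Path construction here: the constructor must succeed even when
    // deep storage is not configured, since the extension may be loaded
    // only for HdfsInputSource.
    this.storageDirectory = storageDirectory;
  }

  // Only code paths that actually kill segments resolve the directory,
  // so unconfigured users never hit the empty-string Path error.
  synchronized Path storagePath()
  {
    if (resolved == null) {
      resolved = new Path(storageDirectory);
    }
    return resolved;
  }
}
```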
Fixes #4694.