* RowBasedIndexedTable: Add specialized index types for long keys.
Two new index types are added:
1) Use an int-array-based index in cases where the difference between
the min and max values isn't too large, and keys are unique.
2) Use a Long2ObjectOpenHashMap (instead of the prior Java HashMap) in
all other cases.
In addition:
1) RowBasedIndexBuilder, a new class, is responsible for picking which
index implementation to use (a sketch of the heuristic follows this entry).
2) The IndexedTable.Index interface is extended to support using
unboxed primitives in the unique-long-keys case, and callers are
updated to use the new functionality.
Other key types continue to use indexes backed by Java HashMaps.
* Fixup logic.
* Add tests.
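For illustration, a minimal sketch of the selection heuristic described above. The class name, threshold, and return shape are assumptions, not the actual RowBasedIndexBuilder logic; only the fastutil Long2ObjectOpenHashMap type comes from the entry itself.

```java
// Illustrative only: hypothetical class and cutoff, not Druid's actual code.
import it.unimi.dsi.fastutil.ints.IntArrayList;
import it.unimi.dsi.fastutil.ints.IntList;
import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

final class LongKeyIndexChooser
{
  // Assumed cutoff: only use the array index when the key space is dense.
  private static final long MAX_ARRAY_RANGE = 1_000_000;

  static Object buildIndex(final long[] keys)
  {
    long min = Long.MAX_VALUE;
    long max = Long.MIN_VALUE;
    boolean unique = true;
    final Set<Long> seen = new HashSet<>();
    for (long k : keys) {
      min = Math.min(min, k);
      max = Math.max(max, k);
      unique = seen.add(k) && unique;
    }
    if (unique && keys.length > 0 && max - min < MAX_ARRAY_RANGE) {
      // Dense, unique keys: a flat int array maps (key - min) -> row number.
      final int[] index = new int[(int) (max - min + 1)];
      Arrays.fill(index, -1); // -1 marks "no row for this key"
      for (int row = 0; row < keys.length; row++) {
        index[(int) (keys[row] - min)] = row;
      }
      return index;
    }
    // Sparse or duplicate keys: primitive-keyed hash map of row lists.
    final Long2ObjectOpenHashMap<IntList> index = new Long2ObjectOpenHashMap<>();
    for (int row = 0; row < keys.length; row++) {
      IntList rows = index.get(keys[row]);
      if (rows == null) {
        rows = new IntArrayList();
        index.put(keys[row], rows);
      }
      rows.add(row);
    }
    return index;
  }
}
```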
* Adding more worker metrics to Druid Overlord
* Changing the nomenclature from worker to peon, as that better represents the metrics we want to monitor
* A few more instances of worker usage replaced with peon
* Modifying the peon idle count logic to only use eligible workers' available capacity
* Changing the naming to task slot count instead of peon
* Adding some unit test coverage for the new test runner APIs
* Addressing Review Comments
* Modifying the TaskSlotCountStatsProvider APIs so that Overlords that are not the leader do not emit these metrics (sketched below)
* Fixing the spelling issue in the docs
* Setting the @Nullable annotation on the TaskSlotCountStatsProvider methods
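A hedged sketch of the @Nullable contract described above: a non-leader Overlord returns null and the monitor skips emission rather than reporting a misleading zero. The method shape, metric name, and monitor class are assumptions; only TaskSlotCountStatsProvider and @Nullable come from the entries above.

```java
import javax.annotation.Nullable;

// The real interface lives in Druid; this method shape is an assumption.
interface TaskSlotCountStatsProvider
{
  @Nullable
  Long getTotalTaskSlotCount(); // null when this Overlord is not the leader
}

final class TaskSlotMonitor
{
  void emit(TaskSlotCountStatsProvider provider)
  {
    final Long total = provider.getTotalTaskSlotCount();
    if (total == null) {
      return; // not the leader: skip emission entirely
    }
    // In Druid this would go through a ServiceEmitter; println stands in here.
    System.out.println("taskSlot/total/count=" + total);
  }
}
```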
* Compaction config UI optional numShards
Specifying `numShards` for hashed partitions is no longer required after
https://github.com/apache/druid/pull/10419. Update the UI to make
`numShards` an optional field for hash partitions.
* Update snapshot
When using the web console to load data by reindexing from Druid, the
`Datasource` and `Interval` inputs are required during the `Connect`
step. Unlike the `Datasource` input, the `Interval` input did not have a
blue outline to indicate that it was required as the `IntervalInput`
component did not support an `intent` property.
* vectorize remaining math expressions
* fixes
* remove cannotVectorize() where no longer true
* disable vectorized groupby for numeric columns with nulls
* fixes
Add an E2E test for the common case web console workflow of setting up
autocompaction that changes the partitions from dynamic to hashed.
Also fix an issue with the async test setup to properly wait for the web
console to be ready.
* Store the hash partition function in DataSegment and allow segment pruning only when the hash partition function is provided (a pruning sketch follows this entry)
* query context
* fix tests; add more tests
* javadoc
* docs and more tests
* remove default and hadoop tests
* consistent name and fix javadoc
* spelling and field name
* default function for partitionsSpec
* other comments
* address comments
* fix tests and spelling
* test
* doc
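A hedged sketch of the pruning rule this entry describes: a hash-partitioned segment can be pruned only when it records which partition function produced it; segments without a recorded function must always be kept. All class and method names below are illustrative, and the hash itself is a stand-in for whichever function the segment recorded.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical shard descriptor; not Druid's HashBasedNumberedShardSpec.
final class HashShardInfo
{
  final String partitionFunctionName; // null for segments written before this change
  final int partitionNum;
  final int numPartitions;

  HashShardInfo(String partitionFunctionName, int partitionNum, int numPartitions)
  {
    this.partitionFunctionName = partitionFunctionName;
    this.partitionNum = partitionNum;
    this.numPartitions = numPartitions;
  }
}

final class HashPruner
{
  static List<HashShardInfo> prune(List<HashShardInfo> shards, String key)
  {
    final List<HashShardInfo> survivors = new ArrayList<>();
    for (HashShardInfo shard : shards) {
      if (shard.partitionFunctionName == null) {
        survivors.add(shard); // unknown function: pruning would be unsafe
      } else {
        final int bucket = Math.floorMod(hash(key), shard.numPartitions);
        if (bucket == shard.partitionNum) {
          survivors.add(shard); // only the partition the key hashes into
        }
      }
    }
    return survivors;
  }

  // Stand-in hash; real code would apply the recorded partition function.
  private static int hash(String key)
  {
    int h = 0;
    for (byte b : key.getBytes(StandardCharsets.UTF_8)) {
      h = 31 * h + b;
    }
    return h;
  }
}
```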
* compaction dialog update
* fix test snapshot
* Update web-console/src/dialogs/compaction-dialog/compaction-dialog.tsx
Co-authored-by: Chi Cao Minh <chi.caominh@imply.io>
* feedback changes
Co-authored-by: Chi Cao Minh <chi.caominh@imply.io>
* Include Sequence-building time in CPU time metric.
Meaningful work can be done while building Sequences, and we should
count this work. On the Broker, this includes subquery processing
work done by the mergeResults call of the GroupByQueryQueryToolChest. (A
measurement sketch follows below.)
* Add test.
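A sketch of the measurement pattern, assuming a hypothetical accumulator; Druid's actual accounting classes are not shown here. It wraps a unit of work (such as the mergeResults call) and adds the calling thread's CPU time to a running total, which assumes the JVM supports thread CPU time measurement.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

final class CpuTimeAccumulator
{
  private static final ThreadMXBean THREAD_MX = ManagementFactory.getThreadMXBean();
  private final AtomicLong cpuNanos = new AtomicLong();

  <T> T measure(Supplier<T> work)
  {
    final long start = THREAD_MX.getCurrentThreadCpuTime();
    try {
      return work.get(); // e.g. the Sequence built by mergeResults
    }
    finally {
      cpuNanos.addAndGet(THREAD_MX.getCurrentThreadCpuTime() - start);
    }
  }

  long totalCpuNanos()
  {
    return cpuNanos.get();
  }
}
```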
* subtotalsSpec results with null values
Document the format change in results of a groupBy query with a subtotalsSpec. This update applies to 0.18 and later.
* Review catches
* Auto-compaction snapshot API
* fix when not all compacted segments are iterated
* add unit tests
* add some tests to make code cov happy
* address comments
* make code coverage happy
* Adding more dimensions to the audit log entry
* Making the inclusion of the payload in the audit metric optional (sketched below)
* Changing the name of the parameter to includePayloadAsDimensionInMetric. Adding a unit test
* Fixing the IntelliJ code introspection issues
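A hedged sketch of the optional payload dimension: the flag name includePayloadAsDimensionInMetric matches the entry above, but the emitter class and dimension names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only; Druid's audit plumbing is not shown here.
final class AuditMetricEmitter
{
  private final boolean includePayloadAsDimensionInMetric;

  AuditMetricEmitter(boolean includePayloadAsDimensionInMetric)
  {
    this.includePayloadAsDimensionInMetric = includePayloadAsDimensionInMetric;
  }

  Map<String, String> buildDimensions(String author, String comment, String payload)
  {
    final Map<String, String> dims = new HashMap<>();
    dims.put("author", author);
    dims.put("comment", comment);
    if (includePayloadAsDimensionInMetric) {
      dims.put("payload", payload); // payloads can be large, so off by default
    }
    return dims;
  }
}
```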
* recreate the balancer executor only when needed
* fix unit test error
* shut down the balancer executor in stopBeingLeader and stop (sketched below)
* remove commented code
* remove comments
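A sketch of the executor lifecycle these entries describe, with illustrative names: the balancer pool is recreated only when it is missing or its configured size has changed, and it is shut down when leadership is lost.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical holder class; Druid's coordinator code is not shown here.
final class BalancerExecutorHolder
{
  private ExecutorService balancerExec;
  private int currentThreads = -1;

  synchronized ExecutorService getOrCreate(int configuredThreads)
  {
    if (balancerExec == null || currentThreads != configuredThreads) {
      shutdown(); // tear down a stale pool before replacing it
      balancerExec = Executors.newFixedThreadPool(configuredThreads);
      currentThreads = configuredThreads;
    }
    return balancerExec;
  }

  // Called from stopBeingLeader() and stop().
  synchronized void shutdown()
  {
    if (balancerExec != null) {
      balancerExec.shutdownNow();
      balancerExec = null;
      currentThreads = -1;
    }
  }
}
```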
* Working
* add test
* doc
* fix test
* split other integration test
* exclude other-index from other tests
* doc anchor fix
* adjust task slots and number of merge tasks
* spell check
* reduce maxNumConcurrentSubTasks to 1
* maxNumConcurrentSubTasks for range partitioning
* reduce memory for historical
* change group name
The code coverage diff calculation assumes the TRAVIS_BRANCH environment
variable is the name of a branch; however, for tag builds it is the name
of the tag so the diff calculation fails. Since builds triggered by tags
do not have a code diff, the coverage check should be skipped to avoid
the error and to save some CI resources.
* push down ValueType to ExprType conversion, tidy up
* determine expr output type for given input types
* revert unintended name change
* add nullable
* tidy up
* fixup
* more improvements
* fix signatures
* naming things is hard
* fix inspection
* javadoc
* add a default implementation of Expr.getOutputType that returns null (sketched at the end of this entry)
* rename method
* more test
* add output for contains expr macro, split operation and function auto conversion
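To make the type-inference entries above concrete, a minimal sketch in the spirit of Expr.getOutputType: a default implementation returns null for "unknown", and a binary math expression widens LONG to DOUBLE. The enum values and class names are simplified assumptions, not Druid's actual Expr API.

```java
import javax.annotation.Nullable;
import java.util.function.Function;

// Simplified stand-ins for Druid's expression types.
enum ExprType { LONG, DOUBLE, STRING }

interface Expr
{
  // Default: the output type cannot be inferred for this expression.
  @Nullable
  default ExprType getOutputType(Function<String, ExprType> inputTypes)
  {
    return null;
  }
}

final class BinaryMathExpr implements Expr
{
  private final Expr left;
  private final Expr right;

  BinaryMathExpr(Expr left, Expr right)
  {
    this.left = left;
    this.right = right;
  }

  @Override
  @Nullable
  public ExprType getOutputType(Function<String, ExprType> inputTypes)
  {
    final ExprType l = left.getOutputType(inputTypes);
    final ExprType r = right.getOutputType(inputTypes);
    if (l == null || r == null) {
      return null; // unknown input type: give up rather than guess
    }
    if (l == ExprType.STRING || r == ExprType.STRING) {
      return ExprType.STRING;
    }
    // Numeric widening: LONG op DOUBLE -> DOUBLE.
    return (l == ExprType.DOUBLE || r == ExprType.DOUBLE) ? ExprType.DOUBLE : ExprType.LONG;
  }
}
```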