* Add documentation for when caching is unsupported
* Minor changes
* Minor doc fix
* Review comments
* Add more details
* Fix spelling check
* Fix doc for union query
* Trailing dot
* fix negative queuedSize problem in CuratorLoadQueuePeon
* add comment and optimize test case
* fix typo
Co-authored-by: huagnhui.bigrey <huanghui.bigrey@bytedance.com>
* fix JSON format
* Change all columns in sys segments to be JSON
* add tests
* fix failing tests
* added query error suggestions
* simplify the SQLs
* change segment size display to rows
* suggestion tests
* update snapshot
* make error detection more robust
* remove errant console log
* fix imports
* put suggestion on top
* better error rendering
* format as millions
* add .druid.pid to gitignore
* rename segment_size to segment_rows, fix visibility, fix divide by zero
* update snapshots
* Web console: targetRowsPerSegment for hashed partitioning
Added `targetRowsPerSegment` to the web console for hashed partitioning, both
in the auto compaction view and as part of the ingestion workflow.
The help text was also updated to indicate when a user should care about
updating these fields.
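To make the arithmetic concrete, here is a sketch, with hypothetical names rather than Druid's actual code, of how a target row count can translate into a shard count for hashed partitioning:

```java
// Hypothetical sketch: derive a hashed-partitioning shard count from an
// estimated total row count and the targetRowsPerSegment setting.
final class HashedShardCountEstimator
{
  static int estimateNumShards(long estimatedTotalRows, int targetRowsPerSegment)
  {
    // Ceiling division; always produce at least one shard.
    long shards = (estimatedTotalRows + targetRowsPerSegment - 1) / targetRowsPerSegment;
    return (int) Math.max(1, shards);
  }
}
```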
* code review
* update test snapshots
* oops
* Add shuffle metrics for parallel indexing
* javadoc and concurrency test
* concurrency
* fix javadoc
* Feature flag
* doc
* fix doc and add a test
* checkstyle
* add tests
* fix build and address comments
* Proposed changes for making joins cacheable
* Add unit tests
* Fix tests
* simplify logic
* Pull empty byte array logic out of CachingQueryRunner
* remove useless null check
* Minor refactor
* Fix tests
* Fix segment caching on Broker
* Move join cache key computation in Broker
Move join cache key computation in Broker from ResultLevelCachingQueryRunner to CachingClusteredClient
* Fix compilation
* Review comments
* Add more tests
* Fix inspection errors
* Pushed condition analysis to JoinableFactory
* review comments
* Disable join caching for broker and add prefix key to BroadcastSegmentIndexedTable
* Remove commented lines
* Fix populateCache
* Disable caching for selective datasources
Refactored the code so that we can decide, at the data source level, whether to enable caching for the Broker or data nodes.
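A minimal sketch of that shape, with hypothetical names rather than Druid's exact API: each data source reports whether it is cacheable, and the answer may differ between the Broker and data nodes.

```java
// Hypothetical sketch of a per-data-source cache eligibility check.
interface CacheableDataSource
{
  // isBroker distinguishes Broker-side caching from data-node
  // (segment-level) caching, since a join may be cacheable on one
  // tier but not the other.
  boolean isCacheable(boolean isBroker);
}

final class CacheDecider
{
  static boolean shouldPopulateCache(CacheableDataSource dataSource, boolean isBroker)
  {
    return dataSource.isCacheable(isBroker);
  }
}
```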
* Fix Avro OCF detection prefix and run format detection on raw input
* Support Avro Fixed and Enum types correctly
* Check Avro version byte in format detection
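For context, an Avro OCF stream starts with the 4-byte magic `Obj` followed by the format version byte `1`. A minimal sketch of such a check (hypothetical class and method names, not Druid's actual detector):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

// Hypothetical sketch: peek at the first four bytes of the stream and
// compare them against the Avro OCF magic ('O', 'b', 'j', 1).
final class AvroOcfDetector
{
  private static final byte[] OCF_MAGIC = new byte[]{'O', 'b', 'j', 1};

  static boolean looksLikeAvroOcf(InputStream in) throws IOException
  {
    byte[] header = new byte[OCF_MAGIC.length];
    int read = in.readNBytes(header, 0, header.length);
    return read == OCF_MAGIC.length && Arrays.equals(header, OCF_MAGIC);
  }
}
```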
* Add test for AvroOCFReader.sample
Ensures that the Sampler doesn't receive raw input that it can't
serialize into JSON.
* Document Avro type handling
* Add TS unit tests for guessInputFormat
- Update playwright to latest version
- Provide environment variable to disable/enable headless mode
- Allow running E2E tests against any Druid cluster running on standard
ports (tutorial-batch.spec.ts now uses an absolute path instead of a
relative path for the input data)
- Provide environment variable to change target web console port
- Druid setup no longer needs to download ZooKeeper
Restore the web console's ability to view a datasource's compaction
configuration via the "action" menu. Refactoring done in
https://github.com/apache/druid/pull/10438 introduced a regression that
always caused the default compaction configuration to be shown via the
"action" menu instead.
Regression test is added in e2e-tests/auto-compaction.spec.ts.
Add an E2E test for the web console workflow of reindexing a Druid
datasource to change the secondary partitioning type. The new test
changes dynamic to single dim partitions since the autocompaction test
already does dynamic to hashed partitions.
Also, run the web console E2E tests in parallel to reduce CI time and
change naming convention for test datasources to make it easier to map
them to the corresponding test run.
Main changes:
1) web-console/e2e-tests/reindexing.spec.ts
- new E2E test
2) web-console/e2e-tests/component/load-data/data-connector/reindex.ts
- new data loader connector for druid input source
3) web-console/e2e-tests/component/load-data/config/partition.ts
- move partition spec definitions from compaction.ts
- add new single dim partition spec definition
* RowBasedIndexedTable: Add specialized index types for long keys.
Two new index types are added:
1) Use an int-array-based index in cases where the difference between
the min and max values isn't too large, and keys are unique.
2) Use a Long2ObjectOpenHashMap (instead of the prior Java HashMap) in
all other cases.
In addition:
1) RowBasedIndexBuilder, a new class, is responsible for picking which
index implementation to use.
2) The IndexedTable.Index interface is extended to support using
unboxed primitives in the unique-long-keys case, and callers are
updated to use the new functionality.
Other key types continue to use indexes backed by Java HashMaps.
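A condensed sketch of the selection rule described above, with hypothetical names and an assumed range cutoff (the real logic lives in RowBasedIndexBuilder):

```java
import it.unimi.dsi.fastutil.ints.IntArrayList;
import it.unimi.dsi.fastutil.ints.IntList;
import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
import java.util.Arrays;

// Hypothetical sketch of choosing an index structure for long keys.
final class LongKeyIndexSketch
{
  // Assumed cutoff for the dense-array case; the real threshold differs.
  private static final long MAX_DENSE_RANGE = 1_000_000L;

  static Object build(long[] keys, long min, long max, boolean keysUnique)
  {
    long range = max - min + 1;
    if (keysUnique && range > 0 && range <= MAX_DENSE_RANGE) {
      // Dense case: key -> row is a single unboxed array lookup.
      int[] keyToRow = new int[(int) range];
      Arrays.fill(keyToRow, -1); // -1 means "no row for this key"
      for (int row = 0; row < keys.length; row++) {
        keyToRow[(int) (keys[row] - min)] = row;
      }
      return keyToRow;
    }
    // General case: a primitive-keyed map avoids boxing every long key,
    // unlike a java.util.HashMap<Long, List<Integer>>.
    Long2ObjectOpenHashMap<IntList> index = new Long2ObjectOpenHashMap<>();
    for (int row = 0; row < keys.length; row++) {
      IntList rows = index.get(keys[row]);
      if (rows == null) {
        rows = new IntArrayList();
        index.put(keys[row], rows);
      }
      rows.add(row);
    }
    return index;
  }
}
```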
* Fixup logic.
* Add tests.
* Adding more worker metrics to Druid Overlord
* Changing the nomenclature from worker to peon, as that better represents the metrics that we want to monitor
* A few more instances of worker usage replaced with peon
* Modifying the peon idle count logic to only use eligible workers' available capacity
* Changing the naming to task slot count instead of peon
* Adding some unit test coverage for the new test runner APIs
* Addressing Review Comments
* Modifying the TaskSlotCountStatsProvider APIs so that Overlords which are not the leader do not emit these metrics
* Fixing the spelling issue in the docs
* Setting the @Nullable annotation on the TaskSlotCountStatsProvider methods
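A sketch of the resulting shape, with illustrative method names that may not match the exact Druid interface: non-leader Overlords return null, and the monitor emits nothing for them.

```java
import javax.annotation.Nullable;

// Hypothetical sketch: an Overlord that is not the leader returns null,
// and the monitor skips emitting task-slot metrics for it.
interface TaskSlotStats
{
  @Nullable
  Long getTotalTaskSlotCount();

  @Nullable
  Long getIdleTaskSlotCount();
}

final class TaskSlotMonitor
{
  static void emitIfLeader(TaskSlotStats stats)
  {
    Long total = stats.getTotalTaskSlotCount();
    if (total == null) {
      return; // not the leader: emit nothing
    }
    System.out.println("taskSlot/total = " + total);
    System.out.println("taskSlot/idle  = " + stats.getIdleTaskSlotCount());
  }
}
```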
* Compaction config UI optional numShards
Specifying `numShards` for hashed partitions is no longer required after
https://github.com/apache/druid/pull/10419. Update the UI to make
`numShards` an optional field for hash partitions.
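A sketch of the validation this implies, assuming `numShards` and `targetRowsPerSegment` are mutually exclusive (hypothetical helper, not the console's actual code):

```java
import javax.annotation.Nullable;

// Hypothetical sketch, assuming numShards and targetRowsPerSegment are
// mutually exclusive: either may be null, but not both set at once.
final class HashedSpecValidator
{
  static void validate(@Nullable Integer numShards, @Nullable Integer targetRowsPerSegment)
  {
    if (numShards != null && targetRowsPerSegment != null) {
      throw new IllegalArgumentException(
          "Set either numShards or targetRowsPerSegment, not both"
      );
    }
  }
}
```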
* Update snapshot
When using the web console to load data by reindexing from Druid, the
`Datasource` and `Interval` inputs are required during the `Connect`
step. Unlike the `Datasource` input, the `Interval` input did not have a
blue outline to indicate that it was required, because the `IntervalInput`
component did not support an `intent` property.
* vectorize remaining math expressions
* fixes
* remove cannotVectorize() where no longer true
* disable vectorized groupby for numeric columns with nulls
* fixes
Add an E2E test for the common case web console workflow of setting up
autocompaction that changes the partitions from dynamic to hashed.
Also fix an issue with the async test setup to properly wait for the web
console to be ready.
* Store hash partition function in dataSegment and allow segment pruning only when hash partition function is provided
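The guard this implies, sketched with hypothetical names: hash-based pruning is only safe when the segment recorded which function assigned its partition, since segments written before the field existed carry no function.

```java
import javax.annotation.Nullable;

// Hypothetical sketch: a Broker may prune a segment by hash only when
// the segment's shard spec stored the hash partition function; older
// segments without a recorded function must always be considered.
final class HashPruningGuard
{
  enum HashPartitionFunction { MURMUR3_32_ABS }

  static boolean canPruneByHash(@Nullable HashPartitionFunction storedFn)
  {
    return storedFn != null;
  }
}
```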
* query context
* fix tests; add more test
* javadoc
* docs and more tests
* remove default and hadoop tests
* consistent name and fix javadoc
* spelling and field name
* default function for partitionsSpec
* other comments
* address comments
* fix tests and spelling
* test
* doc