Addresses the review comments made on #16284 after it was merged.
Changes:
- Remove method `Supervisor.getLagMetric()`
- Add method `Supervisor.computeLagForAutoScaler()`
- Remove classes `LagMetric` and `LagMetricTest`
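For illustration, a minimal sketch of how the replacement method might be declared on the `Supervisor` interface; the return type and javadoc are assumptions, not the exact signature from this patch:

```java
// Hypothetical sketch; the actual signature in the patch may differ.
public interface Supervisor
{
  /**
   * Computes the lag value that the auto-scaler acts on, replacing the
   * removed getLagMetric().
   */
  long computeLagForAutoScaler();
}
```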
The StatsD client sometimes drops metrics when its queue of unprocessed messages is completely full. This causes some high-cardinality metrics, such as per-partition lag, to be dropped.
The StatsD client exposes several initialization parameters that can be tuned to increase its capacity so that metrics are dropped less frequently.
Properties like queueSize, poolSize, processorWorkers, and senderWorkers will now be configurable at runtime.
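As a rough illustration of what these knobs control, this is how the underlying java-dogstatsd-client exposes them; the exact mapping from the Druid emitter properties to these builder calls is an assumption here, not taken from the patch:

```java
import com.timgroup.statsd.NonBlockingStatsDClientBuilder;
import com.timgroup.statsd.StatsDClient;

// Illustrative sketch only; hostname/port and all values are hypothetical.
public class StatsDTuningExample
{
  public static StatsDClient buildClient()
  {
    return new NonBlockingStatsDClientBuilder()
        .hostname("statsd.example.com")
        .port(8125)
        .queueSize(100_000)     // max unprocessed messages before metrics are dropped
        .bufferPoolSize(1_024)  // "poolSize": pooled packet buffers
        .processorWorkers(2)    // threads packing metrics into packets
        .senderWorkers(2)       // threads flushing packets to the socket
        .build();
  }
}
```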
* Additional short-circuiting knowledge in filter bundles.
Three updates:
1) The parameter "selectionRowCount" on "makeFilterBundle" is renamed
"applyRowCount", and redefined as an upper bound on rows remaining
after short-circuiting (rather than number of rows selected so far).
This definition works better for OR filters, which pass through the
FALSE set rather than the TRUE set to the next subfilter.
2) AndFilter uses min(applyRowCount, indexIntersectionSize) rather
than using selectionRowCount for the first subfilter and indexIntersectionSize
for each filter thereafter. This improves accuracy when the incoming
applyRowCount is smaller than the row count from the first few indexes.
3) OrFilter uses min(applyRowCount, totalRowCount - indexUnionSize) rather
than applyRowCount for subfilters. This allows an OR filter to pass
information about short-circuiting to its subfilters.
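To make the arithmetic in points 2 and 3 concrete, here is a small illustrative sketch of how the per-subfilter row counts combine; it is not the actual AndFilter/OrFilter code:

```java
// Illustrative sketch of the row-count propagation described above.
public class FilterBundleRowCounts
{
  /** AND: each subfilter sees at most the rows surviving the indexes applied so far. */
  static int nextApplyRowCountForAnd(int applyRowCount, int indexIntersectionSize)
  {
    return Math.min(applyRowCount, indexIntersectionSize);
  }

  /** OR: each subfilter sees at most the rows NOT yet matched by earlier indexes. */
  static int nextApplyRowCountForOr(int applyRowCount, int totalRowCount, int indexUnionSize)
  {
    return Math.min(applyRowCount, totalRowCount - indexUnionSize);
  }
}
```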
To help write tests for this, the patch also moves the sampled
wikiticker data file from sql to processing.
* Forbidden APIs.
* Better comments.
* Fix inspection.
* Adjustments to tests.
Changes:
- Add column `task_allocator_id` to `pendingSegments` metadata table.
- Add column `upgraded_from_segment_id` to `pendingSegments` metadata table.
- Add interface `PendingSegmentAllocatingTask` and implement it in all tasks that
can allocate pending segments.
- Use `taskAllocatorId` to identify the task (and its sub-tasks or replicas) to which
a pending segment has been allocated.
- Perform active cleanup of pending segments in `TaskLockbox` once there are no
active tasks for the corresponding task allocator id.
- When committing APPEND segments, also commit all upgraded pending segments
corresponding to that task allocator id.
- When committing REPLACE segments, upgrade all overlapping pending segments in
the same transaction.
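For illustration, a minimal sketch of the new interface; the method name and javadoc below are assumptions based on the description above, not the exact code:

```java
// Hypothetical sketch; the actual interface in the patch may differ.
public interface PendingSegmentAllocatingTask
{
  /**
   * Identifies the task (and its sub-tasks or replicas) on whose behalf pending
   * segments are allocated. The TaskLockbox can clean up pending segments once
   * there are no active tasks for this allocator id.
   */
  String getTaskAllocatorId();
}
```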
Bug:
#15724 introduced a bug where a rolling upgrade would cause all task locations
returned by the Overlord on an older version to be unknown.
Fix:
If the new API fails, fall back to the single task status API, which always returns a valid task location.
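A rough sketch of the fallback path; the client interface and method names below are placeholders, not the actual Overlord client API:

```java
// Hypothetical sketch of the fallback described above.
public class TaskLocationFallbackExample
{
  interface Client
  {
    String bulkTaskLocation(String taskId) throws Exception; // new API, absent on older Overlords
    String singleTaskLocation(String taskId);                // old API, always returns a valid location
  }

  static String getTaskLocation(Client client, String taskId)
  {
    try {
      return client.bulkTaskLocation(taskId);
    }
    catch (Exception e) {
      // During a rolling upgrade an older Overlord cannot serve the new API, so fall back.
      return client.singleTaskLocation(taskId);
    }
  }
}
```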
Update dependencies to address CVEs:
- Update netty from 4.1.107.Final to 4.1.108.Final to address: CVE-2024-29025
- Update zookeeper from 3.8.3 to 3.8.4 to address: CVE-2024-23944
Release notes:
- Update netty from 4.1.107.Final to 4.1.108.Final to address: CVE-2024-29025
- Update zookeeper from 3.8.3 to 3.8.4 to address: CVE-2024-23944
* Adds Druid SQL query examples for the Timeseries and GroupBy Native queries in the stats aggregator docs page
* Updates intervals in the native queries to remove the excess time part from timestamps
* Moves the Druid SQL section above the native query section because SQL is used more often by users
* Removes old Druid SQL sections
* Adds TopN Druid SQL query using ORDER BY and LIMIT
* Adds table for Druid SQL variance and standard deviation functions
* Update docs/development/extensions-core/stats.md
Co-authored-by: Abhishek Radhakrishnan <abhishek.rb19@gmail.com>
---------
Co-authored-by: Karan Kumar <karankumar1100@gmail.com>
Co-authored-by: Abhishek Radhakrishnan <abhishek.rb19@gmail.com>
Currently, export creates the files at the provided destination. The addition of a manifest file provides a list of the files created as part of the export. This makes it easier to consume data exported from Druid, especially in automated data pipelines.
Follow up to #16217
Changes:
- Update `OverlordClient.getReportAsMap()` to return `TaskReport.ReportMap`
- Move the following classes to `org.apache.druid.indexer.report` in the `druid-processing` module
- `TaskReport`
- `KillTaskReport`
- `IngestionStatsAndErrorsTaskReport`
- `TaskContextReport`
- `TaskReportFileWriter`
- `SingleFileTaskReportFileWriter`
- `TaskReportSerdeTest`
- Remove `MsqOverlordResourceTestClient` as it had only one method
which is already present in `OverlordResourceTestClient` itself
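A hypothetical sketch of the updated client method; only the return type change is from this patch, while the parameter list, call shape, and javadoc are assumptions:

```java
import org.apache.druid.indexer.report.TaskReport;

// Hypothetical sketch; not the full OverlordClient interface.
public interface OverlordClient
{
  /** Returns the reports of the given task, keyed by report name, as a typed map. */
  TaskReport.ReportMap getReportAsMap(String taskId);
}
```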
The default value for druid.coordinator.kill.period (if unspecified) has changed from P1D to the value of druid.coordinator.period.indexingPeriod. Operators can choose to override druid.coordinator.kill.period and that will take precedence over the default behavior.
The default value for the coordinator dynamic config killTaskSlotRatio is updated from 1.0 to 0.1. This ensures that kill tasks take up only 10% of the task slots right out-of-the-box instead of taking up all the task slots.
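Illustrative only (not the actual DruidCoordinatorConfig code), the two defaults resolve roughly like this:

```java
import org.joda.time.Duration;

// Illustrative sketch of the default resolution described above.
public class KillConfigDefaults
{
  /** Explicit druid.coordinator.kill.period wins; else fall back to druid.coordinator.period.indexingPeriod. */
  static Duration effectiveKillPeriod(Duration configuredKillPeriod, Duration indexingPeriod)
  {
    return configuredKillPeriod != null ? configuredKillPeriod : indexingPeriod;
  }

  /** New default for the kill task slot ratio dynamic config: 10% of slots instead of all of them. */
  static final double DEFAULT_KILL_TASK_SLOTS_RATIO = 0.1;
}
```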
* Remove stale comment and inline canDutyRun()
* druid.coordinator.kill.period defaults to druid.coordinator.period.indexingPeriod if not set.
- Remove the default P1D value for druid.coordinator.kill.period. Instead default
druid.coordinator.kill.period to whatever value druid.coordinator.period.indexingPeriod is set
to if the former config isn't specified.
- If druid.coordinator.kill.period is set, the value will take precedence over
druid.coordinator.period.indexingPeriod
* Update server/src/test/java/org/apache/druid/server/coordinator/DruidCoordinatorConfigTest.java
* Fix checkstyle error
* Clarify comment
* Update server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java
* Put back canDutyRun()
* Default killTaskSlotsRatio to 0.1 instead of 1.0 (all slots)
* Fix typo DEFAULT_MAX_COMPACTION_TASK_SLOTS
* Remove unused test method.
* Update default value of killTaskSlotsRatio in docs and web-console default mock
* Move initDuty() after params and config setup.
* Fix ORDER BY on certain GROUPING SETS.
DefaultLimitSpec (part of native groupBy) had a bug where it would assume
that results are naturally ordered by dimensions even when subtotalsSpec
is present. However, this is not necessarily the case. For certain
combinations of ORDER BY and GROUPING SETS, this would cause the ORDER BY
to be ignored.
* Fix test testGroupByWithSubtotalsSpecWithOrderLimitForcePushdown. Resorting was necessary.
* Updated the Drill test expected results, which were failing due to Druid's default sorting placing nulls first.
* Corrected the queries where date-time values are provided directly.
* Marked 2 cases as failing due to result set casting issues.
Changes:
- Add `TaskContextEnricher` interface to improve task management and monitoring
- Invoke `enrichContext` in `TaskQueue.add()` whenever a new task is submitted to the Overlord
- Add `TaskContextReport` to write out task context information in reports
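A minimal sketch of what such an extension point might look like; the method signature and javadoc are assumptions based on the description above:

```java
import org.apache.druid.indexing.common.task.Task;

// Hypothetical sketch; the real interface may differ.
public interface TaskContextEnricher
{
  /**
   * Invoked from TaskQueue.add() for every task submitted to the Overlord,
   * giving deployments a hook to add context entries used for task
   * management and monitoring.
   */
  void enrichContext(Task task);
}
```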
* SQL tests: avoid mixing skip and cannot vectorize.
skipVectorize switches off vectorization tests completely, and
cannotVectorize turns vectorization tests into negative tests. It doesn't
make sense to use them together, so this patch makes it an error to do so,
and cleans up cases where both are mentioned.
This patch also has the effect of changing various tests from skipVectorize
to cannotVectorize, because in the past when both were mentioned,
skipVectorize would take priority.
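For context, a simplified sketch of how the two helpers are meant to be used in a query test; the class, method bodies, and call shapes below are placeholders, not the exact test scaffolding:

```java
// Illustrative only; names and signatures are simplified placeholders.
public class VectorizationTestStyles
{
  public void queryThatCannotBeVectorized()
  {
    // Negative test: the vectorized run is expected to fail.
    cannotVectorize();
    runQueryTest();
  }

  public void queryWhereVectorizationIsNotChecked()
  {
    // Skips the vectorized run entirely. Combining this with cannotVectorize()
    // is contradictory and is now rejected as an error.
    skipVectorize();
    runQueryTest();
  }

  void cannotVectorize() {}   // placeholder for the real helper
  void skipVectorize() {}     // placeholder for the real helper
  void runQueryTest() {}      // placeholder for the real test body
}
```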
* Fix bug with StringAnyAggregatorFactory attempting to vectorize when it cannot.
* Fix tests.
Compaction in the native engine by default records the state of compaction for each segment in the lastCompactionState segment field. This PR adds support for doing the same in the MSQ engine, targeted for future cases such as REPLACE and compaction done via MSQ.
Note that this PR doesn't implicitly store the compaction state for MSQ replace tasks; it is stored with flag "storeCompactionState": true in the query context.
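For illustration, an MSQ REPLACE query would opt in through its query context roughly as follows; only the `storeCompactionState` key comes from this change, and the surrounding code is a hypothetical sketch:

```java
import java.util.Map;

// Illustrative sketch: opting an MSQ REPLACE query into storing compaction state.
public class CompactionStateContextExample
{
  static Map<String, Object> queryContext()
  {
    return Map.of("storeCompactionState", true); // all other context keys omitted
  }
}
```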
Currently, runtime exceptions generated while writing frames include only the exception itself, without the name of the column in which it was encountered. This patch adds that information to the error and makes the error non-retryable.
This PR logs the segment type chosen and the reason for the choice, and adds this information to the query report so it can be displayed in the UI.
This PR adds a new section to the reports, segmentReport. It contains the segment type created if the query is an ingestion, and null otherwise.
* Reduce upload buffer size in GoogleTaskLogs.
Use a 1 MB upload buffer, rather than the default of 15 MB in the API client. This is
mainly because MiddleManagers may upload logs in parallel, and typically have small heaps. The
default-sized 15 MB buffers add up quickly and can cause a MiddleManager to run out of memory.
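As an illustrative example, ten concurrent log uploads would otherwise hold 10 × 15 MB = 150 MB of buffers, versus 10 MB with the reduced size.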
* Make bufferSize a nullable Integer. Add tests.
* Better forced mode indication
* more robust
* Update web-console/src/components/header-bar/header-bar.tsx
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
* Update web-console/src/components/header-bar/restricted-mode/__snapshots__/restricted-mode.spec.tsx.snap
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
* Update web-console/src/components/header-bar/restricted-mode/restricted-mode.tsx
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
* reformat
* forced => manual capability detection
* typo
* typo2
---------
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
Support for exporting MSQ results to a GCS bucket. This essentially copies the logic of the S3 export for GCS, originally done by @adarshsanjeev in #15689.
* Allow typedIn to run in replace-with-default mode.
Useful when data servers, like Historicals, are running in replace-with-default
mode and the Broker is running in SQL-compatible mode, which can happen during
a rolling update that is applying a mode change.