Commit Graph

Abhishek Radhakrishnan fb7bb0953d
Kill segments by versions (#15994)
* Kill task version support.

Kill tasks by default kill all versions of unused segments in the specified
interval. Users who want to delete specific versions (for example, for data compliance
reasons) and keep the rest of the versions can specify the optional version in the
kill task payload, as sketched below.
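
For illustration, such a kill task might be submitted like the sketch below (Python with `requests`; the Router address and the `versions` field name and values are assumptions based on the description above):

```python
import requests

# Minimal sketch: a kill task limited to specific segment versions.
# "versions" is the assumed name of the new optional field; the other
# fields are standard kill task properties.
kill_task = {
    "type": "kill",
    "dataSource": "wikipedia",
    "interval": "2024-01-01/2024-02-01",
    "versions": ["2024-01-15T00:00:00.000Z"],
}
requests.post("http://localhost:8888/druid/indexer/v1/task", json=kill_task)
```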

* Formatting changes.

* Multi version tests in RetrieveSegmentsActionsTest

Sort of like method-level parameterized tests.

* Address review feedback

* Accept a list of versions instead of a single version.

Support multiple versions.

* Tests for multiple versions.

* Update docs

* Cleanup

* Address review comments.

Retain the old interface method, make it a default method, and route it to the
variant that accepts nullable versions. Update usages to use the default method
where versions don't matter.

* Remove versions from retrieve used segments action.

* Some updates.

* Apply suggestions from code review

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>

* /s/actual/observed/g

* minor test cleanup

* WIP: Test fixes and updates. Also add test for kill by version with used load spec.

Checkpoint.

---------

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>
2024-03-13 09:37:30 +05:30
Katya Macedo 6f6f86c325
Update `maxRowsInMemory` and `maxBytesInMemory` description (#16104) 2024-03-12 14:40:15 -07:00
George Shiqi Wu 94d2a28465
Add deep storage segment metric (#16072)
* Add new metric for deepStorage segments

* Add docs

* change metric name
2024-03-11 10:24:46 -04:00
Sensor 2d62b4f09b
docs refinement: json format (#16080)
* docs refinement: json format

* Update docs/api-reference/tasks-api.md

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>

---------

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>
2024-03-11 15:49:14 +08:00
Charles Smith 3caacba8c5
update window functions doc (#15902)
Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
2024-03-07 15:16:52 -08:00
Jill Osborne 67ae0ff450
Update docs for rabbit community extension (#16069)
* Updated docs for rabbit community extension

* Updated after review
2024-03-07 11:29:53 -08:00
Adithya Chakilam 564c44ed85
Add stats segmentsRead and segmentsPublished to compaction task reports (#15947)
Changes:
- Add visibility into the number of segments read/published by each parallel compaction
- Add new fields `segmentsRead` and `segmentsPublished` to `IngestionStatsAndErrorsTaskReportData`
- Update `ParallelIndexSupervisorTask` to populate the new stats
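
A sketch of reading the new stats from a completed compaction task's report (the nesting under `ingestionStatsAndErrors.payload` is an assumption; the task ID is hypothetical):

```python
import requests

# Fetch the task report and print the new compaction stats.
task_id = "compact_wikipedia_2024-03-01"  # hypothetical task id
report = requests.get(f"http://localhost:8888/druid/indexer/v1/task/{task_id}/reports").json()
payload = report["ingestionStatsAndErrors"]["payload"]
print(payload.get("segmentsRead"), payload.get("segmentsPublished"))
```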
2024-03-07 09:37:23 +05:30
Charles Smith ebf3bdd909
restore information about truncated responses to sql api (#16001)
Co-authored-by: 317brian <53799971+317brian@users.noreply.github.com>
Co-authored-by: Victoria Lim <lim.t.victoria@gmail.com>
2024-03-06 14:03:58 -08:00
Sergio Ferragut d38703281c
updated description of rowsPerPage in export operations (#16048)
* updated description of rowsPerPage in export operations

* Update docs/multi-stage-query/reference.md

Co-authored-by: Charles Smith <techdocsmith@gmail.com>

---------

Co-authored-by: Charles Smith <techdocsmith@gmail.com>
2024-03-05 15:42:12 -08:00
Gian Merlino 566013a5f5
Docs: Fix spelling of 5 GB. (#16040)
The spellchecker does not consider "5GB" to be spelled correctly.
2024-03-04 22:37:38 -08:00
zachjsh 720f1e834a
Add support for AzureDNSZone enabled storage accounts used for deep storage (#16016)
* Add support for AzureDNSZone enabled storage accounts used for deep storage

Added a new config to AzureAccountConfig

`storageAccountEndpointSuffix`

which allows the user to specify a storage account endpoint suffix where the underlying
storage account is enabled for AzureDNSZone. The previous config, `endpointSuffix`, did not
support such accounts and has been deprecated in favor of this new config. Also fixed an
issue where `managedIdentityClientId` was not being set properly.

* address review comments

* add back azure government link and docs
2024-03-04 16:13:28 -05:00
Gian Merlino 930655ff18
Move retries into DataSegmentPusher implementations. (#15938)
* Move retries into DataSegmentPusher implementations.

The individual implementations know better when they should and should
not retry. They can also generate better error messages.

The inspiration for this patch was a situation where EntityTooLarge was
generated by the S3DataSegmentPusher, and retried uselessly by the
retry harness in PartialSegmentMergeTask.

* Fix missing var.

* Adjust imports.

* Tests, comments, style.

* Remove unused import.
2024-03-04 10:36:21 -08:00
Katya Macedo ced8be3044
docs: Add upgrade notes for Druid 29.0.0 (#16022) 2024-03-04 08:58:52 -08:00
Sensor 4e9b758661
Support configurable CPU resource for Kubernetes job under MoK Mode (#16008)
* support configurable CPU resource for Kubernetes job

* update property doc

* fix test name

* refine doc format
2024-03-04 10:12:09 -05:00
Adithya Chakilam ec52f686c0
Fix compaction tasks reports getting overwritten (#15981)
* Fix compaction task reports getting overwritten

* only skip for compaction task

* address comments

* fix boolean

* move boolean flag to task rather than spec

* rename variable

* add docs, fix missing case

* Update docs/ingestion/tasks.md

* rename var

* add task report decode check in IT

* change assert
2024-03-04 10:10:17 -05:00
317brian b3015bd7ce
docs: mention acid-compliance for meta store (#16014)
* docs: add mermaid diagram support

* fix crash when parsing data in data loader that can not be parsed (#15983)

* update jetty to address CVE (#16000)

* Concurrent replace should work with supervisors using concurrent locks (#15995)

* Concurrent replace should work with supervisors using concurrent locks

* Ignore supervisors with useConcurrentLocks set to false

* Apply feedback

* Add pre-check for heavy debug logs (#15706)

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>
Co-authored-by: Benedict Jin <asdf2014@apache.org>

* Remove helm paths from CodeQL config (#16006)

* docs: mention acid-compliance for metadb

---------

Co-authored-by: Vadim Ogievetsky <vadim@ogievetsky.com>
Co-authored-by: Jan Werner <105367074+janjwerner-confluent@users.noreply.github.com>
Co-authored-by: AmatyaAvadhanula <amatya.avadhanula@imply.io>
Co-authored-by: Sensor <fectrain@outlook.com>
Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>
Co-authored-by: Benedict Jin <asdf2014@apache.org>
2024-03-04 11:00:38 +08:00
Zoltan Haindrich bf0995f846
Introduce dynamic table append (#15897) 2024-03-01 04:31:57 -05:00
317brian 3df161f73c
docs: update security doc for hashing (#15970)
* docs: add mermaid diagram support

* docs: update druid-basic-security doc to mention caching

* Update docs/development/extensions-core/druid-basic-security.md

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>

---------

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>
2024-02-28 09:48:37 +08:00
benkrug 0c601bf430
Update basic-cluster-tuning.md (#14964)
* Update basic-cluster-tuning.md

The sentence "When free system memory is greater than or equal to druid.segmentCache.locations, the more segment data the Historical can be held in the memory-mapped segment cache" didn't read well.  Updated to clarify it.

* Update docs/operations/basic-cluster-tuning.md

* Update docs/operations/basic-cluster-tuning.md

---------

Co-authored-by: Charles Smith <techdocsmith@gmail.com>
2024-02-28 09:48:20 +08:00
AlbericByte e7d753d4b0
update the doc for dump-segment tool when using jdk11+ (#15971)
* update the doc for dump-segment tool when using jdk11+

* update the style

* fix spell check error
2024-02-28 09:40:10 +08:00
Abhishek Radhakrishnan beccc401e1
Segments created in the same batch have the same `created_date` entry & rename metric (#15977)
* All segments stored in the same batch have the same created_date entry.

In the absence of a group_id column, this metadata would allow us to easily
reason about and troubleshoot ingestion-related issues.

* Rename metric name and code references to eligibleUnusedSegments.

Address review comment from https://github.com/apache/druid/pull/15941#discussion_r1503631992
2024-02-27 17:28:43 +05:30
Karan Kumar 5bb5b41b18
Adding task pending time in MSQ reports (#15966)
Added a new field pendingMs in MSQ task reports. This helps in figuring out the exact run time of the MSQ worker tasks.
Fixed data races.
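
A sketch of pulling an MSQ task report to inspect worker timings, including the new pendingMs field (the report endpoint is the standard task-reports API; the exact location of pendingMs inside the `multiStageQuery` payload is an assumption):

```python
import requests

# Fetch the MSQ task report and inspect the payload for worker timing fields such as pendingMs.
task_id = "query-abc123"  # hypothetical MSQ controller task id
report = requests.get(f"http://localhost:8888/druid/indexer/v1/task/{task_id}/reports").json()
print(report["multiStageQuery"]["payload"])
```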
2024-02-27 14:41:28 +05:30
Abhishek Radhakrishnan 38ecf980d0
Refactor and add tests and metric to KillUnusedSegments duty (auto-kill) (#15941)
* Kill duty and test improvements.

Initial commit with:
- Bug fixes - auto-kill can throw NPE when there are no datasources present and defaults mismatch.
- Add new stat for candidate segment intervals killed.
- Move a couple of debug logs to info logs for improved visibility (should only log once per kill period).
- Remove redundant checks for code readability.
- Updated tests to stop using mocks (the mocks also weren't using the last updated timestamp) and
  added more test coverage for different config parameters.
- Add a couple of unit tests that are ignored for the eternity case to prove that
  the kill duty doesn't clean up segments with ALL grain or that end in DateTimes.MAX.
- Migrate Druid exception from user to operator persona.

* Address review comments.

* Remove unused methods.

* fix up format specifier and validate bad config tests.

* Consolidate the helpers a bit more and add another test.

* Update test names. Add javadoc placeholders for slightly involved tests.

* Add docs for metric kill/candidateUnusedSegments/count.

Also, rename to disambiguate.

* Comments.

* Apply logging suggestions from code review

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>

* Review comments

- Clarify docs on eligibility.
- Add test for multiple segments in the same interval. Clarify comment.
- Remove log line from test.
- Remove lastUpdatedDate = now.plus(10) from test.

* minor cleanup.

* Clarify javadocs for getUnusedSegmentIntervals().

---------

Co-authored-by: Kashif Faraz <kashif.faraz@gmail.com>
2024-02-27 12:14:41 +05:30
Abhishek Radhakrishnan 67a6224d91
Fix up incorrect `PARTITIONED BY` error messages (#15961)
* Fix up typos, inaccuracies and clean up code related to PARTITIONED BY.

* Remove wrapper function and update tests to use DruidExceptionMatcher.

* Checkstyle and Intellij inspection fixes.
2024-02-26 14:17:53 -05:00
Benjamin Hopp ebb7190545
Docs: Change single-dim to hashed in example for index task (#15529) 2024-02-26 09:16:10 +05:30
Gian Merlino b69f89d9f8
Clarify where to set druid.monitoring.monitors. (#15729) 2024-02-23 18:49:37 +05:30
Adithya Chakilam 1f443d218c
Enable partition stats on streaming task completion report (#15930)
Changes:
- Add visibility into the number of records processed by each streaming task per partition
- Add field `recordsProcessed` to `IngestionStatsAndErrorsTaskReportData`
- Populate the number of records processed per partition in `SeekableStreamIndexTaskRunner`
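
A sketch of reading the per-partition counts from a finished streaming task's report (the nesting of `recordsProcessed` under `ingestionStatsAndErrors.payload` is an assumption; the task ID is hypothetical):

```python
import requests

# Fetch the task report and print the per-partition record counts.
task_id = "index_kafka_wikipedia_abc"  # hypothetical streaming task id
report = requests.get(f"http://localhost:8888/druid/indexer/v1/task/{task_id}/reports").json()
stats = report["ingestionStatsAndErrors"]["payload"]
print(stats.get("recordsProcessed"))  # e.g. {"0": 12345, "1": 11982}
```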
2024-02-23 16:29:03 +05:30
Jamie 80942d5754
Feature: add support for ingesting from rabbitmq super streams (#14137)
* Add support for ingesting from Rabbit MQ Super Streams
2024-02-22 10:50:37 +05:30
George Shiqi Wu 59bb72a926
Fix parsing of env variables when properties have underscores (#15919)
* Fix parsing of env variables when properties have underscores

* Add documentation

* Use a % sign instead
2024-02-21 13:18:21 -05:00
317brian c98d54f3c4
docs: delete unused file that causes confusion (#15910) 2024-02-14 16:42:02 -08:00
Peter Marshall cae9cbd7d7
Update tasks.md (#15887)
Remove erroneous white space causing render issues on this page.
2024-02-13 05:20:09 -08:00
Clint Wylie dad8398a4d
start process of deprecating non-sql compatible legacy configurations (#15713)
Starting the process to officially deprecate non-SQL-compatible modes by updating docs to aggressively call out that Druid's non-SQL-compliant modes are deprecated and will go away someday. There are no code or behavior changes in this PR.
2024-02-13 15:31:45 +05:30
Katya Macedo 0f29ece6a9
[Docs] Refactor streaming ingestion section (#15591)
Merging the work so far. @ektravel, @vogievetsky: if there are additional improvements, let's track them and make another PR.

* Refactor streaming ingestion docs

* Update property definition

* Update after review

* Update known issues

* Move kinesis and kafka topics to ingestion, add redirects

* Saving changes

* Saving

* Add input format text

* Update after review

* Minor text edit

* Update example syntax

* Revert back to colon

* Fix merge conflicts

* Fix broken links

* Fix spelling error
2024-02-12 13:52:42 -08:00
Charles Smith 2a42b11660
remove legacy Jupyter tutorial files (#15834)
* remove legacy files

* redirection for the jupyter tutorial page

* remove tutorial from sidebar

* remove redirection
2024-02-12 13:45:47 -08:00
Gian Merlino 7fea34abdd
LOOKUP docs: clarify behavior of replaceMissingValueWith. (#15879)
Clarify behavior when expr is null.
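
A sketch of the three-argument LOOKUP form whose replaceMissingValueWith behavior this doc change clarifies; the lookup name and columns are hypothetical:

```python
import requests

# Run a SQL query where the third LOOKUP argument supplies a replacement
# value when the lookup does not yield one.
sql = """
SELECT LOOKUP(country_code, 'country_name_lookup', 'Unknown') AS country_name, COUNT(*) AS cnt
FROM wikipedia
GROUP BY 1
"""
requests.post("http://localhost:8888/druid/v2/sql", json={"query": sql})
```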
2024-02-11 13:11:00 -08:00
zachjsh f9ee2c353b
Extend the PARTITION BY clause to accept string literals for the time partitioning (#15836)
This PR contains a portion of the changes from the inactive draft PR for integrating the catalog with the Calcite planner (https://github.com/apache/druid/pull/13686 by @paul-rogers), extending the PARTITION BY clause to accept string literals for the time partitioning.
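
For illustration, a SQL-based ingestion sketch using a string literal for the time partitioning (the clause shown is the SQL-based ingestion PARTITIONED BY clause; the accepted literal values, such as 'day', are assumptions):

```python
import requests

# Submit an MSQ ingestion query whose time partitioning is given as a string literal
# instead of the keyword form (e.g. PARTITIONED BY DAY).
sql = """
INSERT INTO wikipedia_rollup
SELECT __time, page, COUNT(*) AS cnt
FROM wikipedia
GROUP BY 1, 2
PARTITIONED BY 'day'
"""
requests.post("http://localhost:8888/druid/v2/sql/task", json={"query": sql})
```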
2024-02-09 11:45:38 -05:00
Tom 11a8624ef1
allow for kafka-emitter to have extra dimensions be set for each event it emits (#15845)
* allow for kafka-emitter to have extra dimensions be set for each event it emits

* fix checkstyle issue in kafkaemitterconfig

* make changes to fix docs, and clean up copy-paste error in #toString()

* undo formatting to markdown table

* add more branches so test passes

* fix checkstyle issue
2024-02-08 22:55:24 -08:00
Abhishek Radhakrishnan 1a5b57df84
Update `groupId` for delta-lake and iceberg extensions (#15843)
* Update the group id to org.apache.druid.extensions.contrib for contrib exts.

* Note iceberg and delta lake extensions in extensions.md

* properties and shell backticks

* Update groupId in distribution/pom.xml

* remove delta-lake from dist.

* Add note on downloading extension.
2024-02-07 23:54:06 -08:00
Adarsh Sanjeev 514b3b4d01
Add export capabilities to MSQ with SQL syntax (#15689)
* Add test

* Parser changes to support export statements

* Fix builds

* Address comments

* Add frame processor

* Address review comments

* Fix builds

* Update syntax

* Webconsole workaround

* Refactor

* Refactor

* Change export file path

* Update docs

* Remove webconsole changes

* Fix spelling mistake

* Parser changes, add tests

* Parser changes, resolve build warnings

* Fix failing test

* Fix failing test

* Fix IT tests

* Add tests

* Cleanup

* Fix unparse

* Fix forbidden API

* Update docs

* Update docs

* Address review comments

* Address review comments

* Fix tests

* Address review comments

* Fix insert unparse

* Add external write resource action

* Fix tests

* Add resource check to overlord resource

* Fix tests

* Add IT

* Update syntax

* Update tests

* Update permission

* Address review comments

* Address review comments

* Address review comments

* Add tests

* Add check for runtime parameter for bucket and path

* Add check for runtime parameter for bucket and path

* Add tests

* Update docs

* Fix NPE

* Update docs, remove deadcode

* Fix formatting
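
A best-effort sketch of the export syntax added here, submitted as an MSQ task (the EXTERN/S3 form and its argument names, bucket and prefix, are assumptions based on the docs updates in this PR, not a definitive reference):

```python
import requests

# Export query results to S3 as CSV via SQL-based ingestion.
sql = """
INSERT INTO EXTERN(S3(bucket => 'my-bucket', prefix => 'exports/wikipedia'))
AS CSV
SELECT __time, page, added
FROM wikipedia
"""
requests.post("http://localhost:8888/druid/v2/sql/task", json={"query": sql})
```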
2024-02-07 22:08:50 +05:30
Vadim Ogievetsky f2b242b6e6
update console to core Druid changes (#15854) 2024-02-07 19:44:25 +05:30
Pramod Immaneni 59bca0951a
Parallelize storage of incremental segments (#13982)
During ingestion, incremental segments are created in memory for the different time chunks and persisted to disk when certain thresholds are reached (max number of rows, max memory, incremental persist period, etc.). In the case where there are a lot of dimensions and metrics (1000+), it was observed that creating and serializing the incremental segment file format and persisting the file took a while and blocked ingestion of new data. This affected real-time ingestion. This serialization and persistence can be parallelized across the different time chunks, which is what this update does.

The patch adds a simple configuration parameter to the ingestion tuning configuration to specify the number of persistence threads. The default value is 1 if it is not specified, which keeps the behavior the same as it is today.
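
A sketch of what the resulting tuningConfig might look like (the property name `numPersistThreads` is an assumption; the patch only states that a tuning parameter with a default of 1 was added):

```python
# Fragment of a streaming tuningConfig with the assumed persistence-thread knob.
tuning_config = {
    "type": "kafka",
    "maxRowsInMemory": 150000,
    "numPersistThreads": 2,  # the assumed default of 1 keeps today's single-threaded behavior
}
print(tuning_config)
```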
2024-02-07 10:43:05 +05:30
317brian 2dc71c7874
docs: fix rendering (#15835) 2024-02-06 07:18:43 -08:00
Gian Merlino 54b30646f3
Add sqlReverseLookupThreshold for ReverseLookupRule. (#15832)
If lots of keys map to the same value, reversing a LOOKUP call can slow
things down unacceptably. To protect against this, this patch introduces
a parameter sqlReverseLookupThreshold representing the maximum size of an
IN filter that will be created as part of lookup reversal.

If inSubQueryThreshold is set to a smaller value than
sqlReverseLookupThreshold, then inSubQueryThreshold will be used instead.
This allows users to use that single parameter to control IN sizes if they
wish.
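
A sketch of setting the new threshold through the SQL query context (the context key comes from this patch; the lookup name and the threshold value are arbitrary illustrations):

```python
import requests

# Lower the cap on how large an IN filter a reversed LOOKUP call may expand into.
payload = {
    "query": "SELECT COUNT(*) FROM wikipedia WHERE LOOKUP(page, 'page_lookup') = 'Druid'",
    "context": {"sqlReverseLookupThreshold": 1000},
}
requests.post("http://localhost:8888/druid/v2/sql", json=payload)
```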
2024-02-06 16:32:05 +05:30
Atul Mohan 2e46a98024
Add range filtering support for iceberg ingestion (#15782)
* Add range filtering support for iceberg ingestion

* Docs formatting

* Spelling
2024-02-01 23:32:30 -08:00
Aru Raghuwanshi 223f29d64c
Update input-sources.md for fixing the warehouse path example under S3 (#15823) 2024-02-01 23:32:05 -08:00
317brian 6d617c34d2
docs: revise concurrent append and replace (#15760)
Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
2024-02-01 11:03:36 -08:00
Laksh Singla 7d65caf0c5
Update the docs for EARLIEST_BY/LATEST_BY aggregators with the newly added numeric capabilities (#15670) 2024-02-01 10:24:43 +05:30
Abhishek Radhakrishnan 9f95a691f7
Extension to read and ingest Delta Lake tables (#15755)
* something

* test commit

* compilation fix

* more compilation fixes (fixme placeholders)

* Comment out druid-kerberos build since it conflicts with newly added transitive deps from delta-lake

Will need to sort out the dependencies later.

* checkpoint

* remove snapshot schema since we can get schema from the row

* iterator bug fix

* json json json

* sampler flow

* empty impls for read(InputStats) and sample()

* conversion?

* conversion, without timestamp

* Web console changes to show Delta Lake

* Asset bug fix and tile load

* Add missing pieces to input source info, etc.

* fix stuff

* Use a different delta lake asset

* Delta lake extension dependencies

* Cleanup

* Add InputSource, module init and helper code to process delta files.

* Test init

* Checkpoint changes

* Test resources and updates

* some fixes

* move to the correct package

* More tests

* Test cleanup

* TODOs

* Test updates

* requirements and javadocs

* Adjust dependencies

* Update readme

* Bump up version

* fixup typo in deps

* forbidden api and checkstyle checks

* Trim down dependencies

* new lines

* Fixup Intellij inspections.

* Add equals() and hashCode()

* chain splits, intellij inspections

* review comments and todo placeholder

* fix up some docs

* null table path and test dependencies. Fixup broken link.

* run prettify

* Different test; fixes

* Upgrade pyspark and delta-spark to latest (3.5.0 and 3.0.0) and regenerate tests

* yank the old test resource.

* add a couple of sad path tests

* Updates to readme based on latest.

* Version support

* Extract Delta DateTime conversions to DeltaTimeUtils class and add test

* More comprehensive split tests.

* Some test renames.

* Cleanup and update instructions.

* add pruneSchema() optimization for table scans.

* Oops, missed the parquet files.

* Update default table and rename schema constants.

* Test setup and misc changes.

* Add class loader logic as the context class loader is unaware about extension classes

* change some table client creation logic.

* Add hadoop-aws, hadoop-common and related exclusions.

* Remove org.apache.hadoop:hadoop-common

* Apply suggestions from code review

Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>

* Add entry to .spelling to fix docs static check
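
For reference, a sketch of what a Delta Lake input source might look like in a batch ingestion spec (the type name `delta` and the `tablePath` property are assumptions based on the extension's README):

```python
# Fragment of an ingestion spec ioConfig using the assumed Delta Lake input source.
delta_input_source = {
    "type": "delta",
    "tablePath": "s3://my-bucket/path/to/delta-table",
}
print(delta_input_source)
```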

---------

Co-authored-by: abhishekagarwal87 <1477457+abhishekagarwal87@users.noreply.github.com>
Co-authored-by: Laksh Singla <lakshsingla@gmail.com>
Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
2024-01-30 21:53:50 -08:00
Benjamin Hopp 6177f6efd7
Fixing formatting of Iceberg Catalog Object (#15748) 2024-01-30 20:17:38 -08:00
Abhishek Radhakrishnan dbdfae3011
Fix up typo </br /> -> <br /> and adjust interpolated exception msg in InvalidNullByteFault. (#15804) 2024-01-30 12:44:51 -08:00