changes:
* fixes a bug where the unnest storage adapter did not preserve the underlying column's dictionary uniqueness when allowing a dimension selector cursor
* fixes a bug where unnest on realtime segments with empty rows incorrectly specified index 0 as the row dictionary value
Adds a shuffle based on the resultShuffleSpecFactory after a limit processor, depending on the query destination. LimitFrameProcessors do not currently update the partition-boosting column, so the boost column is also added to the previous stage when one is required.
Temporary fix until CALCITE-6435 is fixed (released and upgraded to):
added a custom rule (FixIncorrectInExpansionTypes) to fix up the types of the affected literals
added a test case that will alert on upgrade
* fix equality and typed IN filter behavior for numeric match values on string columns
changes:
* EqualityFilter and TypedInFilter numeric match values against string columns will now cast the strings to numeric values, instead of converting the numeric values directly to strings for pure string equality. This is consistent with the casts that are absorbed ("eaten") in the SQL layer, as well as with classic Druid behavior.
* added tests to cover numeric equality matching (see the sketch below). Double match values in particular would previously fail to match string values, since `1.0` would become `'1.0'`, which does not match `'1'`.
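A minimal, hypothetical sketch of the semantics described above (plain Java, not Druid's actual filter code), contrasting stringifying the match value with casting the column string to a number:

```java
// Illustrative only: compares a numeric match value against string column
// values both ways described above. Not Druid's actual implementation.
class NumericMatchSketch
{
  // Old behavior: stringify the match value and use pure string equality.
  static boolean matchAsString(double matchValue, String columnValue)
  {
    return String.valueOf(matchValue).equals(columnValue);
  }

  // New behavior: cast the string to a numeric value and compare numerically.
  static boolean matchAsNumber(double matchValue, String columnValue)
  {
    try {
      return Double.parseDouble(columnValue) == matchValue;
    }
    catch (NumberFormatException e) {
      return false; // non-numeric strings cannot match a numeric value
    }
  }

  public static void main(String[] args)
  {
    System.out.println(matchAsString(1.0, "1")); // false: '1.0' != '1'
    System.out.println(matchAsNumber(1.0, "1")); // true: "1" casts to 1.0
  }
}
```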
Motivation:
- Improve code hygiene
- Make `SegmentLoadDropHandler` easily extensible
Changes:
- Add `SegmentBootstrapper`
- Move the code for bootstrapping segments (those already cached on disk and those fetched from the coordinator) into `SegmentBootstrapper`.
- No functional change
- Use separate executor service in `SegmentBootstrapper`
- Bind `SegmentBootstrapper` to `ManageLifecycle` explicitly in `CliBroker`, `CliHistorical`, etc. (see the sketch below)
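A hedged sketch of what such an explicit lifecycle binding can look like; the actual module wiring in `CliBroker`/`CliHistorical` may differ, and the Druid package paths in the imports are assumptions:

```java
// Assumed imports: the ManageLifecycle and SegmentBootstrapper package
// paths below are best guesses, not verified against the PR.
import com.google.inject.Binder;
import org.apache.druid.guice.ManageLifecycle;
import org.apache.druid.server.coordination.SegmentBootstrapper;

class LifecycleBindingSketch
{
  static void configure(Binder binder)
  {
    // Binding in the ManageLifecycle scope lets the injector start and
    // stop SegmentBootstrapper along with the rest of the server.
    binder.bind(SegmentBootstrapper.class).in(ManageLifecycle.class);
  }
}
```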
* Update examples/bin/dsql scripts to accept Python 3
Remove redundant urllib import
Translating to Python 3: change xrange to range
Translating to Python 3: change long to int
Translating to Python 3: change urllib2 methods, and fix encoding/decoding issues
Remove unnecessary import
Add option for Python 2
Rename files
* Update examples/bin/dsql
Co-authored-by: Benedict Jin <asdf2014@apache.org>
* Resolve PR comments
Add comment in files indicating updates need to be made in both places
Update examples/bin/dsql
Co-authored-by: Benedict Jin <asdf2014@apache.org>
* Update error output when using Python 2.
Co-authored-by: Abhishek Radhakrishnan <abhishek.rb19@gmail.com>
---------
Co-authored-by: Benedict Jin <asdf2014@apache.org>
Co-authored-by: Abhishek Radhakrishnan <abhishek.rb19@gmail.com>
Previously, the segment granularity for tables in the catalog had to be defined in period format, i.e. `'PT1H'`, `'P1D'`, etc. This prevented a user from defining a segment granularity of `'ALL'` for a table in the catalog, which may be a valid use case. With this change, a user may define the segment granularity of a catalog table as any string that resolves to a valid granularity via either `Granularity.fromString(str)` or `new PeriodGranularity(new Period(value), null, null)`, provided that granularity maps to a standard supported granularity, i.e. `GranularityType.isStandard(granularity)` returns true. As a result, a user who wants a catalog table's segment granularity to be hourly may set the table's segment granularity property to either `PT1H` or `HOUR`. These are the same formats accepted at query time.
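A minimal sketch of the resolution order implied above (try `Granularity.fromString`, fall back to `PeriodGranularity`, then require a standard granularity); the helper name is hypothetical and this is not the PR's actual code:

```java
import org.apache.druid.java.util.common.granularity.Granularity;
import org.apache.druid.java.util.common.granularity.GranularityType;
import org.apache.druid.java.util.common.granularity.PeriodGranularity;
import org.joda.time.Period;

class CatalogGranularitySketch
{
  // Hypothetical helper: resolve a catalog string such as "HOUR", "PT1H",
  // or "ALL" into a standard granularity, or throw if it is unsupported.
  static Granularity resolve(String value)
  {
    Granularity granularity;
    try {
      granularity = Granularity.fromString(value);       // "HOUR", "ALL", ...
    }
    catch (IllegalArgumentException e) {
      granularity = new PeriodGranularity(new Period(value), null, null); // "PT1H", "P1D", ...
    }
    if (!GranularityType.isStandard(granularity)) {
      throw new IllegalArgumentException("Unsupported segment granularity: " + value);
    }
    return granularity;
  }
}
```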
Updated the javadoc for `ColumnIndexSupplier.as` to elaborate on the types of indexes callers might want to ask for from the method, and to help implementors know which kinds of indexes they should implement in order to participate in filtering.
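A hedged restatement of the `as` contract in code form; the `ValueSetIndex` interface below is hypothetical (Druid's real index interfaces have different names), and the simplified supplier interface only mirrors the method's shape:

```java
import javax.annotation.Nullable;

// Hypothetical index type a filter might request; not a real Druid class.
interface ValueSetIndex {}

// Simplified mirror of the real interface's shape: `as` returns the
// requested index implementation, or null if the column cannot supply it.
interface ColumnIndexSupplierSketch
{
  @Nullable
  <T> T as(Class<T> clazz);
}

class IndexLookupSketch
{
  static boolean canFilterWithIndex(ColumnIndexSupplierSketch supplier)
  {
    // Callers must null-check: a null return means the column offers no
    // such index, and filtering falls back to row-by-row value matching.
    return supplier.as(ValueSetIndex.class) != null;
  }
}
```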
* Defer more expressions in vectorized groupBy.
This patch adds a way for columns to provide GroupByVectorColumnSelectors,
which controls how the groupBy engine operates on them. This mechanism is used
by ExpressionVirtualColumn to provide an ExpressionDeferredGroupByVectorColumnSelector
that uses the inputs of an expression as the grouping key. The actual expression
evaluation is deferred until the grouped ResultRow is created.
A new context parameter "deferExpressionDimensions" allows users to control when
this deferred selector is used. The default is "fixedWidthNonNumeric", which is a
change from the prior behavior; users can restore the prior behavior by setting
the parameter to "singleString" (see the sketch below).
* Fix style.
* Add deferExpressionDimensions to SqlExpressionBenchmark.
* Fix style.
* Fix inspections.
* Add more testing.
* Use valueOrDefault.
* Compute exprKeyBytes a bit lighter-weight.
* Add MSQ query context maxNumSegments.
- Default is MAX_INT (unbounded).
- When set, if a time chunk contains more segments than the maximum specified in the
  query context, the MSQ task will fail with a TooManySegments fault (see the sketch below).
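A minimal sketch of setting this parameter in a query context map; the value 100 is an arbitrary example:

```java
import java.util.Map;

class MaxNumSegmentsSketch
{
  static Map<String, Object> context()
  {
    // Fail the MSQ task with a TooManySegments fault if any time chunk
    // would contain more than 100 segments.
    return Map.of("maxNumSegments", 100);
  }
}
```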
* Fixup hashCode().
* Rename and checkpoint.
* Add some insert and replace happy and sad path tests.
* Update error msg.
* Commentary
* Adjust the default to be null (meaning no max bound on number of segments).
Also fix formatter.
* Fix CodeQL warnings and minor cleanup.
* Assert on maxNumSegments tuning config.
* Minor test cleanup.
* Use null default for the MultiStageQueryContext as well
* Review feedback
* Review feedback
* Move logic to common function getPartitionsByBucket shared by INSERT and REPLACE.
* Rename to validateNumSegmentsPerBucketOrThrow() for consistency.
* Add segmentGranularity to error message.
MSQ cannot process null bytes in string fields, and the current workaround is to remove them using the REPLACE function. A 'removeNullBytes' context parameter has been added, which sanitizes the input string fields by removing these null bytes (see the sketch below).
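A minimal sketch (not MSQ's actual implementation) of the sanitization this parameter enables, stripping null bytes from a string field:

```java
class RemoveNullBytesSketch
{
  // Remove every null byte (U+0000) from a string field.
  static String sanitize(String field)
  {
    return field == null ? null : field.replace("\u0000", "");
  }

  public static void main(String[] args)
  {
    System.out.println(sanitize("foo\u0000bar")); // prints "foobar"
  }
}
```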
Changes:
- Rename `UsedSegmentChecker` to `PublishedSegmentsRetriever`
- Remove deprecated single `Interval` argument from `RetrieveUsedSegmentsAction`
as it is now unused and has been deprecated since #1988
- Return a `Set` of segments instead of a `Collection` from `IndexerMetadataStorageCoordinator.retrieveUsedSegments()`
* first pass
* more changes
* fix tests and formatting
* fix kinesis failing tests
* fix kafka tests
* add dimension name to float parse errors
* `double` and `convertToType` handling of `dimensionName` can now report parse errors with the dimension name
* fix checkstyle issue
* fix tests
* more cases now have better parse exception messages
* fix test
* fix tests
* partially address comments
* annotate method parameter with nullable
* address comments
* fix tests
* let float, double, long dimensionIndexer pass dimensionName down to dimensionHandlerUtils
* fix compilation error and clean up formatting
* clean up whitespace
* address feedback. undo change, pass down report parse exception for convertToType
* fix test
* update link and title
* Discard changes to website/package.json
* Apply suggestions from code review
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
---------
Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
Co-authored-by: Charles Smith <techdocsmith@gmail.com>