* Enable ability to toggle SegmentMetadata request logging on/off
* Move SegmentMetadata query log filter to FilteredRequestLogger
* Update documentation to reflect the segment metadata flag moving to the filtered request logger
* Modify patch to allow a blacklist of query types to be excluded from request logging
* Address styling and naming requests following latest code review
* Fix indentation in multiple locations per Druid style rules
* Fix race between canHandle() and addSegment() in StorageLocation
* add comment
* Add shuffleSegmentPusher, a dataSegmentPusher used for writing shuffle data to local storage.
* add comments
* remove unused import
* add comments
* fix test
* address comments
* remove <p> tag from javadoc
* address comments
* comparingLong
* Address comments
* fix test
* Refactored ResponseContext and aggregated its keys into an enum
* Added unit tests for ResponseContext and refactored the serialization
* Removed unused methods
* Fixed code style
* Fixed code style
* Fixed code style
* Made SerializationResult static
* Updated according to the PR discussion:
Renamed an argument
Updated comparator
Replaced Pair usage with Map.Entry
Added a comment about quadratic complexity
Replaced a boolean field with an expression
Renamed SerializationResult field
Renamed the method merge to add and renamed several context keys
Renamed field and method related to scanRowsLimit
Updated a comment
Simplified a block of code
Renamed a variable
* Added JsonProperty annotation to renamed ScanQuery field
* Extension-friendly context key implementation
* Refactored ResponseContext: updated delegate type, comments and exceptions
Reduced serialized context length by removing some of its collection elements
* Fixed tests
* Simplified response context truncation during serialization
* Extracted a method of removing elements from a response context and
added some comments
* Fixed typos and updated comments
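Loosely, the truncation these commits describe could look like the following minimal sketch. The names, the `truncated` flag, and the length budget are illustrative assumptions, not Druid's actual ResponseContext API:

```java
// Hypothetical sketch of truncating a response context during serialization.
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ContextTruncationSketch
{
  public static void main(String[] args) throws Exception
  {
    ObjectMapper mapper = new ObjectMapper();
    Map<String, Object> context = new HashMap<>();
    List<String> uncovered = new ArrayList<>(List.of("seg1", "seg2", "seg3"));
    context.put("uncoveredIntervals", uncovered);

    int maxLength = 40; // illustrative size budget for the serialized context

    String serialized = mapper.writeValueAsString(context);
    // If the serialized form exceeds the budget, drop elements from the
    // collection values and re-serialize until it fits.
    while (serialized.length() > maxLength && !uncovered.isEmpty()) {
      uncovered.remove(uncovered.size() - 1);
      context.put("truncated", true); // record that the context is partial
      serialized = mapper.writeValueAsString(context);
    }
    System.out.println(serialized);
  }
}
```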
* Add a cluster-wide configuration to force timeChunk lock and add a doc for segment locking
* add more test
* javadoc for missingIntervalsInOverwriteMode
* Fix test
* Address comments
* avoid spotbugs
* Add IPv4 SQL functions
New SQL functions for filtering IPv4 addresses:
- IPV4_MATCH: Check if IP address belongs to a subnet
- IPV4_PARSE: Convert string IP address to integer
- IPV4_STRINGIFY: Convert integer IP address to string
These are the SQL analogs of the druid expressions with the same name.
Filtering is more efficient when operating on IP addresses as integers
instead of strings.
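As a hedged illustration of how these functions might be used from a client, the sketch below issues a Druid SQL query over the broker's Avatica JDBC endpoint; the datasource `my_datasource` and column `ip_as_long` are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Ipv4SqlExample
{
  public static void main(String[] args) throws Exception
  {
    // Avatica JDBC endpoint of a Druid broker (host/port are illustrative).
    final String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
    // Filtering on a long-typed IP column is cheaper than string comparison.
    final String sql =
        "SELECT IPV4_STRINGIFY(ip_as_long) AS ip, COUNT(*) AS cnt\n"
        + "FROM my_datasource\n"
        + "WHERE IPV4_MATCH(ip_as_long, '10.0.0.0/8')\n"
        + "GROUP BY IPV4_STRINGIFY(ip_as_long)";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(sql)) {
      while (rs.next()) {
        System.out.println(rs.getString("ip") + " -> " + rs.getLong("cnt"));
      }
    }
  }
}
```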
* Refactor operator conversions into named constants
* Add IPv4 druid expressions
New druid expressions for filtering IPv4 addresses:
- ipv4address_match: Check if IP address belongs to a subnet
- ipv4address_parse: Convert string IP address to long
- ipv4address_stringify: Convert long IP address to string
These expressions operate on IP addresses represented as either strings
or longs, so that they can be applied to dimensions with mixed
representation of IP addresses. The filtering is more efficient when
operating on IP addresses as longs. In other words, the intended use
case is:
1) Use ipv4address_parse to convert to long at ingestion time
2) Use ipv4address_match to filter (on longs) at query time
3) Use ipv4address_stringify to convert to (readable) string at query time
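A minimal sketch of the semantics these expressions implement, using plain CIDR arithmetic; this is an illustration, not Druid's actual implementation:

```java
// Hypothetical sketch of the ipv4address_* expression semantics.
public class Ipv4ExpressionSketch
{
  // ipv4address_parse: dotted-quad string -> unsigned 32-bit value in a long
  static long parse(String address)
  {
    long result = 0;
    for (String part : address.split("\\.")) {
      result = (result << 8) | Integer.parseInt(part);
    }
    return result;
  }

  // ipv4address_match: does the address fall within the CIDR subnet?
  static boolean match(long address, String subnet)
  {
    String[] pieces = subnet.split("/");            // e.g. "10.0.0.0/8"
    long network = parse(pieces[0]);
    int prefix = Integer.parseInt(pieces[1]);
    long mask = prefix == 0 ? 0 : (0xFFFFFFFFL << (32 - prefix)) & 0xFFFFFFFFL;
    return (address & mask) == (network & mask);
  }

  // ipv4address_stringify: long -> dotted-quad string
  static String stringify(long address)
  {
    return String.format(
        "%d.%d.%d.%d",
        (address >> 24) & 0xFF, (address >> 16) & 0xFF,
        (address >> 8) & 0xFF, address & 0xFF
    );
  }

  public static void main(String[] args)
  {
    long ip = parse("10.1.2.3");
    System.out.println(match(ip, "10.0.0.0/8"));   // true
    System.out.println(stringify(ip));             // 10.1.2.3
  }
}
```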
* Fix licenses and null handling
* Simplify IPv4 expressions
* Fix tests
* Fix check for valid ipv4 address string
* GroupBy array-based result rows.
Fixes #8118; see that proposal for details.
Other than the GroupBy changes, the other main "interesting" classes are:
- ResultRow: The array-based result type.
- BaseQuery: T is no longer required to be Comparable.
- QueryToolChest: Adds "decorateObjectMapper" to enable query-aware serialization
and deserialization of result rows (necessary due to their positional nature).
- QueryResource: Uses the new decoration functionality.
- DirectDruidClient: Also uses the new decoration functionality.
- QueryMaker (in Druid SQL): Modifications to read ResultRows.
These classes weren't changed, but got some new javadocs:
- BySegmentQueryRunner
- FinalizeResultsQueryRunner
- Query
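A small sketch of why positional rows force query-aware deserialization: only the query knows what each array index means, which is what the decorateObjectMapper hook supplies. The column names and ordering below are illustrative assumptions:

```java
// Hypothetical sketch: array-based (positional) result rows.
// Druid's actual ResultRow API may differ; this only illustrates the idea.
import java.util.Arrays;
import java.util.List;

public class PositionalRowSketch
{
  public static void main(String[] args)
  {
    // Column order comes from the query, e.g.:
    //   granularity timestamp, dimensions..., aggregators...
    List<String> columnOrder = Arrays.asList("__time", "country", "cnt");

    // An array-based row serializes as a plain JSON array: [t, "US", 42].
    Object[] row = new Object[]{1565000000000L, "US", 42L};

    // Without the query, position 1 is just "some object"; the reader needs
    // the column order to know that index 1 is the "country" dimension.
    int index = columnOrder.indexOf("country");
    System.out.println(columnOrder.get(index) + " = " + row[index]);
  }
}
```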
* Adjustments for TC stuff.
* Add segment timeline chart
* fix file
* Fix bugs: no data handling & scaling problems
* resolve conflict
* changed package-lock
* do not show by default
* trust the interop
* stricter type fixes
* fix sasslint
* Use partitionsSpec for all task types
* fix doc
* fix typos and revert to use isPushRequired
* address comments
* move partitionsSpec to core
* remove hadoopPartitionsSpec
* removed hard-coded Kafka key and value deserializers, leaving the default deserializer as org.apache.kafka.common.serialization.ByteArrayDeserializer. Also added checks to ensure that any provided deserializer class extends org.apache.kafka.common.serialization.Deserializer and outputs a byte array.
* Addressed all comments from original pull request and also added a
unit test.
* Added additional test that uses "poll" to ensure that custom deserializer
works properly.
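A hedged sketch of what such a validation could look like. Kafka's `Deserializer` interface is real; the surrounding class and method names are illustrative assumptions:

```java
// Hypothetical sketch of validating a user-supplied deserializer class.
import org.apache.kafka.common.serialization.Deserializer;
import java.lang.reflect.Method;

public class DeserializerCheckSketch
{
  @SuppressWarnings("unchecked")
  static Deserializer<byte[]> instantiate(String className) throws Exception
  {
    Class<?> clazz = Class.forName(className);

    // The class must implement Kafka's Deserializer interface...
    if (!Deserializer.class.isAssignableFrom(clazz)) {
      throw new IllegalArgumentException(className + " is not a Deserializer");
    }

    // ...and its deserialize() must return byte[], since downstream parsing
    // expects raw bytes.
    Method method = clazz.getMethod("deserialize", String.class, byte[].class);
    if (method.getReturnType() != byte[].class) {
      throw new IllegalArgumentException(className + " does not output byte[]");
    }

    return (Deserializer<byte[]>) clazz.getDeclaredConstructor().newInstance();
  }
}
```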
* firehose doc adjustments
* fix typo
* additional information on parser types in ingestion docs
* clarify ingest segment firehose docs, add sql firehose examples to sql extension pages
* fixit
* make sql firehose more forgiving by always constructing a MapInputRowParser from the parseSpec of whatever actual InputRowParser impl is provided, remove doc references to map based parsers
* transforms
* fix tests
* tidy up nulls
* standardize more on undefined
* updated licenses
* do not do heavy-handed rendering
* reindex from druid
* tidy up
* add inline firehose
* add husky
* sass lint
* better suggestion
* fix script typo
* adjust time formats
* add missing time formats
* use term 'reindex'
* fix lodash.compact
* tidy up nulls
* standardize more on undefined
* updated licenses
* update snapshot
* do not do heavy-handed rendering
* add placeholder to SQL view
* remove pointless fragment