* The Kerberos TGT expires after some predetermined time; this patch adds relogin calls
* exit early if the first login succeeds
* Kafka Index Task that supports incremental handoffs
- Incrementally hand off segments when they hit the maxRowsPerSegment limit
- Decouple segment partitioning from Kafka partitioning; all records from the consumed partitions go to a single Druid segment
- Support restoring the task on MiddleManager restarts by checkpointing end offsets for segments (see the sketch below)
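A minimal sketch of the incremental-handoff loop described above; the class and method names (onRow, publishAndHandOff, the checkpoint map) are illustrative assumptions, not the actual task internals:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: once the open segment reaches maxRowsPerSegment, checkpoint the
// current Kafka end offsets (so a restarted task can resume from them) and
// hand the finished segment off while consumption continues.
class IncrementalHandoffSketch
{
  private final int maxRowsPerSegment;
  private final Map<Integer, Map<Integer, Long>> checkpointedEndOffsets = new HashMap<>();
  private int sequenceId = 0;
  private int rowsInCurrentSegment = 0;

  IncrementalHandoffSketch(int maxRowsPerSegment)
  {
    this.maxRowsPerSegment = maxRowsPerSegment;
  }

  void onRow(Map<Integer, Long> currentOffsets)
  {
    rowsInCurrentSegment++;
    if (rowsInCurrentSegment >= maxRowsPerSegment) {
      // Record end offsets for this sequence before handing the segment off.
      checkpointedEndOffsets.put(sequenceId++, new HashMap<>(currentOffsets));
      publishAndHandOff();          // asynchronous; ingestion keeps going
      rowsInCurrentSegment = 0;     // subsequent rows fill a fresh segment
    }
  }

  private void publishAndHandOff()
  {
    // hypothetical: push the segment to deep storage and announce the handoff
  }
}
```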
* take care of review comments
* make the getCurrentOffsets call async, keep track of the publishing sequence, and address review comments
* fix setEndOffsets duplicate request handling and formatting
* fix unit test
* backward compatibility
* make AppenderatorDriverMetadata backwards compatible
* add unit test
* fix deadlock between persist and push executors in AppenderatorImpl
* fix formatting
* use persist dir instead of work dir
* review comments
* fix deadlock
* actually fix deadlock
* ExpressionSelectors: Add caching selectors.
- SingleLongInputCaching selector for expressions on the __time column,
using an optimization similar to SingleScanTimeDimSelector (sketch below)
- SingleStringInputDimensionSelector for expressions on string columns
that return strings, using an optimization similar to ExtractionFn-based
DimensionSelectors.
- SingleStringInputCaching selector for expressions on string columns
that return primitives.
Also, in the SQL planner, prefer expressions for time operations
rather than extractionFns.
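A minimal sketch of the caching idea behind the first selector; LongSupplier and LongUnaryOperator stand in for the real column reader and compiled expression:

```java
import java.util.function.LongSupplier;
import java.util.function.LongUnaryOperator;

// Sketch only: remember the last (input, output) pair. Rows are time-ordered,
// so consecutive rows often share a __time value, and the expression is
// evaluated far fewer times than once per row.
class SingleLongInputCachingSketch
{
  private final LongSupplier timeColumn;       // reads __time for the current row
  private final LongUnaryOperator expression;  // the compiled expression

  private boolean cached = false;
  private long lastInput;
  private long lastOutput;

  SingleLongInputCachingSketch(LongSupplier timeColumn, LongUnaryOperator expression)
  {
    this.timeColumn = timeColumn;
    this.expression = expression;
  }

  long getLong()
  {
    final long input = timeColumn.getAsLong();
    if (!cached || input != lastInput) {
      lastInput = input;
      lastOutput = expression.applyAsLong(input);
      cached = true;
    }
    return lastOutput;
  }
}
```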
* Code review comments.
* SQL: support filtered aggregations,
i.e., aggregations like COUNT(DISTINCT CASE WHEN x THEN y END). This
patch also changes complex columns to report as nullable, which
is required for them to type-check properly when used in these kinds
of filtered aggregations.
* maxQueryTimeout property in runtime properties.
* extra line
* move withTimeoutAndMaxScatterGatherBytes method to QueryLifeCycle.
* Fix initialize method.
* remove unused import.
* doc update.
* add more details to the doc about query failure.
* minor fix.
* decorate QueryRunner to set and verify the query context; the decorator is added by servers.
* remove whitespace.
* SQL: Improved behavior when implicitly casting strings to date/time literals.
- Handle all flavors of ISO8601 and SQL literals.
- Throw errors on other literals instead of silently transforming them to 0 (see the sketch below).
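A sketch of the stricter behavior, using a hypothetical helper and only two of the literal flavors for brevity; the actual patch covers all ISO8601 variants:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

class TimestampLiteralSketch
{
  private static final DateTimeFormatter[] FORMATS = {
      DateTimeFormatter.ISO_LOCAL_DATE_TIME,              // 2000-01-01T00:00:00
      DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")  // SQL-style literal
  };

  static LocalDateTime parseTimestampLiteral(String literal)
  {
    for (DateTimeFormatter format : FORMATS) {
      try {
        return LocalDateTime.parse(literal, format);
      }
      catch (DateTimeParseException ignored) {
        // try the next known literal form
      }
    }
    // Previously an unrecognized literal silently became epoch 0; now it errors.
    throw new IllegalArgumentException("Illegal TIMESTAMP literal: " + literal);
  }
}
```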
* Respect timeZone when format is null.
* use ImmutableDruidDataSource for map and set
* address comments
* unused import
* allow returning only ImmutableDruidDataSource in MetadataSegmentManager
* address comments
* remove TreeSet
* revert to use TreeSet
* use default brokerServiceName when priority is not valid
* use AtomicInteger for NodesHolder.roundRobinIndex
* revert inspectionProfiles change
* adjust TieredBrokerHostSelectorTest
* combine if statements and ensure index does not become negative
* set next index with mod if it overflows (see the sketch below)
* fix codestyle
* use nextIndex
* extract the while loop to a method
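A sketch of the round-robin logic the commits above converge on; NodesHolder's real shape is assumed, and getAndIncrementIndex is a made-up name:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: pick brokers round-robin. Java's % can yield a negative
// remainder once the counter overflows Integer.MAX_VALUE, so the index is
// normalized back into range instead of ever going negative.
class RoundRobinSketch
{
  private final AtomicInteger roundRobinIndex = new AtomicInteger(-1);

  int getAndIncrementIndex(int numNodes)
  {
    while (true) {
      final int current = roundRobinIndex.get();
      int next = (current + 1) % numNodes;
      if (next < 0) {
        next += numNodes; // overflow produced a negative remainder; wrap around
      }
      if (roundRobinIndex.compareAndSet(current, next)) {
        return next;
      }
    }
  }
}
```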
* Add retries for coordinator fetch and lookup start in LookupReferencesManager
* Fix LookupConfigTest
* Address comments
* Address more comments
* And address more comments
* Address comments
* Recognize 'not found' lookups in LookupReferencesManager.tryGetLookupListFromCoordinator(), by @egor-ryashin
* IT: Switch to OpenJDK8 base image.
Also split the Docker image into a base image and a child image, and
build the base image ahead of time for efficiency's sake. Also upgrade
ZK to 3.4.10.
* Additional comments about ZK upgrades.
* Add compaction task
* added doc
* use combining aggregators
* address comments
* add support for dimensionsSpec
* fix getUniqueDims and getUniqueMetrics
* find unique dimensionsSpec
* fix compilation
* add unit test
* fix test
* fix test
* test for different dimension orderings and types, and doc for type and ordering
* add control for custom ordering and type
* update doc
* fix compile
* fix compile
* add segments param
* fix serde error
* fix build
* Don't divide maxRowsInMemory by the number of Kafka partitions.
AppenderatorImpl already applies maxRowsInMemory across all sinks, so dividing by
the number of Kafka partitions is pointless and effectively makes the interpretation
of maxRowsInMemory lower than expected.
This undoes one of the two changes from #3284, which fixed the original bug twice.
In this case, that's worse than fixing it once.
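A sketch of the reasoning, with a hypothetical Sink interface: the persist check already sums buffered rows over every sink, so the configured maxRowsInMemory covers all Kafka partitions combined:

```java
class PersistCheckSketch
{
  interface Sink
  {
    int getNumRowsInMemory(); // hypothetical accessor for rows buffered in heap
  }

  // Because the total is taken across all sinks, dividing maxRowsInMemory by
  // the Kafka partition count would just shrink the effective limit.
  static boolean shouldPersist(Iterable<Sink> sinks, int maxRowsInMemory)
  {
    int totalRowsInMemory = 0;
    for (Sink sink : sinks) {
      totalRowsInMemory += sink.getNumRowsInMemory();
    }
    return totalRowsInMemory >= maxRowsInMemory;
  }
}
```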
* Fix havingSpec on complex aggregators.
- Uses the technique from #4883 on DimFilterHavingSpec too.
- Also uses Transformers from #4890, necessitating a move of that and other
related classes from druid-server to druid-processing. They probably make
more sense there anyway.
- Adds a SQL query test.
Fixes #4957.
* Remove unused import.
* Introduce "transformSpec" at ingest-time.
It accepts a "filter" (standard query filter object) and "transforms" (a
list of objects with "name" and "expression"). These can be used to do
filtering and single-row transforms without the need for a separate data
processing job.
The "expression" fields use the same expression language as other
expression-based features.
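A hypothetical spec fragment showing the shape described above; the field values, dimensions, and the selector filter are made up for illustration:

```json
"transformSpec": {
  "filter": { "type": "selector", "dimension": "country", "value": "US" },
  "transforms": [
    { "name": "fullName", "expression": "concat(firstName, ' ', lastName)" }
  ]
}
```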
* Remove forbidden api.
* Fix compile error.
* Fix tests.
* Some more changes.
- Add nullable annotation to Firehose.nextRow.
- Add tests for index task, realtime task, kafka task, hadoop mapper,
and ingestSegment firehose.
* Fix bad merge.
* Adjust imports.
* Adjust whitespace.
* Make Transform into an interface.
* Add missing annotation.
* Switch logger.
* Switch logger.
* Adjust test.
* Adjustment to handling for DatasourceIngestionSpec.
* Fix test.
* CR comments.
* Remove unused method.
* Add javadocs.
* More javadocs, and always decorate.
* Fix bug in TransformingStringInputRowParser.
* Fix bad merge.
* Fix ISFF tests.
* Fix DORC test.
* Fix improper handling of empty arrays in StringDimensionIndexer.
This bug could introduce data errors: if the input rows to an
IncrementalIndex contained only empty arrays and single values, then
upon persisting to disk, the empty arrays would be replaced with the
lexicographically smallest single value rather than with nulls, as they
should have been.
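A sketch of the corrected behavior, with hypothetical names: an empty array must persist as the absence of a value rather than borrowing another row's dictionary value:

```java
import java.util.List;

class EmptyArrayHandlingSketch
{
  interface Dictionary
  {
    int idOf(String value); // hypothetical dictionary lookup
  }

  // Sketch only: when encoding a row's values for a string dimension, an empty
  // array means "no value" and must stay null; previously it could end up
  // persisted as the lexicographically smallest single value in the column.
  static int[] encodeDimensionValues(List<String> values, Dictionary dictionary)
  {
    if (values == null || values.isEmpty()) {
      return null; // preserve the absence of a value
    }
    final int[] encoded = new int[values.size()];
    for (int i = 0; i < values.size(); i++) {
      encoded[i] = dictionary.idOf(values.get(i));
    }
    return encoded;
  }
}
```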
* Style fix.
* Add tests for bitmap indexes too.
* Fix binary serialization in caching
The previous caching code just concatenated a list of objects into a
byte array. This is not valid because jackson-databind uses
enumerated references to strings internally, and concatenating multiple
binary-serialized objects can throw off those references.
This change uses a single JsonGenerator to serialize the object list
rather than concatenating byte arrays.
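A minimal sketch of the fix: serialize the whole list through one generator so jackson-databind's internal back-references stay consistent, instead of concatenating independently serialized byte arrays:

```java
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.List;

class CacheSerializationSketch
{
  // One JsonGenerator for the whole list: any shared-string references the
  // (possibly binary, e.g. Smile) format emits are resolved within a single
  // stream, which concatenated per-object byte arrays cannot guarantee.
  static byte[] serialize(ObjectMapper mapper, List<Object> objects) throws IOException
  {
    final ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (JsonGenerator generator = mapper.getFactory().createGenerator(baos)) {
      for (Object object : objects) {
        generator.writeObject(object);
      }
    }
    return baos.toByteArray();
  }
}
```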
* remove unused imports