If DruidSchema started too long after the BrokerServerView, its
initialization callback would never be invoked, and it would never
learn about any tables.
This moves the registration of the callback into the constructor,
where it belongs.
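A minimal sketch of the race, with toy names standing in for the real
server-view wiring (not the actual Druid classes): if the callback is
registered after the view has already announced initialization, the
schema's latch never opens.

    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.CountDownLatch;

    // Toy server view: fires the init callback only for listeners that
    // are registered before initialization completes.
    class ToyServerView
    {
      private final CopyOnWriteArrayList<Runnable> initCallbacks = new CopyOnWriteArrayList<>();

      void registerInitCallback(Runnable callback)
      {
        initCallbacks.add(callback); // too late if initialization already ran
      }

      void finishInitialization()
      {
        initCallbacks.forEach(Runnable::run);
      }
    }

    class ToySchema
    {
      private final CountDownLatch initLatch = new CountDownLatch(1);

      // Registering in the constructor guarantees the schema is listening
      // before the view can possibly announce initialization.
      ToySchema(ToyServerView view)
      {
        view.registerInitCallback(initLatch::countDown);
      }

      void awaitInitialization() throws InterruptedException
      {
        initLatch.await(); // formerly could block forever if registration ran late
      }
    }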
* Implement "earlyMessageRejectionPeriod" config discussed in issue #4599
* implement the logics of this param
* Added doc of this config
* Added unit tests of it
* Update KafkaSupervisor.java
ameliorate comment
* fix format
* fix bug when rebasing
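A hedged sketch of how the new setting might look in a Kafka supervisor
spec's ioConfig; the topic name and period values are illustrative only:

    "ioConfig": {
      "topic": "metrics",
      "taskDuration": "PT1H",
      "earlyMessageRejectionPeriod": "PT1H"
    }

With a setting like this, messages stamped later than roughly the task's
start plus taskDuration plus earlyMessageRejectionPeriod would be rejected
as too early, mirroring the existing lateMessageRejectionPeriod on the
other side of the window.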
* Replace Guava Enums.getIfPresent with a built-in version (sketch below).
This is useful for running in Hadoop environments that use Guava 11. Some
code is also simplified.
* Code review
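A minimal sketch of the kind of built-in replacement (helper name
hypothetical, not the actual Druid utility): Guava 11 predates
Enums.getIfPresent, so a small wrapper over Enum.valueOf removes the
dependency.

    import javax.annotation.Nullable;

    final class EnumUtils
    {
      private EnumUtils() {}

      // Returns the enum constant with the given name, or null if none
      // matches, using only JDK facilities.
      @Nullable
      static <T extends Enum<T>> T getIfPresent(Class<T> enumClass, String value)
      {
        try {
          return Enum.valueOf(enumClass, value);
        }
        catch (IllegalArgumentException e) {
          return null;
        }
      }
    }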
* Added support for where clauses to filter lookup values on ingestion.
Added a filter field to the JDBC lookups that is used to generate a
where clause so that only rows matching the filter value will be
brought into Druid, for example filter="SOMECOLUMN=1" (query-building sketch below)
* Required changes based on code review.
* Updates based on code review, mainly formatting and a small refactor
of the buildLookupQuery method.
* Fixed broken buildLookupQuery method
* Removed empty line.
* Updates per review comments
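A hedged sketch of the query-building idea; the method shape is
illustrative, not the exact Druid code. The filter string is spliced in
verbatim as a WHERE clause, so it behaves as a raw SQL predicate:

    class JdbcLookupSql
    {
      // Builds the lookup-loading SQL; appends the user-supplied filter,
      // if any, as a WHERE clause so only matching rows are ingested.
      static String buildLookupQuery(String table, String keyColumn, String valueColumn, String filter)
      {
        String base = String.format("SELECT %s, %s FROM %s", keyColumn, valueColumn, table);
        return (filter == null || filter.isEmpty()) ? base : base + " WHERE " + filter;
      }
    }

For example, buildLookupQuery("lookupTable", "k", "v", "SOMECOLUMN=1")
would yield SELECT k, v FROM lookupTable WHERE SOMECOLUMN=1.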
* Do not remove segments that should not be moved from currentlyMovingSegments (such segments are removed by callbacks, or were never inserted)
* Replace putIfAbsent with computeIfAbsent in DruidBalancer (sketch below)
* Refactoring
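A sketch of why computeIfAbsent is the better fit here (map and value
types simplified): putIfAbsent allocates a throwaway value on every call
and still needs a follow-up get(), while computeIfAbsent allocates only
on a miss and returns the mapped value in one step.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    class MovingSegments
    {
      private final ConcurrentMap<String, ConcurrentHashMap<String, Object>> currentlyMoving =
          new ConcurrentHashMap<>();

      ConcurrentHashMap<String, Object> forTier(String tier)
      {
        // Allocates the per-tier map only when the key is genuinely absent.
        return currentlyMoving.computeIfAbsent(tier, t -> new ConcurrentHashMap<>());
      }
    }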
* Stop RemoteTaskRunner's cleanupExec using TaskMaster's lifecycle, not the globally injected lifecycle
* Prohibit starting a Lifecycle twice; make Lifecycle reject addMaybeStartHandler() attempts while it is stopping, rather than deadlocking (sketch below)
* Fix Lifecycle.addMaybeStartHandler()
* Remove RemoteTaskRunnerFactoryTest
* Add docs
* Language
* Address comments
* Fix RemoteTaskRunnerTestUtils
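A minimal, hypothetical sketch of the intended state machine (not the
real Lifecycle code): start() may run only once, and
addMaybeStartHandler() fails fast while stopping instead of blocking on
the lifecycle lock.

    import java.util.ArrayList;
    import java.util.List;

    class ToyLifecycle
    {
      enum State { NOT_STARTED, RUNNING, STOPPING, STOPPED }

      private final Object lock = new Object();
      private final List<Runnable> handlers = new ArrayList<>();
      private State state = State.NOT_STARTED;

      void start()
      {
        synchronized (lock) {
          if (state != State.NOT_STARTED) {
            throw new IllegalStateException("Already started");
          }
          state = State.RUNNING;
          handlers.forEach(Runnable::run);
        }
      }

      // Adds a handler, starting it immediately if already running.
      // Rejecting during STOPPING avoids the old deadlock, where this call
      // waited on the same lock held for the duration of stop().
      void addMaybeStartHandler(Runnable handler)
      {
        synchronized (lock) {
          if (state == State.STOPPING || state == State.STOPPED) {
            throw new IllegalStateException("Lifecycle is stopping; cannot add handler");
          }
          handlers.add(handler);
          if (state == State.RUNNING) {
            handler.run();
          }
        }
      }

      void stop()
      {
        synchronized (lock) {
          state = State.STOPPING;
          // ... run stop handlers in reverse order ...
          state = State.STOPPED;
        }
      }
    }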
* Don't use limit push down with having spec
* Throw an exception when forcing limit push down with having (illustrative query below)
* Tests for having and limit push down
* Fix pool sizes in unit test
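An illustrative (not verbatim) groupBy query for the new check: pushing
the limit down to lower levels would trim rows before the having filter
runs, so forcing it alongside a havingSpec now throws instead of
returning wrong results.

    {
      "queryType": "groupBy",
      "dataSource": "events",
      "granularity": "all",
      "dimensions": ["dim"],
      "aggregations": [{"type": "longSum", "name": "rows", "fieldName": "cnt"}],
      "having": {"type": "greaterThan", "aggregation": "rows", "value": 10},
      "limitSpec": {"type": "default", "limit": 5},
      "context": {"forceLimitPushDown": true},
      "intervals": ["2017-08-01/2017-09-01"]
    }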
* Add HistoricalCursor.getReadableOffset() to access the unwrapped offset in selectors when the 'main' offset is a FilteredOffset (fixes #4628; sketch below)
* Stack overflow test
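A hedged sketch of the offset unwrapping, with simplified interfaces
rather than the actual classes: selectors read through the base offset,
because the wrapping FilteredOffset is what advances it.

    interface Offset
    {
      int getOffset();
    }

    // Wraps a base offset and skips rows failing a filter; the base
    // offset is what actually points at the current row.
    class FilteredOffset implements Offset
    {
      private final Offset baseOffset;

      FilteredOffset(Offset baseOffset)
      {
        this.baseOffset = baseOffset;
      }

      @Override
      public int getOffset()
      {
        return baseOffset.getOffset();
      }

      Offset getBaseOffset()
      {
        return baseOffset;
      }
    }

    class HistoricalCursor
    {
      private final Offset offset;

      HistoricalCursor(Offset offset)
      {
        this.offset = offset;
      }

      // Selectors capture this instead of the 'main' offset, so reads are
      // not routed back through the FilteredOffset wrapper (#4628).
      Offset getReadableOffset()
      {
        return offset instanceof FilteredOffset ? ((FilteredOffset) offset).getBaseOffset() : offset;
      }
    }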
* make BrokerQueryResource instantiation singleton
* fix druid.router.http.* handling so that these properties are actually used, and introduce numRequestsQueued for the Jetty HTTP client at the router
* address comments
* address review comment
* Use ConcurrentHashMap to store segment servers; otherwise getInventory() would need to clone the values list (sketch below)
* introduce unstableTimeout for segment servers
* address review comment
* add HttpServerInventoryViewConfigTest
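A small sketch of the getInventory() point (value types simplified):
ConcurrentHashMap's values() view is weakly consistent and safe to
iterate during concurrent updates, so no defensive copy is needed.

    import java.util.Collection;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    class ToyInventory
    {
      // With a plain HashMap guarded by a lock, getInventory() would have
      // to clone values() before handing it out; this view can be iterated
      // while other threads add or remove servers, without
      // ConcurrentModificationException.
      private final ConcurrentMap<String, Object> servers = new ConcurrentHashMap<>();

      Collection<Object> getInventory()
      {
        return servers.values();
      }
    }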
* Fix NPE thrown when empty/null is passed to TimeDimExtractionFn (null-safe sketch below)
* Add @Nullable wherever applicable
* Add @Nullable to SearchQuerySpec.apply()
* Remove unused
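A hedged sketch of the null-safe apply(); simplified, with illustrative
formats and fallback behavior rather than the real class's configuration:

    import javax.annotation.Nullable;
    import java.text.ParseException;
    import java.text.SimpleDateFormat;

    class ToyTimeDimExtractionFn
    {
      private final SimpleDateFormat timeFormatter = new SimpleDateFormat("MM/dd/yyyy");
      private final SimpleDateFormat resultFormatter = new SimpleDateFormat("yyyy-MM");

      // Previously a null/empty dimension value reached the parser and
      // threw an NPE; now it short-circuits to null, matching @Nullable
      // on the API.
      @Nullable
      String apply(@Nullable String dimValue)
      {
        if (dimValue == null || dimValue.isEmpty()) {
          return null;
        }
        try {
          return resultFormatter.format(timeFormatter.parse(dimValue));
        }
        catch (ParseException e) {
          return dimValue; // this sketch just echoes unparseable input
        }
      }
    }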
* Improved SQL support for floats and doubles (illustrative query below).
- Use Druid FLOAT for SQL FLOAT, and Druid DOUBLE for SQL DOUBLE, REAL,
and DECIMAL.
- Use float* aggregators when appropriate.
- Add tests involving both float and double columns.
- Adjust documentation accordingly.
* CR comments.
* Fix braces.
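An illustrative query against hypothetical columns f (SQL FLOAT) and
d (SQL DOUBLE), showing the intent of the mapping:

    -- SUM(f) can now plan to a float* aggregator instead of widening to
    -- double; DECIMAL and REAL, like DOUBLE, map to Druid's DOUBLE type.
    SELECT SUM(f), SUM(d), CAST(f AS DECIMAL) FROM t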
* Change the rolling upgrade order to bring the Coordinator and Overlord together
* Mention the merged Coordinator-Overlord in the upgrade order doc
* Revert autoscaling doc change
* Fix autoscaling doc
* Add metrics to the native queries underpinning SQL.
This is done by factoring out the metrics and request log emitting
code from QueryResource into a new QueryLifecycle class. That class
is used by both QueryResource and the SQL DruidSchema and QueryMaker.
Also fixes a couple of bugs in QueryResource:
- RequestLogLine start time was set to `TimeUnit.NANOSECONDS.toMillis(startNs)`,
which is incorrect since absolute nanos cannot be converted to millis
(see the timing sketch below).
- DruidMetrics.makeRequestMetrics was called with null `query` on
unparseable queries, which led to spurious "Unable to log query"
errors.
Partial fix for #4047.
* Code style
* Remove unused imports.
* Fix tests.
* Remove unused import.
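A sketch of the timing bug, with simplified names: System.nanoTime() has
an arbitrary origin, so it measures only elapsed time, never wall-clock
time.

    import java.util.concurrent.TimeUnit;

    class QueryTiming
    {
      static void example() throws InterruptedException
      {
        long startNs = System.nanoTime();

        // Wrong: nanoTime() is relative to an arbitrary origin, so this is
        // not a wall-clock timestamp for the request log line.
        long bogusStartMillis = TimeUnit.NANOSECONDS.toMillis(startNs);

        // Right: wall-clock time for "when", nanoTime deltas for "how long".
        long startMillis = System.currentTimeMillis();
        Thread.sleep(10); // stand-in for query execution
        long tookMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs);

        System.out.printf("started=%d tookMs=%d (not started=%d)%n", startMillis, tookMs, bogusStartMillis);
      }
    }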
This allows the tasks to run concurrently. Additionally, rework
the partition-determining code in a couple of ways:
- Use a task-id based sequenceName so concurrently running append
tasks do not clobber each other's segments (sketch below).
- Make the list of shardSpecs empty when rollup is non-guaranteed, and
let allocators handle the creation of incremental shardSpecs.
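A hedged sketch of the sequenceName idea (format illustrative, not the
exact Druid string): deriving the allocation sequence from the task id
keeps concurrent append tasks from being handed, and then clobbering,
the same segments.

    class AppendTaskNames
    {
      // Before: a name shared across tasks meant two concurrent appends
      // could allocate into the same segment. Including the task id makes
      // the sequence unique per task.
      static String sequenceName(String taskId, int sequenceNumber)
      {
        return String.format("%s_%d", taskId, sequenceNumber);
      }
    }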