* FilteredRequestLogger: Fix start/stop, invalid delegate behavior.
Fixes two bugs:
1) FilteredRequestLogger did not start/stop the delegate.
2) FilteredRequestLogger would ignore an invalid delegate type, and
instead silently substitute the "noop" logger. This was due to a larger
problem with RequestLoggerProvider setup in general; the fix here is
to remove "defaultImpl" from the RequestLoggerProvider interface, and
instead have JsonConfigurator be responsible for creating the
default implementations. It is stricter than the old system: it will only
create a noop logger if it doesn't see any request logger configs; otherwise,
it raises a provision error.
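A rough sketch of the start/stop delegation the first fix adds (the interface and class below are simplified stand-ins, not the real Druid types):

```java
// Minimal sketch, hypothetical types: a filtering logger must forward
// lifecycle calls to its delegate, otherwise the delegate never opens or
// flushes its underlying resources.
interface RequestLoggerLike
{
  void start() throws Exception;
  void stop();
  void logNativeQuery(String requestLogLine) throws Exception;
}

class FilteredRequestLoggerSketch implements RequestLoggerLike
{
  private final RequestLoggerLike delegate;

  FilteredRequestLoggerSketch(RequestLoggerLike delegate)
  {
    this.delegate = delegate;
  }

  @Override
  public void start() throws Exception
  {
    delegate.start(); // bug 1: this call was previously missing
  }

  @Override
  public void stop()
  {
    delegate.stop(); // likewise on shutdown
  }

  @Override
  public void logNativeQuery(String requestLogLine) throws Exception
  {
    // filtering by query time elided; qualifying lines are simply forwarded
    delegate.logNativeQuery(requestLogLine);
  }
}
```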
* Remove unneeded annotations.
* The documentation for the Bound filter's lowerStrict/upperStrict is incorrect: it is not consistent with the provided examples or the actual behaviour of the bound filter. Correct this.
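For reference, the corrected semantics: setting lowerStrict/upperStrict makes the corresponding bound exclusive. A small illustration in plain Java (not the filter implementation itself):

```java
// lowerStrict/upperStrict make the corresponding bound exclusive rather than inclusive.
static boolean matchesBound(double value, Double lower, boolean lowerStrict, Double upper, boolean upperStrict)
{
  if (lower != null && (lowerStrict ? value <= lower : value < lower)) {
    return false; // below (or equal to, when strict) the lower bound
  }
  if (upper != null && (upperStrict ? value >= upper : value > upper)) {
    return false; // above (or equal to, when strict) the upper bound
  }
  return true;
}

// lower = 10, lowerStrict = true  -> matches only values strictly greater than 10
// lower = 10, lowerStrict = false -> also matches 10 itself
```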
* autosize processing buffers based on direct memory sizing
* remove oops, more test
* max 1GB autosize buffers, tests, start of docs
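The sizing idea, roughly (the divisor and cap below are assumptions for illustration, not the precise formula used):

```java
// Split available direct memory across processing threads and merge buffers,
// then cap each processing buffer at 1GB.
static long computeBufferSizeBytes(long maxDirectMemoryBytes, int numProcessingThreads, int numMergeBuffers)
{
  final long oneGigabyte = 1024L * 1024L * 1024L;
  final long perBuffer = maxDirectMemoryBytes / (numProcessingThreads + numMergeBuffers + 1);
  return Math.min(perBuffer, oneGigabyte);
}
```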
* fix oops
* revert accidental change
* print buffer size in exception
* change the things
* Replace StatsD client library
The [Datadog package][1] is a StatsD compatible drop-in replacement for the
client library, but it seems to be [better maintained][2] and has support for
Datadog DogStatsD specific features, which will be made use of in a subsequent
commit.
The `count`, `time`, and `gauge` methods are exactly compatible with the
previous library, so these modifications shouldn't be required; however, EasyMock
seems to have a hard time dealing with the variable arguments added by the
DogStatsD library and causes tests to fail if no arguments are provided for the
last String vararg. Passing an empty array fixes the test failures.
[1]: https://github.com/DataDog/java-dogstatsd-client
[2]: https://github.com/tim-group/java-statsd-client/issues/37#issuecomment-248698856
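The workaround looks roughly like this (the client interface below is a stand-in for the DogStatsD client's vararg methods):

```java
import org.easymock.EasyMock;

interface StatsDClientLike
{
  void count(String metric, long delta, String... tags);
}

class VarargExpectationExample
{
  static StatsDClientLike mockWithExpectation()
  {
    StatsDClientLike statsd = EasyMock.createMock(StatsDClientLike.class);
    // Passing an explicit empty array for the trailing vararg keeps EasyMock happy;
    // recording the call as statsd.count("metric-name", 1L) made matching fail.
    statsd.count("metric-name", 1L, new String[]{});
    EasyMock.expectLastCall();
    EasyMock.replay(statsd);
    return statsd;
  }
}
```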
* Retain dimension key information for StatsD metrics
This doesn't change behavior, but allows separating dimensions from the metric
name in subsequent commits.
There is a possible order change for values from
`dimsBuilder.build().values()`, but from the tests it looks like it doesn't
affect actual behavior and the order of user dimensions is also retained.
* Support DogStatsD style tags in statsd-emitter
Datadog [doesn't support name-encoded dimensions and uses a concept of _tags_
instead.][1] This change allows Datadog users to send the metrics without
having to encode the various dimensions in the metric names. This enables
building graphs and monitors with and without aggregation across various
dimensions from the same data.
As tests in this commit verify, the behavior remains the same for users who
don't enable the `druid.emitter.statsd.dogstatsd` configuration flag.
[1]: https://www.datadoghq.com/blog/the-power-of-tagged-metrics/#tags-decouple-collection-and-reporting
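Conceptually, the change maps the same event two ways depending on the flag (illustrative code only, not the emitter itself):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class TagSketch
{
  public static void main(String[] args)
  {
    Map<String, String> dims = new LinkedHashMap<>();
    dims.put("dataSource", "wikipedia");
    dims.put("type", "timeseries");

    // Plain StatsD: dimensions are encoded into the metric name.
    String statsdName = "query.time." + String.join(".", dims.values());  // query.time.wikipedia.timeseries

    // DogStatsD: the name stays stable and dimensions travel as key:value tags.
    String dogstatsdName = "query.time";
    String[] tags = dims.entrySet().stream()
                        .map(e -> e.getKey() + ":" + e.getValue())
                        .toArray(String[]::new);  // ["dataSource:wikipedia", "type:timeseries"]

    System.out.println(statsdName + " vs " + dogstatsdName + " " + String.join(",", tags));
  }
}
```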
* Disable convertRange behavior for DogStatsD users
DogStatsD, unlike regular StatsD, supports floating-point values, so this
behavior is unnecessary. It would be possible to still support `convertRange`,
even with `dogstatsd` enabled, but that would mean that people using the
default mapping would have some of the gauges unnecessarily converted.
`time` is in milliseconds and doesn't support floating-point values.
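The resulting branching is roughly the following (a sketch; the assumption that `convertRange` scales a fractional gauge by 100 is for illustration only):

```java
interface GaugeSink
{
  void gauge(String name, long value);
  void gauge(String name, double value);
}

class GaugeEmitSketch
{
  static void emitGauge(GaugeSink sink, String name, double value, boolean dogstatsd, boolean convertRange)
  {
    if (dogstatsd) {
      sink.gauge(name, value);                    // DogStatsD gauges accept doubles directly
    } else if (convertRange) {
      sink.gauge(name, Math.round(value * 100));  // e.g. 0.87 -> 87 for plain StatsD
    } else {
      sink.gauge(name, Math.round(value));
    }
  }
}
```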
Also add a "fromIndex" argument to the strpos expression function. There
are some -1 and +1 adjustment terms due to the fact that the strpos
expression behaves like Java indexOf (0-indexed), but the POSITION SQL
function is 1-indexed.
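A worked example of the adjustment in plain Java:

```java
// strpos behaves like String.indexOf: 0-indexed, -1 when not found.
// SQL POSITION is 1-indexed, 0 when not found, and takes a 1-indexed FROM position.
static int sqlPosition(String haystack, String needle, int fromIndex1Based)
{
  int zeroBased = haystack.indexOf(needle, fromIndex1Based - 1);  // "-1": SQL -> Java index
  return zeroBased + 1;                                           // "+1": Java -> SQL index (maps -1 to 0)
}

// sqlPosition("druid", "ru", 1) == 2, while "druid".indexOf("ru") == 1.
```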
* move parquet-extensions from contrib to core; add a new Hadoop parquet parser that does not convert to Avro first and supports flattenSpec and int96 columns; add flattenSpec support for the parquet-avro conversion parser; add many tests with a bunch of files lifted from spark-sql
* fix avro flattener to support nullable primitives for auto discovery; it now only supports primitive arrays instead of all arrays
* remove leftover print
* convert micro timestamp to millis
* checkstyle
* add ignore for .parquet and .parq to rat exclude
* fix legit test failure from avro flattener behavior change
* fix rebase
* add exclusions to pom to cut down on redundant jars
* refactor tests, add support for unwrapping lists for parquet-avro, review comments
* more comment
* fix oops
* tweak parquet-avro list handling
* more docs
* fix style
* grr styles
* Period load/drop/broadcast rules should include the future by default
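The effect on a period rule's matching interval, sketched with Joda-Time (the method and parameter names are illustrative, not the Rule implementation):

```java
import org.joda.time.DateTime;
import org.joda.time.Interval;
import org.joda.time.Period;

class PeriodRuleIntervalSketch
{
  // With the future included (now the default) the interval is open-ended on the
  // right, so segments whose timestamps are after "now" also match the rule.
  static Interval ruleInterval(Period period, DateTime now, boolean includeFuture)
  {
    DateTime start = now.minus(period);
    DateTime end = includeFuture ? new DateTime(Long.MAX_VALUE / 2) : now;  // far-future sentinel
    return new Interval(start, end);
  }
}
```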
* address comments
* adjust coordinator console and tweak docs
* address comments
* fix travis-ci
* include mysql-metadata-storage extension in distribution, but without the GPL-licensed connector library
* Install mysql connector package
* use symlinks to avoid versioning issues
* add documentation for fetching the mysql connector
* add PrefixFilteredDimensionSpec for multi-value dimensions
* add docs for PrefixFilteredDimensionSpec
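Behaviour on a single multi-value row, as an illustration (not the DimensionSpec implementation itself):

```java
import java.util.List;
import java.util.stream.Collectors;

class PrefixFilterSketch
{
  // Only values carrying the configured prefix survive in each row.
  static List<String> filterByPrefix(List<String> multiValueRow, String prefix)
  {
    return multiValueRow.stream()
                        .filter(v -> v != null && v.startsWith(prefix))
                        .collect(Collectors.toList());
  }
  // filterByPrefix(["abc", "abx", "zzz"], "ab") -> ["abc", "abx"]
}
```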
* remove unnecessary null handling
* add null check to the result of NullHandling
* o- Query Response format to be based on the HTTP 'Accept' header & Query Payload content type to be based on the 'Content-Type' header
o- if the Accept header is absent, it defaults to the Content-Type header
* Feature: Query Response format to be based on http 'accept' header & Query Payload content type to be based on 'content-type' PR #4033
Minor change to a comment - restoring to previous wording
* Feature: Query Response format to be based on http 'accept' header & Query Payload content type to be based on 'content-type' PR #4033
o- minor change to check for empty string
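The negotiation rule boils down to the following (a sketch with a hypothetical helper, not the server code):

```java
class ResponseFormatSketch
{
  // The response format follows the Accept header; if Accept is absent or empty,
  // it falls back to the request's Content-Type.
  static String resolveResponseContentType(String acceptHeader, String contentTypeHeader)
  {
    if (acceptHeader != null && !acceptHeader.isEmpty()) {
      return acceptHeader;  // e.g. application/json or application/x-jackson-smile
    }
    return contentTypeHeader;
  }
}
```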
* Added SystemSchema with following tables (#5989)
* SEGMENTS table provides details on served and published segments
* SERVERS table provides details on data servers
* SERVER_SEGMENTS table is the JOIN of SEGMENTS and SERVERS
* TASKS table provides details on tasks
* Add documentation for system schema
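For example, the new tables can be queried over Druid SQL, e.g. via the Avatica JDBC driver (the broker URL is a placeholder; `num_rows` and `num_replicas` are the columns discussed below, other column names follow the system schema docs):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SysSegmentsExample
{
  public static void main(String[] args) throws Exception
  {
    // Requires the Avatica JDBC driver on the classpath.
    String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
    try (Connection connection = DriverManager.getConnection(url);
         Statement statement = connection.createStatement();
         ResultSet rs = statement.executeQuery(
             "SELECT segment_id, datasource, num_rows, num_replicas FROM sys.segments LIMIT 10")) {
      while (rs.next()) {
        System.out.println(rs.getString("segment_id") + " rows=" + rs.getLong("num_rows"));
      }
    }
  }
}
```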
* Fix static-analysis warnings
* Address PR comments
* Add unit tests
* Fix a test
* Try to fix a test
* Fix a bug around replica count
* rename io.druid to org.apache.druid
* Major change is to make tasks and segment queries streaming
* Made tasks/segments stream to calcite instead of storing it in memory
* Add num_rows to segments table
* Refactor JsonParserIterator
* Replace with closeable iterator
* Fix docs, make num_rows column nullable, some unit test changes
* make num_rows column type long, allow it to be null
fix a compile error after merge, add TrafficCop param to InputStreamResponseHandler
* Filter null rows for segments table from Linq4j enumerable
* change num_replicas datatype to long in segments table
* Fix some tests and address comments
* Doc updates, other PR comments
* Update tests
* Address comments
* Add auth check
* Update docs
* Refactoring
* Fix TeamCity warning, change getQueryableServer in TimelineServerView
* Fix compilation after rebase
* Use the stream API from AuthorizationUtils
* Added LeaderClient interface and NoopDruidLeaderClient class
* Revert "Added LeaderClient interface and NoopDruidLeaderClient class"
This reverts commit 100fa46e39.
* Make the naming consistent to server_segments for the join table
* Add ForbiddenException on auth check failure
* Remove static block from SystemSchema
* Try to fix a test in CalciteQueryTest due to rename of server_segments
* Fix the json output format in the coordinator API
* Add auth check in the segments API
* Add null check to avoid NPE
* Use anonymous class object instead of mock for DruidLeaderClient in SqlBenchmark
* Fix test failures, type long/BIGINT can be nullable
* Revert long nullability to fix tests
* Fix style for tests
* PR comments
* Address PR comments
* Add the missing BytesAccumulatingResponseHandler class
* Use Sequences.withBaggage in DruidPlanner
* Fix docs, add comments
* Close the iterator if hasNext returns false
* fix opentsdb emitter always being marked as running
* check if emitter started
* add more details about consumeDelay in doc
* fix possible thread-safety issue
* fix failure sending tags whose value contains a colon
* 'suspend' and 'resume' support for kafka indexing service
changes:
* introduces the `SuspendableSupervisorSpec` interface to describe supervisors which support suspend/resume functionality controlled through the `SupervisorManager`, which will gracefully shut down the supervisor and its tasks, update its `SupervisorSpec` with either a suspended or running state, and update with the toggled spec. Spec updates are provided by `SuspendableSupervisorSpec.createSuspendedSpec` and `SuspendableSupervisorSpec.createRunningSpec` respectively.
* `KafkaSupervisorSpec` extends `SuspendableSupervisorSpec` and now supports suspend/resume functionality. The difference in behavior between 'running' and 'suspended' state is whether the supervisor will attempt to ensure that indexing tasks are or are not running respectively. Behavior is identical otherwise.
* `SupervisorResource` now provides `/druid/indexer/v1/supervisor/{id}/suspend` and `/druid/indexer/v1/supervisor/{id}/resume` which are used to suspend/resume suspendable supervisors
* Deprecated `/druid/indexer/v1/supervisor/{id}/shutdown` and moved its functionality to `/druid/indexer/v1/supervisor/{id}/terminate`, since 'shutdown' is ambiguous verbiage for something that effectively stops a supervisor forever
* Added ability to get all supervisor specs from `/druid/indexer/v1/supervisor` by supplying the 'full' query parameter `/druid/indexer/v1/supervisor?full` which will return a list of json objects of the form `{"id":<id>, "spec":<SupervisorSpec>}`
* Updated overlord console ui to enable suspend/resume, and changed 'shutdown' to 'terminate'
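An example call against the new endpoint (the overlord host and supervisor id are placeholders; the path comes from the list above):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SuspendSupervisorExample
{
  public static void main(String[] args) throws Exception
  {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest suspend = HttpRequest.newBuilder()
        .uri(URI.create("http://overlord:8090/druid/indexer/v1/supervisor/my-supervisor/suspend"))
        .POST(HttpRequest.BodyPublishers.noBody())
        .build();
    HttpResponse<String> response = client.send(suspend, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```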
* move overlord console status to its own column in the supervisor table so it does not look like garbage
* spacing
* padding
* other kind of spacing
* fix rebase fail
* fix more better
* all supervisors now suspendable, updated materialized view supervisor to support suspend, more tests
* fix log
* Broker backpressure.
Adds a new property "druid.broker.http.maxQueuedBytes" and a new context
parameter "maxQueuedBytes". Both represent a maximum number of bytes queued
per query before exerting backpressure on the channel to the data server.
Fixes #4933.
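Both knobs as they would be set (the byte value below is just an example):

```java
import java.util.HashMap;
import java.util.Map;

class MaxQueuedBytesExample
{
  public static void main(String[] args)
  {
    // Cluster-wide default, in the broker's runtime.properties:
    //   druid.broker.http.maxQueuedBytes=25000000
    // Per-query override, via the query context:
    Map<String, Object> context = new HashMap<>();
    context.put("maxQueuedBytes", 25_000_000L);
    System.out.println(context);
  }
}
```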
* Fix query context doc.
* resolves #5898 by adding maxTotalRows to the incremental publishing kafka index task and the appenderator-based realtime indexing task, as available in IndexTask
* address review comments
* changes due to review
* merge fail
* Rename io.druid to org.apache.druid.
* Fix META-INF files and remove some benchmark results.
* MonitorsConfig update for metrics package migration.
* Reorder some dimensions in inner queries for some reason.
* Fix protobuf tests.
* add subtotalsSpec attribute to groupBy query
* don't send subtotalsSpec to downstream nodes from broker, and other updates
* address review comment
* fix checkstyle issues after merge to master
* add docs for subtotalsSpec feature
* address doc review comments
* SQL: Support more result formats, add columns header.
- Add result formats for line-based JSON and CSV.
- Add X-Druid-Sql-Columns header with a list of all columns that
the response will contain.
- Add more comprehensive documentation on what callers should expect
when making Druid SQL queries.
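An example request using one of the new formats (the broker URL is a placeholder; "csv" as a resultFormat value is an assumption based on the formats listed above):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SqlCsvExample
{
  public static void main(String[] args) throws Exception
  {
    String body = "{\"query\":\"SELECT COUNT(*) AS cnt FROM wikipedia\",\"resultFormat\":\"csv\"}";
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8082/druid/v2/sql/"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    // The new response header lists the columns the body will contain.
    System.out.println("X-Druid-Sql-Columns: "
        + response.headers().firstValue("X-Druid-Sql-Columns").orElse(""));
    System.out.println(response.body());
  }
}
```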
* Fix some tests.
* Adjust tests.
* Adjust trailer, add types header.
* Fix trailers.
* Add PostgreSQLConnectorConfig to expose SSL configuration options for the Postgres Metadata Storage module.
* Fix checkstyle violations and add license header
* Convert properties in the postgres docs to be the full property path and fix typo
* Fix grammar in sslFactory docs
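For illustration, the kind of runtime properties this exposes (property names here are assumptions apart from `sslFactory`, which is referenced above; check the PostgreSQL metadata storage docs for the exact set):

```java
import java.util.Properties;

class PostgresSslPropsSketch
{
  static Properties sslProperties()
  {
    Properties props = new Properties();
    // Illustrative property names and values only.
    props.setProperty("druid.metadata.postgres.ssl.useSSL", "true");
    props.setProperty("druid.metadata.postgres.ssl.sslFactory", "org.postgresql.ssl.NonValidatingFactory");
    return props;
  }
}
```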
* 'shutdownAllTasks' API for a dataSource
Change-Id: I30d14390457d39e0427d23a48f4f224223dc5777
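An example invocation (the overlord host, datasource, and exact path are assumptions based on the commit title and the 'fix api path' follow-up below):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class ShutdownAllTasksExample
{
  public static void main(String[] args) throws Exception
  {
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://overlord:8090/druid/indexer/v1/datasources/wikipedia/shutdownAllTasks"))
        .POST(HttpRequest.BodyPublishers.noBody())
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode());
  }
}
```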
* fix api path and return
Change-Id: Ib463f31ee2c4cb168cf2697f149be845b57c42e5
* optimize implementation
Change-Id: I50a8dcd44dd9d36c9ecbfa78e103eb9bff32eab9