`sessionToken`, `accessKey`, and `secretKey` must be updated atomically.
Another race is possible between the file updater and the Druid process reading the file. It could be prevented only with mandatory file locking, but file locking on Linux is advisory by default.
Not putting this in the 0.13 milestone because the bugs found are not critical (one is a harmless DI config duplicate, and the other is in a benchmark).
The change in `DumpSegment` is just an indentation change.
* Fix missing default config in some calls to coordinator dynamic configs.
The lack of a default config meant that if someone called an API
_without_ a default config before one _with_ a default config, the
default value would get stuck at null instead of the intended default.
I noticed this in a cluster where calling /druid/coordinator/v1/config
before a coordinator had fully started up would lead to NPEs during
DruidCoordinatorRuleRunner.
This patch makes the default configs consistent across all calls (see the sketch below).
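A minimal sketch of the failure mode and the fix, using hypothetical names rather than the actual coordinator config classes:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// All names here are hypothetical; this only illustrates the failure mode.
class DynamicConfigHolder<T>
{
  private final AtomicReference<T> ref = new AtomicReference<>();

  // Buggy pattern: a call path that passes no default can run first and
  // observe a null config, which downstream code then trips over.
  T get()
  {
    return ref.get();
  }

  // Fixed pattern: every call path supplies the same default, so an early
  // caller seeds the reference with the intended default instead of null.
  T get(Supplier<T> defaultConfig)
  {
    ref.compareAndSet(null, defaultConfig.get());
    return ref.get();
  }
}
```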
* Remove unnecessary null check.
* 1. Added support for the previously unused DateTime `start` parameter in the getRecentlyFinishedTaskInfoSince method:
HeapMemoryTaskStorage.getRecentlyFinishedTaskInfoSince returns the finished tasks by comparing TaskStuff.createdDate with the start time.
2. Added filtering by complete status to the TaskStuff list stream in the HeapMemoryTaskStorage.getNRecentlyFinishedTaskInfo method (see the sketch after this list).
3. Renamed methods and parameters to make clear that the public API method OverlordResource.getTasks returns the list of completed tasks whose createdDate, not date of completion, falls within the interval parameter.
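A hedged sketch of the filtering described above, with a simplified record standing in for the real `TaskStuff`:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import org.joda.time.DateTime;

// Simplified, hypothetical stand-in for HeapMemoryTaskStorage's internals.
class TaskStoreSketch
{
  record TaskStuff(String id, boolean complete, DateTime createdDate) {}

  private final List<TaskStuff> tasks;

  TaskStoreSketch(List<TaskStuff> tasks)
  {
    this.tasks = tasks;
  }

  // Completed tasks whose createdDate (not completion date) is at or after
  // `start`, newest first.
  List<TaskStuff> getRecentlyFinishedTaskInfoSince(DateTime start)
  {
    return tasks.stream()
                .filter(TaskStuff::complete)
                .filter(t -> !t.createdDate().isBefore(start))
                .sorted(Comparator.comparing(TaskStuff::createdDate).reversed())
                .collect(Collectors.toList());
  }
}
```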
* Fixed OverlordResourceTest to support the changed method names
* Renamed methods and parameters to make them easier to understand.
* Replaced String.replace() with StringUtils.replace() (#6607)
* Fixed checkstyle error
* Add checkstyle rules about imports and empty lines between members
* Add suppressions
* Update Eclipse import order
* Add empty line
* Fix StatsDEmitter
* Replace StatsD client library
The [Datadog package][1] is a StatsD-compatible drop-in replacement for the
client library, but it appears to be [better maintained][2] and supports
Datadog DogStatsD-specific features, which a subsequent commit will make
use of.
The `count`, `time`, and `gauge` methods are exactly compatible with the
previous library, so the modifications shouldn't be required, but EasyMock
has a hard time with the variable arguments added by the DogStatsD library
and causes tests to fail if no argument is provided for the last String
vararg. Passing an empty array fixes the test failures (see the sketch
after the references).
[1]: https://github.com/DataDog/java-dogstatsd-client
[2]: https://github.com/tim-group/java-statsd-client/issues/37#issuecomment-248698856
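A hedged sketch of the vararg workaround; `count(String, long, String...)` matches the DogStatsD client's interface, but the surrounding test wiring is illustrative:

```java
import com.timgroup.statsd.StatsDClient;
import org.easymock.EasyMock;

class StatsDEmitterTestSketch
{
  void expectCount()
  {
    StatsDClient client = EasyMock.createMock(StatsDClient.class);
    // Without the explicit empty array, EasyMock fails to match the call
    // because of the trailing String... tags parameter.
    client.count("druid.query.time", 1L, new String[]{});
    EasyMock.expectLastCall();
    EasyMock.replay(client);
    // ... exercise the emitter under test, then EasyMock.verify(client);
  }
}
```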
* Retain dimension key information for StatsD metrics
This doesn't change behavior, but allows separating dimensions from the metric
name in subsequent commits.
There is a possible order change for values from
`dimsBuilder.build().values()`, but from the tests it looks like it doesn't
affect actual behavior and the order of user dimensions is also retained.
* Support DogStatsD style tags in statsd-emitter
Datadog [doesn't support name-encoded dimensions and uses a concept of _tags_
instead.][1] This change allows Datadog users to send metrics without
having to encode the various dimensions in the metric names (illustrated
in the sketch below). This enables building graphs and monitors with and
without aggregation across various dimensions from the same data.
As tests in this commit verify, the behavior remains the same for users who
don't enable the `druid.emitter.statsd.dogstatsd` configuration flag.
[1]: https://www.datadoghq.com/blog/the-power-of-tagged-metrics/#tags-decouple-collection-and-reporting
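A hedged illustration of tagged emission with the DogStatsD client; the constructor signature is from older releases of java-dogstatsd-client, and the metric name and tag values are invented:

```java
import com.timgroup.statsd.NonBlockingStatsDClient;
import com.timgroup.statsd.StatsDClient;

class TaggedEmissionSketch
{
  public static void main(String[] args)
  {
    StatsDClient client = new NonBlockingStatsDClient("druid", "localhost", 8125);
    // Dimensions travel as tags rather than being encoded into the metric
    // name, so the same series can be graphed with or without aggregation
    // across any of them.
    client.time("query/time", 120, "dataSource:wikipedia", "type:timeseries");
  }
}
```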
* Disable convertRange behavior for DogStatsD users
DogStatsD, unlike regular StatsD, supports floating-point values, so this
behavior is unnecessary. It would be possible to still support `convertRange`
even with `dogstatsd` enabled, but that would mean that people using the
default mapping would have some of their gauges unnecessarily converted.
`time` is in milliseconds and doesn't support floating-point values.
* SystemSchema: Fix data types for various fields.
- segments: start, end, partition_num
- servers: plaintext_port, tls_port
- tasks: plaintext_port, tls_port
The declared and actual types did not match, but they must, or else
queries may generate ClassCastExceptions.
Also adjusted some of the value-generating code to be more robust in the
face of nulls or malformed strings (see the sketch below).
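A hypothetical helper illustrating the hardening; the real patch applies similar defensive conversions inside SystemSchema:

```java
import javax.annotation.Nullable;

class PortParsingSketch
{
  // Return -1 for absent or malformed port strings instead of letting the
  // query blow up mid-scan.
  static long parsePortOrDefault(@Nullable String port)
  {
    if (port == null) {
      return -1L;
    }
    try {
      return Long.parseLong(port);
    }
    catch (NumberFormatException e) {
      return -1L;
    }
  }
}
```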
* Fix style.
* Expressions: Fix improper supplier reuse with missing columns.
ExpressionSelectors has an optimization that skips building a Map
when there is only one input supplier. However, this optimization
should not be used when there is one input supplier but more than
one input identifier (which can happen when only one input identifier
corresponds to an actual column); see the sketch below.
Fixes #6556.
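A hedged sketch with hypothetical names; the real ExpressionSelectors logic is more involved, but the point is that the shortcut must look at identifiers, not suppliers:

```java
import java.util.List;
import java.util.function.Supplier;

class SupplierBindingSketch
{
  // A missing column still contributes an input identifier but no supplier,
  // so checking suppliers.size() alone is not enough.
  static boolean canSkipMap(List<String> inputIdentifiers, List<Supplier<Object>> suppliers)
  {
    return suppliers.size() == 1 && inputIdentifiers.size() == 1;
  }
}
```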
* Add underscores to statics.
* SQL: Set INFORMATION_SCHEMA catalog name to "druid".
Some third-party tools ignore catalogs with empty names, so using
the name "druid" for the catalog makes integration easier.
* Update tests.
Also add a "fromIndex" argument to the strpos expression function. There
are some -1 and +1 adjustment terms due to the fact that the strpos
expression behaves like Java indexOf (0-indexed), but the POSITION SQL
function is 1-indexed.
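A small worked example of the indexing adjustment, in plain Java rather than the Druid expression code:

```java
class PositionIndexingSketch
{
  public static void main(String[] args)
  {
    String haystack = "xxabxx";
    int sqlFrom = 3;                                // POSITION(... FROM 3): 1-indexed
    int idx = haystack.indexOf("ab", sqlFrom - 1);  // indexOf is 0-indexed -> 2
    int sqlPosition = idx + 1;                      // back to 1-indexed -> 3; 0 means not found
    System.out.println(sqlPosition);                // prints 3
  }
}
```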
* Optimization for expressions that hit a single long column.
There was previously a single-long-input optimization that applied only
to the time column; the two cases have now been combined. Also adds
type-specific value caching to ExprEval, which allowed simplifying
the SingleLongInputCachingExpressionColumnValueSelector code.
* Add more benchmarks.
* Don't use LRU cache for __time.
* Simplify a bit.
* Let the cache grow.
* Use BloomFilter instead of BloomKFilter, since the latter's test method is not thread-safe
* fix formatting
* style and forbidden api
* remove redundant hive notice entry
* add todo with note to delete copied implementation and link to related hive jira
* better fix for masks than ThreadLocal
* Use current coordinator leader instead of cached one (#6551)
Check the response status and throw an exception if it is not OK
* Modify tests
* PR comment
* Add the correct check for status of BytesAccumulatingResponseHandler
* Move the status check into JsonParserIterator so SQL queries output a meaningful message on failure (sketched below)
* Fix tests
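A hedged sketch of the check's shape; the real code lives in JsonParserIterator and reads the status from the response handler:

```java
import org.apache.druid.java.util.common.RE;

class StatusCheckSketch
{
  // Surface the HTTP error up front instead of failing later with an
  // opaque JSON parse error.
  static void checkStatus(int status, String description, String url)
  {
    if (status != 200) {
      throw new RE(
          "Unexpected response status [%s] description [%s] from request url [%s]",
          status,
          description,
          url
      );
    }
  }
}
```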
* Move parquet-extensions from contrib to core; adds a new Hadoop parquet parser that does not convert to Avro first and supports flattenSpec and int96 columns, adds flattenSpec support to the parquet-avro conversion parser, and adds many tests with a bunch of files lifted from spark-sql
* Fix the Avro flattener to support nullable primitives for auto discovery; it now only supports primitive arrays instead of all arrays
* remove leftover print
* convert micro timestamp to millis
* checkstyle
* add ignore for .parquet and .parq to rat exclude
* fix legit test failure from avro flattener behavior change
* fix rebase
* add exclusions to pom to cut down on redundant jars
* refactor tests, add support for unwrapping lists for parquet-avro, review comments
* more comment
* fix oops
* tweak parquet-avro list handling
* more docs
* fix style
* grr styles
* Period load/drop/broadcast rules should include the future by default
* address comments
* adjust coordinator console and tweak docs
* address comments
* fix travis-ci
This PR makes it possible to control the fields in `RequestLogEvent`, emitted by `EmittingRequestLogger`. In our case, we want to get rid of the `intervals` fields of the query objects that are part of `DefaultRequestLogEvent`: they are enormous (thousands of segments) and not useful.
Related to #5522, FYI @a2l007.