Druid stores timestamps down to the millisecond, so we should use
precision = 3. Getting this wrong sometimes caused the milliseconds in
timestamp literals to be ignored.
Fixes #5337.
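As a hedged sketch (not necessarily the code touched here), a millisecond default can be expressed through Calcite's type system by making TIMESTAMP default to precision 3:

```java
import org.apache.calcite.rel.type.RelDataTypeSystemImpl;
import org.apache.calcite.sql.type.SqlTypeName;

// Sketch: TIMESTAMP defaults to precision 3 (milliseconds), so literals like
// TIMESTAMP '2000-01-01 00:00:00.123' keep their millisecond part.
public class MillisTimestampTypeSystem extends RelDataTypeSystemImpl
{
  @Override
  public int getDefaultPrecision(SqlTypeName typeName)
  {
    if (typeName == SqlTypeName.TIMESTAMP) {
      return 3;
    }
    return super.getDefaultPrecision(typeName);
  }
}
```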
Fixes test failures reported in
https://github.com/druid-io/druid/issues/4909
The issue is that if a test skips setting up the Calcite system properties
with the proper encoding and then loads Calcite classes that read those
properties, all subsequent tests in the same JVM fail.
To reproduce the issue, run ExpressionsTest and then CalciteQueryTest from the
IDE in that order.
A better fix would be to avoid using system properties in Calcite; this
will work for now.
All newly added Calcite unit tests need to inherit CalciteTestBase.
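A minimal sketch of what such a shared base class implies, assuming the relevant settings are Calcite's saffron.default.* charset properties; the exact property names and values below are assumptions:

```java
import org.junit.BeforeClass;

// Sketch of a shared test base that pins Calcite's encoding-related system
// properties before any Calcite classes are loaded by a test.
public abstract class CalciteTestBase
{
  @BeforeClass
  public static void setUpCalciteProperties()
  {
    // Property names and values are assumptions for illustration. The point is
    // that they must be set before the first Calcite class reads them, because
    // Calcite caches them for the lifetime of the JVM.
    System.setProperty("saffron.default.charset", "UTF-16LE");
    System.setProperty("saffron.default.nationalcharset", "UTF-16LE");
    System.setProperty("saffron.default.collation.name", "UTF-16LE$en_US");
  }
}
```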
* Support for disabling bitmap indexes.
Can save space for columns where bitmap indexes are pointless (like
free-form text).
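A hedged sketch of what opting out of a bitmap index can look like at the dimension-schema level; the package layout and the three-argument constructor are assumptions for illustration, not necessarily the exact API added here:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.druid.data.input.impl.DimensionSchema;
import org.apache.druid.data.input.impl.DimensionSchema.MultiValueHandling;
import org.apache.druid.data.input.impl.StringDimensionSchema;

public class NoBitmapIndexExample
{
  // "comment" is free-form text: a bitmap index on it would cost space while
  // rarely being used for filtering, so it opts out with createBitmapIndex = false.
  public static List<DimensionSchema> dimensions()
  {
    return Arrays.<DimensionSchema>asList(
        new StringDimensionSchema("page"),
        new StringDimensionSchema("comment", MultiValueHandling.SORTED_ARRAY, false)
    );
  }
}
```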
* Remove import.
* Fix CompactionTaskTest.
* Update for review comments.
* Review comments, tests.
* Fix test.
* Fix missing task type in task payload API.
Apparently embedding a polymorphic object inside a Map<String, Object> is
a bit too much for Jackson to serialize properly. Fix this by using
wrapper classes.
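A hedged illustration of the wrapper-class approach described above; the types below are stand-ins, not the actual classes in this change:

```java
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;

// Stand-in for a polymorphic payload such as a task object.
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
@JsonSubTypes({@JsonSubTypes.Type(name = "noop", value = NoopPayload.class)})
interface Payload
{
}

class NoopPayload implements Payload
{
  @JsonProperty
  public String getId()
  {
    return "noop";
  }
}

// Per the note above, the type id was not reliably written when the payload sat
// inside a Map<String, Object>. Wrapping it in a class whose field is declared
// as the polymorphic base type gives Jackson the static type it needs to emit
// the "type" property.
class PayloadResponse
{
  private final Payload payload;

  @JsonCreator
  PayloadResponse(@JsonProperty("payload") Payload payload)
  {
    this.payload = payload;
  }

  @JsonProperty
  public Payload getPayload()
  {
    return payload;
  }
}
```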
* Fix OverlordTest casts.
* Remove import.
* Remove unused imports.
* Clarify comments.
* Add a properties endpoint to the status resource (a sketch follows this group)
* Checkstyle fixes
* More checkstyle corrections
* Correct the resource filter for the properties endpoint
* Add the ability to hide sensitive properties
* Checkstyle changes
* Review changes: add default hidden properties and use Jackson for array values
* Make review changes
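A minimal sketch of such an endpoint, assuming JAX-RS and an injected set of property names to hide; the path, names, and filtering rule are illustrative:

```java
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.TreeMap;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Sketch of a status sub-resource that exposes runtime properties while
// filtering out sensitive ones. The hidden-property set would come from
// configuration, with sensible defaults (e.g. password-related keys).
@Path("/status/properties")
public class PropertiesResource
{
  private final Properties properties;
  private final Set<String> hiddenProperties;

  public PropertiesResource(Properties properties, Set<String> hiddenProperties)
  {
    this.properties = properties;
    this.hiddenProperties = hiddenProperties;
  }

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public Map<String, String> getProperties()
  {
    final Map<String, String> visible = new TreeMap<>();
    for (String name : properties.stringPropertyNames()) {
      if (!hiddenProperties.contains(name)) {
        visible.put(name, properties.getProperty(name));
      }
    }
    return visible;
  }
}
```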
* Support map type in the ORC extension.
Added getMapObject in OrcHadoopInputRowParser.
Updated parse tests to parse a map-type field in OrcHadoopInputRowParserTest.
* Changed from a for-loop to foreach.
* Added resolution of column names when map types are exploded into several
columns (see the sketch after this group). Updated the documentation (orc.md) as well.
* Update orc.md
Change from review.
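A hedged sketch of the column-name resolution idea: each entry of a map-typed field becomes its own row column. The `<field>_<key>` naming used here is an assumption, not necessarily what the parser does:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: explode a map-typed ORC field into one column per key, so a field
// "tags" with keys {"os", "browser"} yields columns "tags_os" and "tags_browser".
// The naming scheme is an assumption for illustration.
public class MapFieldExploder
{
  public static Map<String, Object> explode(String fieldName, Map<String, Object> mapValue)
  {
    final Map<String, Object> columns = new LinkedHashMap<>();
    for (Map.Entry<String, Object> entry : mapValue.entrySet()) {
      columns.put(fieldName + "_" + entry.getKey(), entry.getValue());
    }
    return columns;
  }
}
```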
* Support Hadoop batch ingestion for druid-azure-extensions #5181
* Fix indentation issues
* Fix forbidden-apis violation
* Code & doc improvements for azure-extensions
* Rename version to binaryVersion where appropriate to avoid confusion
* Set default protocol to wasbs://, as recommended by the Azure docs (path
construction is sketched after this group)
* Add link to Azure documentation for wasb(s):// paths
* Remove any colons from dataSegment.getVersion()
* Added a test verifying that the colons in dataSegment.getVersion() are replaced
* Use StringUtils.format for String concatenation
* Remove empty lines
* Remove unneeded StringUtils.format from log.info
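A hedged sketch of the path handling these commits describe: default to the wasbs:// scheme and strip the colons from the segment version (an ISO-8601 timestamp) before using it in a blob path. The account/container layout and method names are illustrative:

```java
// Sketch: build an Azure blob path for a segment. The wasbs:// scheme is the
// secure default recommended by the Azure docs; colons in the version (an
// ISO-8601 timestamp) are removed because they are not blob-path friendly.
// The exact layout here is an assumption for illustration.
public class AzureSegmentPaths
{
  public static String indexPath(
      String account,
      String container,
      String dataSource,
      String interval,
      String version,
      int partitionNum
  )
  {
    final String safeVersion = version.replace(":", "");
    return String.format(
        "wasbs://%s@%s.blob.core.windows.net/%s/%s/%s/%d/index.zip",
        container,
        account,
        dataSource,
        interval,
        safeVersion,
        partitionNum
    );
  }
}
```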
* Fix early publishing to early pushing in batch indexing & refactor AppenderatorDriver
* Fix compile
* Rename and add more Javadocs
* Fix conflicts
* Address comments
* Revert await executors
* Fix test
* Add the opentsdb emitter extension
* Add docs for the opentsdb emitter extension
* Update the opentsdb emitter doc
* Add the ms unit to the constant name
* Add a configurable event limit
* Fix version to 0.13.0-SNAPSHOT
* Use a thread to consume metric events
* Rename method and parameter
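A hedged sketch of the pattern these commits describe: metric events go into a bounded queue (the configurable event limit) and a dedicated thread drains them to OpenTSDB. Class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: bounded buffering plus a single consumer thread, so emitting a metric
// never blocks the caller on a network round trip, and memory use is capped by
// the configurable event limit.
public class OpentsdbEventQueue
{
  private final BlockingQueue<String> events;

  public OpentsdbEventQueue(int maxQueuedEvents)
  {
    this.events = new ArrayBlockingQueue<>(maxQueuedEvents);
  }

  /** Called by the emitter; drops the event if the queue is full. */
  public boolean offer(String jsonEvent)
  {
    return events.offer(jsonEvent);
  }

  /** Run on a dedicated consumer thread. */
  public void consumeLoop() throws InterruptedException
  {
    final List<String> batch = new ArrayList<>();
    while (!Thread.currentThread().isInterrupted()) {
      final String first = events.poll(1, TimeUnit.SECONDS);
      if (first == null) {
        continue;
      }
      batch.add(first);
      events.drainTo(batch);
      send(batch);
      batch.clear();
    }
  }

  private void send(List<String> batch)
  {
    // Illustrative only: post the batch to OpenTSDB's /api/put endpoint.
  }
}
```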
* Update post-aggregations.md
I think this is clearer. I am not sure how multiplying by 100 is involved in
averaging...
* Update post-aggregations.md
Add an additional aggregator
* Update post-aggregations.md
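For the arithmetic being documented: an average is a sum divided by a count, and a factor of 100 only enters when a ratio is rendered as a percentage. A trivial illustration (not Druid's post-aggregator syntax):

```java
// average = sum / count; multiplying by 100 is a separate step, used only to
// express a ratio as a percentage.
public class AverageMath
{
  public static double average(double totalSum, long rowCount)
  {
    return totalSum / rowCount;
  }

  public static double asPercentage(double ratio)
  {
    return ratio * 100.0;
  }
}
```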
* Fix two improper casts in HavingSpecMetricComparator.
Fixes two things:
1. An improper double-to-long cast when comparing double metrics to any
kind of value, which was a regression from #4883.
2. An improper double-to-long cast when comparing a long/int metric to a
double/float value: the value was cast to long/int, drawing strange
conclusions like int 100 matching a havingSpec of equalTo(100.5).
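A hedged sketch of the comparison rule the fix implies: compare a long/int metric against a double/float target without truncating the target, so 100 does not spuriously equal 100.5. This is illustrative, not the comparator's actual code:

```java
// Sketch: compare a long-typed metric with a double-typed havingSpec value.
// Casting the target down to long (the old behavior) turns 100.5 into 100 and
// makes equalTo(100.5) match a metric of 100; comparing in double space does not.
// A production comparator would also need to guard against double precision
// loss for very large long values.
public class LongVsDoubleComparison
{
  public static int compare(long metricValue, double havingValue)
  {
    return Double.compare((double) metricValue, havingValue);
  }
}
```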
* Add comments.
* Remove extraneous comment.
* Simplify code a bit.