The SegmentMetadataQuery fails when issued with {"bySegment" : true} set in the query context, throwing:
java.lang.ClassCastException: io.druid.query.Result cannot be cast to io.druid.query.metadata.metadata.SegmentAnalysis
at io.druid.query.metadata.SegmentMetadataQueryQueryToolChest$4.compare(SegmentMetadataQueryQueryToolChest.java:222) ~[druid-processing-0.7.3-SNAPSHOT.jar:0.7.3-SNAPSHOT]
at com.google.common.collect.NullsFirstOrdering.compare(NullsFirstOrdering.java:44) ~[guava-16.0.1.jar:?]
at com.metamx.common.guava.MergeIterator$1.compare(MergeIterator.java:46) ~[java-util-0.27.0.jar:?]
at com.metamx.common.guava.MergeIterator$1.compare(MergeIterator.java:42) ~[java-util-0.27.0.jar:?]
at java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:649) ~[?:1.7.0_80]
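For context (an illustration, not from the original report): with bySegment=true each per-segment result comes back wrapped in a Result object, so a merge comparator written against SegmentAnalysis is handed a differently typed element at runtime. A minimal, self-contained Java sketch of the same failure mode, using hypothetical stand-in classes:

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Hypothetical stand-ins for io.druid.query.Result and SegmentAnalysis.
    class SegmentAnalysis {
      final String id;
      SegmentAnalysis(String id) { this.id = id; }
    }
    class Result {}

    public class CastFailureDemo {
      public static void main(String[] args) {
        // The comparator assumes every element is a SegmentAnalysis...
        Comparator<Object> byId =
            (a, b) -> ((SegmentAnalysis) a).id.compareTo(((SegmentAnalysis) b).id);
        PriorityQueue<Object> merge = new PriorityQueue<>(byId);
        merge.add(new SegmentAnalysis("seg1"));
        // ...but bySegment-style wrapping hands it a Result instead:
        merge.add(new Result()); // ClassCastException from siftUpUsingComparator
      }
    }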
* Inverted function arguments when composing postProcFn for GroupBy queries with a HavingSpec + LimitSpec.
* Replaced query.getLimitSpec() with null in GroupByQueryToolChest's mergeGroupByResults.
* Added a unit test to verify the fix.
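For illustration (an assumption about the bug class, not taken from the commit): Guava's Functions.compose(g, f) applies f first and then g, so swapping the arguments reverses the order in which the having and limit post-processing steps run. A minimal sketch:

    import com.google.common.base.Function;
    import com.google.common.base.Functions;

    public class ComposeOrderDemo {
      public static void main(String[] args) {
        Function<Integer, Integer> applyHaving = x -> x + 1;  // stand-in for the having step
        Function<Integer, Integer> applyLimit  = x -> x * 10; // stand-in for the limit step

        // compose(g, f) == g(f(x)): applyHaving runs FIRST here, then applyLimit.
        Function<Integer, Integer> rightOrder = Functions.compose(applyLimit, applyHaving);
        // With the arguments inverted, the limit runs before the having filter.
        Function<Integer, Integer> wrongOrder = Functions.compose(applyHaving, applyLimit);

        System.out.println(rightOrder.apply(1)); // (1 + 1) * 10 = 20
        System.out.println(wrongOrder.apply(1)); // (1 * 10) + 1 = 11
      }
    }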
Handle the case where a complex aggregator does not implement inputSizeFn(), so
that the segment metadata query still shows at least some information.
Also, return the type of the data stored instead of COMPLEX.
Renamed all [Max/Min]*.java to [DoubleMax/DoubleMin]*.java and created [Max/Min]AggregatorFactory.java, which can be removed once backward compatibility with the old min/max aggregator types is no longer needed.
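A minimal sketch of that backward-compatibility shim, assuming heavily simplified class shapes (the real factories take more parameters and Jackson serde annotations):

    // Hypothetical simplification of the renamed factory...
    class DoubleMaxAggregatorFactory {
      protected final String name;
      protected final String fieldName;
      DoubleMaxAggregatorFactory(String name, String fieldName) {
        this.name = name;
        this.fieldName = fieldName;
      }
    }

    // ...and a deprecated alias kept only so that queries using the old
    // "max" aggregator type keep working; delete once compatibility is
    // no longer needed.
    @Deprecated
    class MaxAggregatorFactory extends DoubleMaxAggregatorFactory {
      MaxAggregatorFactory(String name, String fieldName) {
        super(name, fieldName);
      }
    }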
Fixes #1211: fix handling of dimensions having only null values, fix the value matcher, and further fixes for partially null columns.
IndexMaker speedups
* About 15% speedup
Address review comments, fix failing tests, and fix compilation.
Fixes #1330.
Avoid creating a Period instance, since creating a Period from Long.MAX_VALUE
throws an ArithmeticException.
After this change, the query metric emits duration in seconds instead of
minutes.
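To make the failure concrete (the overflow is stated above; the demo shape is mine): Joda-Time splits a millisecond duration into int-sized period fields, which cannot hold Long.MAX_VALUE worth of milliseconds. Computing the duration directly avoids the Period entirely:

    import org.joda.time.Period;

    public class PeriodOverflowDemo {
      public static void main(String[] args) {
        long millis = Long.MAX_VALUE;
        try {
          // Converting the millis into int-sized fields (weeks, days, ...)
          // overflows and throws ArithmeticException.
          new Period(millis);
        } catch (ArithmeticException e) {
          System.out.println("overflow: " + e.getMessage());
        }
        // The fix: skip Period and emit the duration in seconds directly.
        long seconds = millis / 1000L;
        System.out.println(seconds);
      }
    }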
* Add some unit tests
* Add io.druid.segment.IndexMerger.reprocess for quick re-indexing of data
* Add dim-value validation to the validation checker (instead of only comparing index numbers)
* General code refactoring to make things a little easier to read
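A hedged sketch of the dim-value validation idea (names and row structure are illustrative, not the actual IndexMerger code): compare the actual dimension values row by row rather than trusting that equal dictionary indexes mean equal values:

    import java.util.List;
    import java.util.Objects;

    class DimValueValidator {
      // Hypothetical row shape: each row is a list of dimension values.
      static void validate(List<List<String>> expected, List<List<String>> actual) {
        if (expected.size() != actual.size()) {
          throw new IllegalStateException("row count mismatch");
        }
        for (int row = 0; row < expected.size(); row++) {
          List<String> e = expected.get(row), a = actual.get(row);
          for (int dim = 0; dim < e.size(); dim++) {
            // Compare the values themselves, not just their dictionary indexes.
            if (!Objects.equals(e.get(dim), a.get(dim))) {
              throw new IllegalStateException(
                  "dim value mismatch at row " + row + ", dim " + dim);
            }
          }
        }
      }
    }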
on postAggregations which are removed in the forwarded query.
Add a unit test to replicate the issue, and add a query that reproduces the
issue to the integration tests.
- Compression for single-value dimensions using CompressedVSizeIntsIndexedSupplier
- Makes dimension compression configurable via IndexSpec
- IndexSpec also enables configuring bitmap and metric compression
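For reference, an indexSpec is configured inside a task's tuningConfig; a hedged example (key names as documented for Druid's indexSpec, values illustrative): "indexSpec" : {"bitmap" : {"type" : "roaring"}, "dimensionCompression" : "lz4", "metricCompression" : "lz4"}.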
Also, remove some comments that did not have enough context to actually make sense to anyone but the original author (at least, I hope they make sense to the author, I definitely don't know what was being said).
* Requires https://github.com/druid-io/druid-api/pull/37
* Requires https://github.com/metamx/java-util/pull/22
* Moves the puller logic to a more standard workflow that goes through java-util helpers instead of rewriting the handlers for each implementation
* The general workflow: 1) LoadSpec makes sure the correct Puller is called with the correct parameters; 2) the Puller sets up general information like how to make an InputStream, how to find a file name (for .gz files, for example), and when to retry; 3) CompressionUtils does most of the heavy lifting when it can
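A sketch of that three-step workflow, assuming heavily simplified interfaces (the real LoadSpec/puller APIs carry more parameters and retry policies):

    import java.io.File;
    import java.io.IOException;
    import java.io.InputStream;

    // 1) A LoadSpec knows which puller to invoke and with what parameters.
    interface LoadSpec {
      void loadSegment(File destDir) throws IOException;
    }

    // 2) A puller describes how to open the stream and name the local file...
    interface SegmentPuller {
      InputStream openStream() throws IOException;
      String fileName();
    }

    // 3) ...and a shared helper does the generic copy/decompress heavy lifting,
    // instead of each puller re-implementing it.
    final class PullerHelper {
      static void pull(SegmentPuller puller, File destDir) throws IOException {
        File out = new File(destDir, puller.fileName());
        try (InputStream in = puller.openStream()) {
          // In Druid this step is delegated to CompressionUtils / java-util helpers.
          java.nio.file.Files.copy(in, out.toPath());
        }
      }
    }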
- Moves DimExtractionFn under a more generic ExtractionFn interface to
support extracting dimension values other than strings
- pushes extractionFn down from the query engine to the storage adapter
- the 'dimExtractionFn' parameter has been deprecated in favor of 'extractionFn'
- adds a TimeFormatExtractionFn, allowing the '__time' dimension to be projected
- JavascriptDimExtractionFn renamed to JavascriptExtractionFn, adding support
  for any dimension value types that map directly to JavaScript
- update documentation for time column extraction and related changes
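A minimal sketch of the generalized interface and a time-format implementation, assuming simplified signatures (the real ExtractionFn has more methods and serde annotations):

    import org.joda.time.DateTime;
    import org.joda.time.DateTimeZone;
    import org.joda.time.format.DateTimeFormat;
    import org.joda.time.format.DateTimeFormatter;

    // Generalized: extraction is no longer limited to String dimension values.
    interface ExtractionFn {
      String apply(Object value);
      String apply(long value); // e.g. the '__time' column
    }

    // Sketch of a TimeFormatExtractionFn: formats the long __time millis.
    class TimeFormatExtractionFn implements ExtractionFn {
      private final DateTimeFormatter formatter;

      TimeFormatExtractionFn(String format, DateTimeZone tz) {
        this.formatter = DateTimeFormat.forPattern(format).withZone(tz);
      }

      @Override
      public String apply(long value) {
        return formatter.print(new DateTime(value));
      }

      @Override
      public String apply(Object value) {
        return apply(((Number) value).longValue());
      }
    }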
* Add a test for a same-name groupBy hyperUniques post-agg
* Add a test for a same-name post-agg in groupBy with approximate histograms
* Fixes https://github.com/druid-io/druid/issues/1045
* Throws an error if post aggs and aggs do not have unique names
* Add more groupBy tests for Having filters
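The uniqueness check can be as simple as collecting all aggregator and post-aggregator names into one set; a hedged sketch (the method shape is illustrative, not the actual Druid code):

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class NameValidation {
      // Ensure aggregator and post-aggregator names do not collide.
      static void validateUniqueNames(List<String> aggNames, List<String> postAggNames) {
        Set<String> seen = new HashSet<>();
        for (String name : aggNames) {
          if (!seen.add(name)) {
            throw new IllegalArgumentException("duplicate aggregator name: " + name);
          }
        }
        for (String name : postAggNames) {
          if (!seen.add(name)) {
            throw new IllegalArgumentException(
                "post-aggregator name collides with an existing name: " + name);
          }
        }
      }
    }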
- Switch to using Druid parent POM
- Add required fields for Sonatype
- Common plugin versions and settings have been moved to the parent pom
- Cleanup artifacts and POMs for consistent formatting
- Remove org.hyperic.sigar dependency and update docs to reflect necessary jars to add at runtime when sigar is needed