Merge remote-tracking branch 'upstream/master'

This commit is contained in:
Glenn Nethercutt 2014-10-21 21:56:32 -04:00
commit 9fd0606a42
89 changed files with 2336 additions and 609 deletions

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -159,4 +159,4 @@ Uses [HyperLogLog](http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf) to
```json
{ "type" : "hyperUnique", "name" : <output_name>, "fieldName" : <metric_name> }
```
```

View File

@ -162,37 +162,58 @@ The indexing process has the ability to roll data up as it processes the incomin
### Partitioning specification
Segments are always partitioned based on timestamp (according to the granularitySpec) and may be further partitioned in some other way depending on partition type.
Druid supports two types of partitions spec - singleDimension and hashed.
Segments are always partitioned based on timestamp (according to the granularitySpec) and may be further partitioned in
some other way depending on partition type. Druid supports two types of partitioning strategies: "hashed" (based on the
hash of all dimensions in each row), and "dimension" (based on ranges of a single dimension).
In the singleDimension partition type, data is partitioned based on the values in that dimension.
For example, data for a day may be split by the dimension "last\_name" into two segments: one with all values from A-M and one with all values from N-Z.
Hashed partitioning is recommended in most cases, as it will improve indexing performance and create more uniformly
sized data segments relative to single-dimension partitioning.
In the hashed partition type, the number of partitions is determined from the targetPartitionSize and the cardinality of the input set, and the data is partitioned based on the hashcode of the row.
Hashed partitioning is recommended as it is more efficient than singleDimension partitioning, since it does not need to determine a dimension on which to create partitions.
Hashing also gives a better distribution of data, resulting in equally sized partitions and improved query performance.
To have Druid automatically determine optimal partitions, the indexer must be given a target partition size. It can then find a good set of partition ranges on its own.
#### Configuration for disabling auto-sharding and creating a fixed number of partitions
Druid can be configured to skip the determine-partitions step and create a fixed number of shards by specifying numShards in the hashed partitionsSpec.
For example, this configuration will skip determining optimal partitions and always create 4 shards for every segment granularity interval:
#### Hash-based partitioning
```json
"partitionsSpec": {
"type": "hashed"
"numShards": 4
"type": "hashed",
"targetPartitionSize": 5000000
}
```
Hashed partitioning works by first selecting a number of segments, and then partitioning rows across those segments
according to the hash of all dimensions in each row. The number of segments is determined automatically based on the
cardinality of the input set and a target partition size.
The configuration options are:
|property|description|required?|
|--------|-----------|---------|
|type|type of partitionSpec to be used |no, default : singleDimension|
|targetPartitionSize|target number of rows to include in a partition, should be a number that targets segments of 700MB\~1GB.|yes|
|type|type of partitionSpec to be used |"hashed"|
|targetPartitionSize|target number of rows to include in a partition, should be a number that targets segments of 500MB\~1GB.|either this or numShards|
|numShards|specify the number of partitions directly, instead of a target partition size. Ingestion will run faster, since it can skip the step necessary to select a number of partitions automatically.|either this or targetPartitionSize|
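As noted in the table, `numShards` can be used in place of `targetPartitionSize` to skip the automatic shard-count selection. A minimal sketch of a hashed spec with a fixed shard count (the value 4 is illustrative):

```json
"partitionsSpec": {
  "type": "hashed",
  "numShards": 4
}
```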
#### Single-dimension partitioning
```json
"partitionsSpec": {
"type": "dimension",
"targetPartitionSize": 5000000
}
```
Single-dimension partitioning works by first selecting a dimension to partition on, and then separating that dimension
into contiguous ranges. Each segment will contain all rows with values of that dimension in that range. For example,
your segments may be partitioned on the dimension "host" using the ranges "a.example.com" to "f.example.com" and
"f.example.com" to "z.example.com". By default, the dimension to use is determined automatically, although you can
override it with a specific dimension.
The configuration options are:
|property|description|required?|
|--------|-----------|---------|
|type|type of partitionSpec to be used |"dimension"|
|targetPartitionSize|target number of rows to include in a partition, should be a number that targets segments of 500MB\~1GB.|yes|
|maxPartitionSize|maximum number of rows to include in a partition. Defaults to 50% larger than the targetPartitionSize.|no|
|partitionDimension|the dimension to partition on. Leave blank to select a dimension automatically.|no|
|assumeGrouped|assume input data has already been grouped on time and dimensions. This is faster, but can choose suboptimal partitions if the assumption is violated.|no|
|numShards|provides a way to manually override Druid's auto-sharding and specify the number of shards to create for each segment granularity interval. It is only supported by the hashed partitionSpec, and targetPartitionSize must be set to -1|no|
|assumeGrouped|assume input data has already been grouped on time and dimensions. Ingestion will run faster, but can choose suboptimal partitions if the assumption is violated.|no|
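A sketch that combines the optional fields from the table above; the dimension name and sizes are illustrative:

```json
"partitionsSpec": {
  "type": "dimension",
  "targetPartitionSize": 5000000,
  "maxPartitionSize": 7500000,
  "partitionDimension": "host",
  "assumeGrouped": false
}
```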
### Updater job spec

View File

@ -20,9 +20,7 @@ io.druid.cli.Main server coordinator
Rules
-----
Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different historical node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The coordinator loads a set of rules from the database. Rules may be specific to a certain datasource and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The coordinator will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule.
For more information on rules, see [Rule Configuration](Rule-Configuration.html).
Segments can be automatically loaded and dropped from the cluster based on a set of rules. For more information on rules, see [Rule Configuration](Rule-Configuration.html).
Cleaning Up Segments
--------------------

View File

@ -1,6 +1,11 @@
---
layout: doc_page
---
Druid vs. Cassandra
===================
We are not experts on Cassandra; if anything is incorrect about our portrayal, please let us know on the mailing list or via some other means, and we will fix this page.
Druid is highly optimized for scans and aggregations; it supports arbitrarily deep drill-downs into data sets without the need to pre-compute, and it can ingest event streams in real-time and allow users to query events as they come in. Cassandra is a great key-value store, and it has some features that allow you to use it to do more interesting things than what you can do with a pure key-value store. But it is not built for the same use cases that Druid handles, namely regularly scanning over billions of entries per query.

View File

@ -2,6 +2,10 @@
layout: doc_page
---
Druid vs Hadoop
===============
Hadoop has shown the world that it's possible to house your data warehouse on commodity hardware for a fraction of the price of typical solutions. As people adopt Hadoop for their data warehousing needs, they find two things.
1. They can now query all of their data in a fairly flexible manner and answer any question they have

View File

@ -1,6 +1,10 @@
---
layout: doc_page
---
Druid vs Impala or Shark
========================
The question of Druid versus Impala or Shark basically comes down to your product requirements and what the systems were designed to do.
Druid was designed to

View File

@ -1,6 +1,10 @@
---
layout: doc_page
---
Druid vs Redshift
=================
### How does Druid compare to Redshift?
In terms of drawing a differentiation, Redshift is essentially ParAccel (Actian) which Amazon is licensing.

View File

@ -1,6 +1,11 @@
---
layout: doc_page
---
Druid vs Vertica
================
How does Druid compare to Vertica?
Vertica is similar to ParAccel/Redshift ([Druid-vs-Redshift](Druid-vs-Redshift.html)) described above in that it wasn't built for real-time streaming data ingestion and it supports full SQL.

View File

@ -19,13 +19,13 @@ Clone Druid and build it:
git clone https://github.com/metamx/druid.git druid
cd druid
git fetch --tags
git checkout druid-0.6.156
git checkout druid-0.6.159
./build.sh
```
### Downloading the DSK (Druid Standalone Kit)
[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.156-bin.tar.gz) a stand-alone tarball and run it:
[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.159-bin.tar.gz) a stand-alone tarball and run it:
``` bash
tar -xzf druid-services-0.X.X-bin.tar.gz

View File

@ -21,13 +21,13 @@ Duration granularities are specified as an exact duration in milliseconds and ti
They also support specifying an optional origin, which defines where to start counting time buckets from (defaults to 1970-01-01T00:00:00Z).
```
```javascript
{"type": "duration", "duration": "7200000"}
```
This chunks up every 2 hours.
```
```javascript
{"type": "duration", "duration": "3600000", "origin": "2012-01-01T00:30:00Z"}
```
@ -39,13 +39,13 @@ Period granularities are specified as arbitrary period combinations of years, mo
Time zone is optional (defaults to UTC). Origin is optional (defaults to 1970-01-01T00:00:00 in the given time zone).
```
```javascript
{"type": "period", "period": "P2D", "timeZone": "America/Los_Angeles"}
```
This will bucket by two-day chunks in the Pacific timezone.
```
```javascript
{"type": "period", "period": "P3M", "timeZone": "America/Los_Angeles", "origin": "2012-02-01T00:00:00-08:00"}
```

View File

@ -0,0 +1,360 @@
---
layout: doc_page
---
Example Production Hadoop Configuration
=======================================
The following configuration should work relatively well for Druid indexing and Hadoop. In the example, we are using Hadoop 2.4 with EC2 m1.xlarge nodes for NameNodes and cc2.8xlarge nodes for DataNodes.
### Core-site.xml
```
<configuration>
<!-- Temporary directory on HDFS (but also sometimes local!) -->
<property>
<name>hadoop.tmp.dir</name>
<value>/mnt/persistent/hadoop</value>
</property>
<!-- S3 -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://#{IP}:9000</value>
</property>
<property>
<name>fs.s3.impl</name>
<value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
</property>
<property>
<name>fs.s3.awsAccessKeyId</name>
<value>#{S3_ACCESS_KEY}</value>
</property>
<property>
<name>fs.s3.awsSecretAccessKey</name>
<value>#{S3_SECRET_KEY}</value>
</property>
<property>
<name>fs.s3.buffer.dir</name>
<value>/mnt/persistent/hadoop-s3n</value>
</property>
<property>
<name>fs.s3n.awsAccessKeyId</name>
<value>#{S3N_ACCESS_KEY}</value>
</property>
<property>
<name>fs.s3n.awsSecretAccessKey</name>
<value>#{S3N_SECRET_KEY}</value>
</property>
<!-- Compression -->
<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.Lz4Codec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<!-- JBOD -->
<property>
<name>io.seqfile.local.dir</name>
<value>/mnt/persistent/hadoop/io/local</value>
</property>
</configuration>
```
### Mapred-site.xml
```
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>#{JT_ADDR}:9001</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>#{JT_HTTP_ADDR}:9100</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>#{JH_ADDR}:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>#{JH_WEBAPP_ADDR}:19888</value>
</property>
<property>
<name>mapreduce.tasktracker.http.address</name>
<value>#{TT_ADDR}:9103</value>
</property>
<!-- Memory and concurrency tuning -->
<property>
<name>mapreduce.job.reduces</name>
<value>21</value>
</property>
<property>
<name>mapreduce.job.jvm.numtasks</name>
<value>20</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-server -Xmx1536m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>6144</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-server -Xmx2560m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps</value>
</property>
<property>
<name>mapreduce.reduce.shuffle.parallelcopies</name>
<value>50</value>
</property>
<property>
<name>mapreduce.reduce.shuffle.input.buffer.percent</name>
<value>0.5</value>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>256</value>
</property>
<property>
<name>mapreduce.task.io.sort.factor</name>
<value>100</value>
</property>
<property>
<name>mapreduce.jobtracker.handler.count</name>
<value>64</value>
</property>
<property>
<name>mapreduce.tasktracker.http.threads</name>
<value>20</value>
</property>
<!-- JBOD -->
<property>
<name>mapreduce.cluster.local.dir</name>
<value>/mnt/persistent/hadoop/mapred/local</value>
</property>
<!-- Job history server persistent state -->
<property>
<name>mapreduce.jobhistory.recovery.enable</name>
<value>true</value>
</property>
<property>
<name>mapreduce.jobhistory.recovery.store.class</name>
<value>org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService</value>
</property>
<property>
<name>mapreduce.jobhistory.recovery.store.fs.uri</name>
<value>file://${hadoop.tmp.dir}/mapred-jobhistory-state</value>
</property>
<!-- Compression -->
<property>
<!-- Off by default, because it breaks Druid indexing (at least, it does in druid-0.6.10+). Jobs should turn
it on if they need it. -->
<name>mapreduce.output.fileoutputformat.compress</name>
<value>false</value>
</property>
<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress.type</name>
<value>BLOCK</value>
</property>
<property>
<name>mapreduce.map.output.compress.codec</name>
<value>org.apache.hadoop.io.compress.Lz4Codec</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress.codec</name>
<value>org.apache.hadoop.io.compress.GzipCodec</value>
</property>
<!-- Speculative execution would violate various assumptions we've made in our system design -->
<property>
<name>mapreduce.map.speculative</name>
<value>false</value>
</property>
<property>
<name>mapreduce.reduce.speculative</name>
<value>false</value>
</property>
<!-- Sometimes jobs take a long time to run, but really, they're okay. Examples: Long index persists,
hadoop reading lots of empty files into a single mapper. Let's increase the timeout to 30 minutes. -->
<property>
<name>mapreduce.task.timeout</name>
<value>1800000</value>
</property>
</configuration>
```
### Yarn-site.xml
```
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>#{RM_HOSTNAME}</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log.server.url</name>
<value>http://#{IP_LOG_SERVER}:19888/jobhistory/logs/</value>
</property>
<property>
<name>yarn.nodemanager.hostname</name>
<value>#{IP_ADDR}</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<!-- JBOD -->
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/mnt/persistent/hadoop/nm-local-dir</value>
</property>
<!-- ResourceManager persistent state doesn't work well in tests yet, so disable it -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value>
</property>
<property>
<name>yarn.resourcemanager.fs.state-store.uri</name>
<value>file://${hadoop.tmp.dir}/yarn-resourcemanager-state</value>
</property>
<!-- Ability to exclude hosts -->
<property>
<name>yarn.resourcemanager.nodes.exclude-path</name>
<value>/mnt/persistent/hadoop/yarn-exclude.txt</value>
</property>
</configuration>
```
### HDFS-site.xml
```
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>/mnt/persistent/hadoop/hdfs-exclude.txt</value>
</property>
<!-- JBOD -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///mnt/persistent/hadoop/dfs/data</value>
</property>
</configuration>
```
### Capacity-scheduler.xml
```
<configuration>
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>0.1</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.queues</name>
<value>default</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
<value>1</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.state</name>
<value>RUNNING</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
<value>*</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
<value>*</value>
</property>
<property>
<name>yarn.scheduler.capacity.node-locality-delay</name>
<value>-1</value>
</property>
</configuration>
```

View File

@ -1,6 +1,11 @@
---
layout: doc_page
---
## What types of data does Druid support?
Druid can ingest JSON, CSV, TSV and other delimited data out of the box. Druid supports single dimension values, or multiple dimension values (an array of strings). Druid supports long and float numeric columns.
## Where do my Druid segments end up after ingestion?
Depending on what `druid.storage.type` is set to, Druid will upload segments to some [Deep Storage](Deep-Storage.html). Local disk is used as the default deep storage.
@ -21,6 +26,14 @@ druid.storage.bucket=druid
druid.storage.baseKey=sample
```
Other common reasons that hand-off fails are as follows:
1) Historical nodes are out of capacity and cannot download any more segments. You'll see exceptions in the coordinator logs if this occurs.
2) Segments are corrupt and cannot be downloaded. You'll see exceptions in your historical nodes if this occurs.
3) Deep storage is improperly configured. Make sure that your segment actually exists in deep storage and that the coordinator logs have no errors.
## How do I get HDFS to work?
Make sure to include the `druid-hdfs-storage` module as one of your extensions and set `druid.storage.type=hdfs`.
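A minimal sketch of the relevant runtime properties, assuming HDFS deep storage; the extension version, namenode address, and storage path are illustrative:

```
druid.extensions.coordinates=[...,"io.druid.extensions:druid-hdfs-storage:0.6.159",...]
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://namenode:9000/druid/segments
```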
@ -35,7 +48,7 @@ You can check the coordinator console located at `<COORDINATOR_IP>:<PORT>/cluste
## My queries are returning empty results
You can check `<BROKER_IP>:<PORT>/druid/v2/datasources/<YOUR_DATASOURCE>?interval=0/3000` for the dimensions and metrics that have been created for your datasource. Make sure that the name of the aggregators you use in your query match one of these metrics. Also make sure that the query interval you specify match a valid time range where data exists. Note: the broker endpoint will only return valid results on historical segments.
You can check `<BROKER_IP>:<PORT>/druid/v2/datasources/<YOUR_DATASOURCE>?interval=0/3000` for the dimensions and metrics that have been created for your datasource. Make sure that the names of the aggregators you use in your query match one of these metrics. Also make sure that the query interval you specify matches a valid time range where data exists. Note: the broker endpoint will only return valid results on historical segments and not segments served by real-time nodes.
## How can I Reindex existing data in Druid with schema changes?
@ -50,6 +63,9 @@ To do this use the IngestSegmentFirehose and run an indexer task. The IngestSegm
Typically the above will be run as a batch job that, say, feeds in a chunk of data every day and aggregates it.
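A sketch of the firehose portion of such an indexer task, assuming the `ingestSegment` firehose type; the dataSource and interval values are illustrative:

```json
"firehose" : {
  "type" : "ingestSegment",
  "dataSource" : "wikipedia",
  "interval" : "2014-10-01/2014-10-02"
}
```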
## Real-time ingestion seems to be stuck
There are a few ways this can occur. Druid will throttle ingestion to prevent out of memory problems if the intermediate persists are taking too long or if hand-off is taking too long. If your node logs indicate certain columns are taking a very long time to build (for example, if your segment granularity is hourly, but creating a single column takes 30 minutes), you should re-evaluate your configuration or scale up your real-time ingestion.
## More information

View File

@ -8,9 +8,9 @@ The previous examples are for Kafka 7. To support Kafka 8, a couple changes need
- Update realtime node's configs for Kafka 8 extensions
- e.g.
- `druid.extensions.coordinates=[...,"io.druid.extensions:druid-kafka-seven:0.6.156",...]`
- `druid.extensions.coordinates=[...,"io.druid.extensions:druid-kafka-seven:0.6.159",...]`
- becomes
- `druid.extensions.coordinates=[...,"io.druid.extensions:druid-kafka-eight:0.6.156",...]`
- `druid.extensions.coordinates=[...,"io.druid.extensions:druid-kafka-eight:0.6.159",...]`
- Update realtime task config for changed keys
- `firehose.type`, `plumber.rejectionPolicyFactory`, and all of `firehose.consumerProps` changes.

View File

@ -57,7 +57,7 @@ druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/overlord
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.156"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.159"]
druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod
@ -139,7 +139,7 @@ druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/middlemanager
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.156","io.druid.extensions:druid-kafka-seven:0.6.156"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.159","io.druid.extensions:druid-kafka-seven:0.6.159"]
druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod
@ -286,7 +286,7 @@ druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/historical
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.156"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.159"]
druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod

View File

@ -66,7 +66,7 @@ The dataSource JSON field shown next identifies where to apply the query. In thi
"dataSource": "randSeq",
```
The granularity JSON field specifies the bucket size for values. It could be a built-in time interval like "second", "minute", "fifteen_minute", "thirty_minute", "hour" or "day". It can also be an expression like `{"type": "period", "period":"PT6m"}` meaning "6 minute buckets". See [Granularities](Granularities.html) for more information on the different options for this field. In this example, it is set to the special value "all" which means [bucket all data points together into the same time bucket]()
The granularity JSON field specifies the bucket size for values. It could be a built-in time interval like "second", "minute", "fifteen_minute", "thirty_minute", "hour" or "day". It can also be an expression like `{"type": "period", "period":"PT6m"}` meaning "6 minute buckets". See [Granularities](Granularities.html) for more information on the different options for this field. In this example, it is set to the special value "all" which means bucket all data points together into the same time bucket.
```javascript
"granularity": "all",
@ -88,7 +88,7 @@ A groupBy also requires the JSON field "aggregations" (See [Aggregations](Aggreg
],
```
You can also specify postAggregations, which are applied after data has been aggregated for the current granularity and dimensions bucket. See [Post Aggregations](Post Aggregations.html) for a detailed description. In the rand example, an arithmetic type operation (division, as specified by "fn") is performed with the result "name" of "avg_random". The "fields" field specifies the inputs from the aggregation stage to this expression. Note that identifiers corresponding to "name" JSON field inside the type "fieldAccess" are required but not used outside this expression, so they are prefixed with "dummy" for clarity:
You can also specify postAggregations, which are applied after data has been aggregated for the current granularity and dimensions bucket. See [Post Aggregations](Post-aggregations.html) for a detailed description. In the rand example, an arithmetic type operation (division, as specified by "fn") is performed with the result "name" of "avg_random". The "fields" field specifies the inputs from the aggregation stage to this expression. Note that identifiers corresponding to "name" JSON field inside the type "fieldAccess" are required but not used outside this expression, so they are prefixed with "dummy" for clarity:
```javascript
"postAggregations": [{
@ -127,13 +127,13 @@ Properties shared by all query types
|timeseries, topN, groupBy, search|filter|Specifies the filter (the "WHERE" clause in SQL) for the query. See [Filters](Filters.html)|no|
|timeseries, topN, groupBy, search|granularity|the timestamp granularity to bucket results into (i.e. "hour"). See [Granularities](Granularities.html) for more information.|no|
|timeseries, topN, groupBy|aggregations|aggregations that combine values in a bucket. See [Aggregations](Aggregations.html).|yes|
|timeseries, topN, groupBy|postAggregations|aggregations of aggregations. See [Post Aggregations](Post Aggregations.html).|yes|
|timeseries, topN, groupBy|postAggregations|aggregations of aggregations. See [Post Aggregations](Post-aggregations.html).|yes|
|groupBy|dimensions|constrains the groupings; if empty, then one value per time granularity bucket|yes|
|search|limit|maximum number of results (default is 1000), a system-level maximum can also be set via `com.metamx.query.search.maxSearchLimit`|no|
|search|searchDimensions|Dimensions to apply the search query to. If not specified, it will search through all dimensions.|no|
|search|query|The query portion of the search query. This is essentially a predicate that specifies if something matches.|yes|
Query Context
<a name="query-context"></a>Query Context
-------------
|property |default | description |

View File

@ -27,7 +27,7 @@ druid.host=localhost
druid.service=realtime
druid.port=8083
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.156"]
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.159"]
druid.zk.service.host=localhost
@ -76,7 +76,7 @@ druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/realtime
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.156","io.druid.extensions:druid-kafka-seven:0.6.156"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.159","io.druid.extensions:druid-kafka-seven:0.6.159"]
druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod

View File

@ -0,0 +1,35 @@
---
layout: doc_page
---
Recommendations
===============
# Use UTC Timezone
We recommend using the UTC timezone for all your events and across all of your nodes, not just for Druid, but for all data infrastructure. This can greatly mitigate potential query problems with inconsistent timezones.
# Use Lowercase Strings for Column Names
Druid is not perfect in how it handles mixed-case dimension and metric names. This will hopefully change very soon, but for the time being, lowercasing your column names is recommended.
# SSDs
SSDs are highly recommended for historical and real-time nodes if you are not running a cluster that is entirely in memory. SSDs can greatly mitigate the time required to page data in and out of memory.
# Provide Column Names in Lexicographic Order
Although Druid supports schema-less ingestion of dimensions, because of [https://github.com/metamx/druid/issues/658](https://github.com/metamx/druid/issues/658), you may sometimes get bigger segments than necessary. To ensure segments are as compact as possible, providing dimension names in lexicographic order is recommended.
# Use Timeseries and TopN Queries Instead of GroupBy Where Possible
Timeseries and TopN queries are much more optimized and significantly faster than groupBy queries for their designed use cases. Issuing multiple topN or timeseries queries from your application can potentially be more efficient than a single groupBy query.
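For reference, a minimal sketch of a timeseries query; the datasource, interval, and aggregator names are illustrative:

```json
{
  "queryType": "timeseries",
  "dataSource": "sample_datasource",
  "granularity": "hour",
  "aggregations": [ { "type": "longSum", "name": "edits", "fieldName": "count" } ],
  "intervals": [ "2014-10-01/2014-10-07" ]
}
```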
# Read FAQs
You should read common problems people have here:
1) [Ingestion-FAQ](Ingestion-FAQ.html)
2) [Performance-FAQ](Performance-FAQ.html)

docs/content/Router.md (new file, 133 lines)
View File

@ -0,0 +1,133 @@
---
layout: doc_page
---
Router Node
===========
You should only ever need the router node if you have a Druid cluster well into the terabyte range. The router node can be used to route queries to different broker nodes. By default, the broker routes queries based on how [Rules](Rules.html) are set up. For example, if 1 month of recent data is loaded into a `hot` cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This setup provides query isolation such that queries for more important data are not impacted by queries for less important data.
Running
-------
```
io.druid.cli.Main server router
```
Example Production Configuration
--------------------------------
In this example, we have two tiers in our production cluster: `hot` and `_default_tier`. Queries for the `hot` tier are routed through the `broker-hot` set of brokers, and queries for the `_default_tier` are routed through the `broker-cold` set of brokers. If any exceptions or network problems occur, queries are routed to the `broker-cold` set of brokers. In our example, we are running with a c3.2xlarge EC2 node.
JVM settings:
```
-server
-Xmx13g
-Xms13g
-XX:NewSize=256m
-XX:MaxNewSize=256m
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+UseLargePages
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/mnt/galaxy/deploy/current/
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/mnt/tmp
-Dcom.sun.management.jmxremote.port=17071
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
```
Runtime.properties:
```
druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/router
druid.extensions.remoteRepositories=[]
druid.extensions.localRepository=lib
druid.extensions.coordinates=["io.druid.extensions:druid-histogram:0.6.159"]
druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod
druid.discovery.curator.path=/prod/discovery
druid.processing.numThreads=1
druid.router.defaultBrokerServiceName=druid:prod:broker-cold
druid.router.coordinatorServiceName=druid:prod:coordinator
druid.router.tierToBrokerMap={"hot":"druid:prod:broker-hot","_default_tier":"druid:prod:broker-cold"}
druid.router.http.numConnections=50
druid.router.http.readTimeout=PT5M
druid.server.http.numThreads=100
druid.request.logging.type=emitter
druid.request.logging.feed=druid_requests
druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]
druid.emitter=http
druid.emitter.http.recipientBaseUrl=#{URL}
druid.curator.compress=true
```
Runtime Configuration
---------------------
The router module uses several of the default modules in [Configuration](Configuration.html) and has the following set of configurations as well:
|Property|Possible Values|Description|Default|
|--------|---------------|-----------|-------|
|`druid.router.defaultBrokerServiceName`|Any string.|The default broker to connect to in case service discovery fails.|"". Must be set.|
|`druid.router.tierToBrokerMap`|An ordered JSON map of tiers to broker names. The priority of brokers is based on the ordering.|Queries for a certain tier of data are routed to their appropriate broker.|{"_default": "<defaultBrokerServiceName>"}|
|`druid.router.defaultRule`|Any string.|The default rule for all datasources.|"_default"|
|`druid.router.rulesEndpoint`|Any string.|The coordinator endpoint to extract rules from.|"/druid/coordinator/v1/rules"|
|`druid.router.coordinatorServiceName`|Any string.|The service discovery name of the coordinator.|null. Must be set.|
|`druid.router.pollPeriod`|Any ISO8601 duration.|How often to poll for new rules.|PT1M|
|`druid.router.strategies`|An ordered JSON array of objects.|All custom strategies to use for routing.|[{"type":"timeBoundary"},{"type":"priority"}]|
Router Strategies
-----------------
The router has a configurable list of strategies for how it selects which brokers to route queries to. The order of the strategies matters because as soon as a strategy condition is matched, a broker is selected.
### timeBoundary
```json
{
"type":"timeBoundary"
}
```
Including this strategy means all timeBoundary queries are always routed to the highest priority broker.
### priority
```json
{
"type":"priority",
"minPriority":0,
"maxPriority":1
}
```
Queries with a priority set to less than minPriority are routed to the lowest priority broker. Queries with priority set to greater than maxPriority are routed to the highest priority broker. By default, minPriority is 0 and maxPriority is 1. Using these default values, if a query with priority 0 (the default query priority is 0) is sent, the query skips the priority selection logic.
### javascript
Allows defining arbitrary routing rules using a JavaScript function. The function is passed the configuration and the query to be executed, and returns the tier it should be routed to, or null for the default tier.
*Example*: a function that returns the highest priority broker unless the given query has more than two aggregators.
```json
{
"type" : "javascript",
"function" : "function (config, query) { if (config.getTierToBrokerMap().values().size() > 0 && query.getAggregatorSpecs && query.getAggregatorSpecs().size() <= 2) { return config.getTierToBrokerMap().values().toArray()[0] } else { return config.getDefaultBrokerServiceName() } }"
}
```

View File

@ -2,12 +2,34 @@
layout: doc_page
---
# Configuring Rules for Coordinator Nodes
Rules indicate how segments should be assigned to different historical node tiers and how many replicas of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The coordinator loads a set of rules from the metadata storage. Rules may be specific to a certain datasource and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The coordinator will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule.
Note: It is recommended that the coordinator console is used to configure rules. However, the coordinator node does have HTTP endpoints to programmatically configure rules.
Load Rules
----------
Load rules indicate how many replicants of a segment should exist in a server tier.
Load rules indicate how many replicas of a segment should exist in a server tier.
### Forever Load Rule
Forever load rules are of the form:
```json
{
"type" : "loadForever",
"tieredReplicants": {
"hot": 1,
"_default_tier" : 1
}
}
```
* `type` - this should always be "loadForever"
* `tieredReplicants` - A JSON Object where the keys are the tier names and values are the number of replicas for that tier.
### Interval Load Rule
@ -16,14 +38,17 @@ Interval load rules are of the form:
```json
{
"type" : "loadByInterval",
"interval" : "2012-01-01/2013-01-01",
"tier" : "hot"
"interval": "2012-01-01/2013-01-01",
"tieredReplicants": {
"hot": 1,
"_default_tier" : 1
}
}
```
* `type` - this should always be "loadByInterval"
* `interval` - A JSON Object representing ISO-8601 Intervals
* `tier` - the configured historical node tier
* `tieredReplicants` - A JSON Object where the keys are the tier names and values are the number of replicas for that tier.
### Period Load Rule
@ -33,13 +58,16 @@ Period load rules are of the form:
{
"type" : "loadByPeriod",
"period" : "P1M",
"tier" : "hot"
"tieredReplicants": {
"hot": 1,
"_default_tier" : 1
}
}
```
* `type` - this should always be "loadByPeriod"
* `period` - A JSON Object representing ISO-8601 Periods
* `tier` - the configured historical node tier
* `tieredReplicants` - A JSON Object where the keys are the tier names and values are the number of replicas for that tier.
The interval of a segment will be compared against the specified period. The rule matches if the period overlaps the interval.
@ -48,6 +76,21 @@ Drop Rules
Drop rules indicate when segments should be dropped from the cluster.
### Forever Drop Rule
Forever drop rules are of the form:
```json
{
"type" : "dropForever"
}
```
* `type` - this should always be "dropForever"
All segments that match this rule are dropped from the cluster.
### Interval Drop Rule
Interval drop rules are of the form:
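A sketch of what such a rule typically looks like, following the shape of the other rules above; the interval value is illustrative:

```json
{
  "type" : "dropByInterval",
  "interval" : "2012-01-01/2013-01-01"
}
```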

View File

@ -28,7 +28,7 @@ Configuration:
-Ddruid.zk.service.host=localhost
-Ddruid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.156"]
-Ddruid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.159"]
-Ddruid.db.connector.connectURI=jdbc:mysql://localhost:3306/druid
-Ddruid.db.connector.user=druid

View File

@ -1,12 +0,0 @@
---
layout: doc_page
---
YourKit supports the Druid open source projects with its
full-featured Java Profiler.
YourKit, LLC is the creator of innovative and intelligent tools for profiling
Java and .NET applications. Take a look at YourKit's software products:
<a href="http://www.yourkit.com/java/profiler/index.jsp">YourKit Java
Profiler</a> and
<a href="http://www.yourkit.com/.net/profiler/index.jsp">YourKit .NET
Profiler</a>.

View File

@ -49,7 +49,7 @@ There are two ways to setup Druid: download a tarball, or [Build From Source](Bu
### Download a Tarball
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.156-bin.tar.gz). Download this file to a directory of your choosing.
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.159-bin.tar.gz). Download this file to a directory of your choosing.
You can extract the awesomeness within by issuing:
@ -60,7 +60,7 @@ tar -zxvf druid-services-*-bin.tar.gz
Not too lost so far right? That's great! If you cd into the directory:
```
cd druid-services-0.6.156
cd druid-services-0.6.159
```
You should see a bunch of files:

View File

@ -91,7 +91,7 @@ druid.service=overlord
druid.zk.service.host=localhost
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.156"]
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.159"]
druid.db.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.db.connector.user=druid

View File

@ -13,7 +13,7 @@ In this tutorial, we will set up other types of Druid nodes and external depende
If you followed the first tutorial, you should already have Druid downloaded. If not, let's go back and do that first.
You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-services-0.6.156-bin.tar.gz)
You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-services-0.6.159-bin.tar.gz)
and untar the contents within by issuing:
@ -149,7 +149,7 @@ druid.port=8081
druid.zk.service.host=localhost
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.156"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.159"]
# Dummy read only AWS account (used to download example data)
druid.s3.secretKey=QyyfVZ7llSiRg6Qcrql1eEUG7buFpAK6T6engr1b
@ -240,7 +240,7 @@ druid.port=8083
druid.zk.service.host=localhost
druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.156","io.druid.extensions:druid-kafka-seven:0.6.156"]
druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.159","io.druid.extensions:druid-kafka-seven:0.6.159"]
# Change this config to db to hand off to the rest of the Druid cluster
druid.publish.type=noop

View File

@ -37,7 +37,7 @@ There are two ways to setup Druid: download a tarball, or [Build From Source](Bu
h3. Download a Tarball
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.156-bin.tar.gz)
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.159-bin.tar.gz)
Download this file to a directory of your choosing.
You can extract the awesomeness within by issuing:
@ -48,7 +48,7 @@ tar zxvf druid-services-*-bin.tar.gz
Not too lost so far right? That's great! If you cd into the directory:
```
cd druid-services-0.6.156
cd druid-services-0.6.159
```
You should see a bunch of files:

View File

@ -9,7 +9,7 @@ There are two ways to setup Druid: download a tarball, or build it from source.
# Download a Tarball
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.156-bin.tar.gz).
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.159-bin.tar.gz).
Download this bad boy to a directory of your choosing.
You can extract the awesomeness within by issuing:

View File

@ -37,17 +37,6 @@ When Druid?
* You want to do your analysis on data as it's happening (in real-time)
* You need a data store that is always available, 24x7x365, and years into the future.
Not Druid?
----------
* The amount of data you have can easily be handled by MySQL
* You're querying for individual entries or doing lookups (not analytics)
* Batch ingestion is good enough
* Canned queries are good enough
* Downtime is no big deal
Druid vs…
----------
@ -60,7 +49,7 @@ Druid vs…
About This Page
----------
The data store world is vast, confusing and constantly in flux. This page is meant to help potential evaluators decide whether Druid is a good fit for the problem one needs to solve. If anything about it is incorrect please provide that feedback on the mailing list or via some other means so we can fix it.
The data infrastructure world is vast, confusing and constantly in flux. This page is meant to help potential evaluators decide whether Druid is a good fit for the problem one needs to solve. If anything about it is incorrect please provide that feedback on the mailing list or via some other means so we can fix it.

View File

@ -17,7 +17,9 @@ h2. Getting Started
h2. Booting a Druid Cluster
* "Simple Cluster Configuration":Simple-Cluster-Configuration.html
* "Production Cluster Configuration":Production-Cluster-Configuration.html
* "Production Hadoop Configuration":Hadoop-Configuration.html
* "Rolling Cluster Updates":Rolling-Updates.html
* "Recommendations":Recommendations.html
h2. Configuration
* "Common Configuration":Configuration.html
@ -84,6 +86,7 @@ h2. Experimental
* "Geographic Queries":./GeographicQueries.html
* "Select Query":./SelectQuery.html
* "Approximate Histograms and Quantiles":./ApproxHisto.html
* "Router node":./Router.html
h2. Development
* "Versioning":./Versioning.html
@ -91,4 +94,4 @@ h2. Development
* "Libraries":./Libraries.html
h2. Misc
* "Thanks":./Thanks.html
* "Thanks":/thanks.html

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -27,7 +27,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -1019,8 +1019,6 @@ public class ApproximateHistogram
* @param count current size of the heap
* @param heapIndex index of the item to be deleted
* @param values values stored in the heap
*
* @return
*/
private static int heapDelete(int[] heap, int[] reverseIndex, int count, int heapIndex, float[] values)
{

View File

@ -29,8 +29,14 @@ import com.google.common.primitives.Floats;
import com.google.common.primitives.Ints;
import io.druid.query.aggregation.Aggregator;
import io.druid.query.aggregation.AggregatorFactory;
import io.druid.query.aggregation.Aggregators;
import io.druid.query.aggregation.BufferAggregator;
import io.druid.query.aggregation.hyperloglog.HyperLogLogCollector;
import io.druid.query.aggregation.hyperloglog.HyperUniquesAggregator;
import io.druid.query.aggregation.hyperloglog.HyperUniquesBufferAggregator;
import io.druid.segment.ColumnSelectorFactory;
import io.druid.segment.FloatColumnSelector;
import io.druid.segment.ObjectColumnSelector;
import org.apache.commons.codec.binary.Base64;
import java.nio.ByteBuffer;
@ -113,7 +119,7 @@ public class ApproximateHistogramAggregatorFactory implements AggregatorFactory
@Override
public AggregatorFactory getCombiningFactory()
{
return new ApproximateHistogramAggregatorFactory(name, name, resolution, numBuckets, lowerLimit, upperLimit);
return new ApproximateHistogramFoldingAggregatorFactory(name, name, resolution, numBuckets, lowerLimit, upperLimit);
}
@Override

View File

@ -76,7 +76,8 @@ public class ApproximateHistogramFoldingAggregatorFactory extends ApproximateHis
};
}
if (ApproximateHistogram.class.isAssignableFrom(selector.classOfObject())) {
final Class cls = selector.classOfObject();
if (cls.equals(Object.class) || ApproximateHistogram.class.isAssignableFrom(cls)) {
return new ApproximateHistogramFoldingAggregator(
name,
selector,
@ -89,7 +90,7 @@ public class ApproximateHistogramFoldingAggregatorFactory extends ApproximateHis
throw new IAE(
"Incompatible type for metric[%s], expected a ApproximateHistogram, got a %s",
fieldName,
selector.classOfObject()
cls
);
}
@ -117,14 +118,15 @@ public class ApproximateHistogramFoldingAggregatorFactory extends ApproximateHis
};
}
if (ApproximateHistogram.class.isAssignableFrom(selector.classOfObject())) {
final Class cls = selector.classOfObject();
if (cls.equals(Object.class) || ApproximateHistogram.class.isAssignableFrom(cls)) {
return new ApproximateHistogramFoldingBufferAggregator(selector, resolution, lowerLimit, upperLimit);
}
throw new IAE(
"Incompatible type for metric[%s], expected a ApproximateHistogram, got a %s",
fieldName,
selector.classOfObject()
cls
);
}

View File

@ -84,5 +84,12 @@ public class Histogram
return result;
}
}
@Override
public String toString()
{
return "Histogram{" +
"breaks=" + Arrays.toString(breaks) +
", counts=" + Arrays.toString(counts) +
'}';
}
}

View File

@ -0,0 +1,215 @@
/*
* Druid - a distributed column store.
* Copyright (C) 2014 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.query.aggregation.histogram;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Function;
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import com.google.common.collect.Iterables;
import com.google.common.collect.Lists;
import io.druid.collections.StupidPool;
import io.druid.data.input.Row;
import io.druid.jackson.DefaultObjectMapper;
import io.druid.query.QueryRunner;
import io.druid.query.QueryRunnerTestHelper;
import io.druid.query.aggregation.PostAggregator;
import io.druid.query.dimension.DefaultDimensionSpec;
import io.druid.query.dimension.DimensionSpec;
import io.druid.query.groupby.GroupByQuery;
import io.druid.query.groupby.GroupByQueryConfig;
import io.druid.query.groupby.GroupByQueryEngine;
import io.druid.query.groupby.GroupByQueryQueryToolChest;
import io.druid.query.groupby.GroupByQueryRunnerFactory;
import io.druid.query.groupby.GroupByQueryRunnerTestHelper;
import io.druid.query.groupby.orderby.DefaultLimitSpec;
import io.druid.query.groupby.orderby.OrderByColumnSpec;
import io.druid.segment.TestHelper;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import javax.annotation.Nullable;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
/**
*/
@RunWith(Parameterized.class)
public class ApproximateHistogramGroupByQueryTest
{
private final QueryRunner<Row> runner;
private GroupByQueryRunnerFactory factory;
@Parameterized.Parameters
public static Collection<?> constructorFeeder() throws IOException
{
final ObjectMapper mapper = new DefaultObjectMapper();
final StupidPool<ByteBuffer> pool = new StupidPool<ByteBuffer>(
new Supplier<ByteBuffer>()
{
@Override
public ByteBuffer get()
{
return ByteBuffer.allocate(1024 * 1024);
}
}
);
final GroupByQueryConfig config = new GroupByQueryConfig();
config.setMaxIntermediateRows(10000);
final Supplier<GroupByQueryConfig> configSupplier = Suppliers.ofInstance(config);
final GroupByQueryEngine engine = new GroupByQueryEngine(configSupplier, pool);
final GroupByQueryRunnerFactory factory = new GroupByQueryRunnerFactory(
engine,
QueryRunnerTestHelper.NOOP_QUERYWATCHER,
configSupplier,
new GroupByQueryQueryToolChest(configSupplier, mapper, engine)
);
GroupByQueryConfig singleThreadedConfig = new GroupByQueryConfig()
{
@Override
public boolean isSingleThreaded()
{
return true;
}
};
singleThreadedConfig.setMaxIntermediateRows(10000);
final Supplier<GroupByQueryConfig> singleThreadedConfigSupplier = Suppliers.ofInstance(singleThreadedConfig);
final GroupByQueryEngine singleThreadEngine = new GroupByQueryEngine(singleThreadedConfigSupplier, pool);
final GroupByQueryRunnerFactory singleThreadFactory = new GroupByQueryRunnerFactory(
singleThreadEngine,
QueryRunnerTestHelper.NOOP_QUERYWATCHER,
singleThreadedConfigSupplier,
new GroupByQueryQueryToolChest(singleThreadedConfigSupplier, mapper, singleThreadEngine)
);
Function<Object, Object> function = new Function<Object, Object>()
{
@Override
public Object apply(@Nullable Object input)
{
return new Object[]{factory, ((Object[]) input)[0]};
}
};
return Lists.newArrayList(
Iterables.concat(
Iterables.transform(
QueryRunnerTestHelper.makeQueryRunners(factory),
function
),
Iterables.transform(
QueryRunnerTestHelper.makeQueryRunners(singleThreadFactory),
function
)
)
);
}
public ApproximateHistogramGroupByQueryTest(GroupByQueryRunnerFactory factory, QueryRunner runner)
{
this.factory = factory;
this.runner = runner;
}
@Test
public void testGroupByWithApproximateHistogramAgg()
{
ApproximateHistogramAggregatorFactory aggFactory = new ApproximateHistogramAggregatorFactory(
"apphisto",
"index",
10,
5,
Float.NEGATIVE_INFINITY,
Float.POSITIVE_INFINITY
);
GroupByQuery query = new GroupByQuery.Builder()
.setDataSource(QueryRunnerTestHelper.dataSource)
.setGranularity(QueryRunnerTestHelper.allGran)
.setDimensions(
Arrays.<DimensionSpec>asList(
new DefaultDimensionSpec(
QueryRunnerTestHelper.providerDimension,
"proViderAlias"
)
)
)
.setInterval(QueryRunnerTestHelper.fullOnInterval)
.setLimitSpec(
new DefaultLimitSpec(
Lists.newArrayList(
new OrderByColumnSpec(
"proViderAlias",
OrderByColumnSpec.Direction.DESCENDING
)
), 1
)
)
.setAggregatorSpecs(
Lists.newArrayList(
QueryRunnerTestHelper.rowsCount,
aggFactory
)
)
.setPostAggregatorSpecs(
Arrays.<PostAggregator>asList(
new QuantilePostAggregator("quantile", "apphisto", 0.5f)
)
)
.build();
List<Row> expectedResults = Arrays.asList(
GroupByQueryRunnerTestHelper.createExpectedRow(
"1970-01-01T00:00:00.000Z",
"provideralias", "upfront",
"rows", 186L,
"quantile", 880.9881f,
"apphisto",
new Histogram(
new float[]{
214.97299194335938f,
545.9906005859375f,
877.0081787109375f,
1208.0257568359375f,
1539.0433349609375f,
1870.06103515625f
},
new double[]{
0.0, 67.53287506103516, 72.22068786621094, 31.984678268432617, 14.261756896972656
}
)
)
);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "approx-histo");
}
}

View File

@ -52,18 +52,8 @@ import java.util.List;
import java.util.Map;
@RunWith(Parameterized.class)
public class ApproximateHistogramQueryTest
public class ApproximateHistogramTopNQueryTest
{
private final QueryRunner runner;
public ApproximateHistogramQueryTest(
QueryRunner runner
)
{
this.runner = runner;
}
@Parameterized.Parameters
public static Collection<?> constructorFeeder() throws IOException
{
@ -99,6 +89,15 @@ public class ApproximateHistogramQueryTest
return retVal;
}
private final QueryRunner runner;
public ApproximateHistogramTopNQueryTest(
QueryRunner runner
)
{
this.runner = runner;
}
@Test
public void testTopNWithApproximateHistogramAgg()
{

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -41,6 +41,7 @@ import com.metamx.common.logger.Logger;
import io.druid.common.utils.JodaUtils;
import io.druid.data.input.InputRow;
import io.druid.data.input.impl.StringInputRowParser;
import io.druid.granularity.QueryGranularity;
import io.druid.guice.GuiceInjectors;
import io.druid.guice.JsonConfigProvider;
import io.druid.guice.annotations.Self;
@ -172,6 +173,7 @@ public class HadoopDruidIndexerConfig
private volatile PathSpec pathSpec;
private volatile Map<DateTime,ShardSpecLookup> shardSpecLookups = Maps.newHashMap();
private volatile Map<ShardSpec, HadoopyShardSpec> hadoopShardSpecLookup = Maps.newHashMap();
private final QueryGranularity rollupGran;
@JsonCreator
public HadoopDruidIndexerConfig(
@ -203,6 +205,7 @@ public class HadoopDruidIndexerConfig
hadoopShardSpecLookup.put(hadoopyShardSpec.getActualSpec(), hadoopyShardSpec);
}
}
this.rollupGran = schema.getDataSchema().getGranularitySpec().getQueryGranularity();
}
@JsonProperty
@ -326,7 +329,7 @@ public class HadoopDruidIndexerConfig
return Optional.absent();
}
final ShardSpec actualSpec = shardSpecLookups.get(timeBucket.get().getStart()).getShardSpec(inputRow);
final ShardSpec actualSpec = shardSpecLookups.get(timeBucket.get().getStart()).getShardSpec(rollupGran.truncate(inputRow.getTimestampFromEpoch()), inputRow);
final HadoopyShardSpec hadoopyShardSpec = hadoopShardSpecLookup.get(actualSpec);
return Optional.of(

View File

@ -20,15 +20,34 @@
package io.druid.indexer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.api.client.util.Lists;
import com.google.common.base.Optional;
import com.google.common.base.Throwables;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.metamx.common.Granularity;
import io.druid.data.input.InputRow;
import io.druid.data.input.MapBasedInputRow;
import io.druid.data.input.impl.JSONDataSpec;
import io.druid.data.input.impl.TimestampSpec;
import io.druid.granularity.QueryGranularity;
import io.druid.indexer.rollup.DataRollupSpec;
import io.druid.jackson.DefaultObjectMapper;
import io.druid.query.aggregation.AggregatorFactory;
import io.druid.segment.indexing.DataSchema;
import io.druid.segment.indexing.granularity.UniformGranularitySpec;
import io.druid.timeline.partition.HashBasedNumberedShardSpec;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.joda.time.DateTime;
import org.joda.time.Interval;
import org.junit.Assert;
import org.junit.Test;
import java.util.Arrays;
import java.util.List;
/**
*/
public class HadoopDruidIndexerConfigTest
@ -125,4 +144,68 @@ public class HadoopDruidIndexerConfigTest
);
}
@Test
public void testHashedBucketSelection() {
List<HadoopyShardSpec> specs = Lists.newArrayList();
final int partitionCount = 10;
for (int i = 0; i < partitionCount; i++) {
specs.add(new HadoopyShardSpec(new HashBasedNumberedShardSpec(i, partitionCount, new DefaultObjectMapper()), i));
}
HadoopIngestionSpec spec = new HadoopIngestionSpec(
null, null, null,
"foo",
new TimestampSpec("timestamp", "auto"),
new JSONDataSpec(ImmutableList.of("foo"), null),
new UniformGranularitySpec(
Granularity.HOUR,
QueryGranularity.MINUTE,
ImmutableList.of(new Interval("2010-01-01/P1D")),
Granularity.HOUR
),
null,
null,
null,
null,
null,
false,
true,
ImmutableMap.of(new DateTime("2010-01-01T01:00:00"), specs),
false,
new DataRollupSpec(ImmutableList.<AggregatorFactory>of(), QueryGranularity.MINUTE),
null,
false,
ImmutableMap.of("foo", "bar"),
false,
null,
null,
null,
null,
null,
null
);
HadoopDruidIndexerConfig config = HadoopDruidIndexerConfig.fromSchema(spec);
final List<String> dims = Arrays.asList("diM1", "dIM2");
final ImmutableMap<String, Object> values = ImmutableMap.<String, Object>of(
"Dim1",
"1",
"DiM2",
"2",
"dim1",
"3",
"dim2",
"4"
);
final long timestamp = new DateTime("2010-01-01T01:00:01").getMillis();
final Bucket expectedBucket = config.getBucket(new MapBasedInputRow(timestamp, dims, values)).get();
final long nextBucketTimestamp = QueryGranularity.MINUTE.next(QueryGranularity.MINUTE.truncate(timestamp));
// check that all rows having the same set of dims and truncated timestamp hash to the same bucket
for (int i = 0; timestamp + i < nextBucketTimestamp; i++) {
Assert.assertEquals(
expectedBucket.partitionNum,
config.getBucket(new MapBasedInputRow(timestamp + i, dims, values)).get().partitionNum
);
}
}
}
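
A minimal sketch of the truncation behavior the test above relies on (timestamps are hypothetical, not taken from the patch): two row times inside one query-granularity bucket collapse to the same key, so the `rollupGran.truncate(...)` call added to `HadoopDruidIndexerConfig.getBucket` feeds identical values into the shard spec lookup for rows that will later be rolled up together.

```java
// Sketch only: shows why rows inside one MINUTE bucket resolve to the same partition key.
import io.druid.granularity.QueryGranularity;
import org.joda.time.DateTime;

public class TruncationSketch
{
  public static void main(String[] args)
  {
    final long t1 = new DateTime("2010-01-01T01:00:00.123").getMillis();
    final long t2 = new DateTime("2010-01-01T01:00:59.999").getMillis();

    // Both timestamps truncate to 2010-01-01T01:00:00.000, so a shard spec lookup
    // keyed on the truncated value treats them as the same row time.
    System.out.println(QueryGranularity.MINUTE.truncate(t1) == QueryGranularity.MINUTE.truncate(t2)); // true
  }
}
```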

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -407,14 +407,14 @@ public class IndexTask extends AbstractFixedIntervalTask
final int myRowFlushBoundary = rowFlushBoundary > 0
? rowFlushBoundary
: toolbox.getConfig().getDefaultRowFlushBoundary();
final QueryGranularity rollupGran = ingestionSchema.getDataSchema().getGranularitySpec().getQueryGranularity();
try {
plumber.startJob();
while (firehose.hasMore()) {
final InputRow inputRow = firehose.nextRow();
if (shouldIndex(shardSpec, interval, inputRow)) {
if (shouldIndex(shardSpec, interval, inputRow, rollupGran)) {
int numRows = plumber.add(inputRow);
if (numRows == -1) {
throw new ISE(
@ -469,13 +469,15 @@ public class IndexTask extends AbstractFixedIntervalTask
*
* @return true or false
*/
private boolean shouldIndex(
private static boolean shouldIndex(
final ShardSpec shardSpec,
final Interval interval,
final InputRow inputRow
final InputRow inputRow,
final QueryGranularity rollupGran
)
{
return interval.contains(inputRow.getTimestampFromEpoch()) && shardSpec.isInChunk(inputRow);
return interval.contains(inputRow.getTimestampFromEpoch())
&& shardSpec.isInChunk(rollupGran.truncate(inputRow.getTimestampFromEpoch()), inputRow);
}
public static class IndexIngestionSpec extends IngestionSpec<IndexIOConfig, IndexTuningConfig>

View File

@ -32,7 +32,7 @@ public class ImmutableZkWorker
{
private final Worker worker;
private final int currCapacityUsed;
private final Set<String> availabilityGroups;
private final ImmutableSet<String> availabilityGroups;
public ImmutableZkWorker(Worker worker, int currCapacityUsed, Set<String> availabilityGroups)
{

View File

@ -164,7 +164,7 @@ public class RemoteTaskRunner implements TaskRunner, TaskLogStreamer
@Override
public void childEvent(CuratorFramework client, final PathChildrenCacheEvent event) throws Exception
{
Worker worker;
final Worker worker;
switch (event.getType()) {
case CHILD_ADDED:
worker = jsonMapper.readValue(
@ -198,6 +198,14 @@ public class RemoteTaskRunner implements TaskRunner, TaskLogStreamer
}
);
break;
case CHILD_UPDATED:
worker = jsonMapper.readValue(
event.getData().getData(),
Worker.class
);
updateWorker(worker);
break;
case CHILD_REMOVED:
worker = jsonMapper.readValue(
event.getData().getData(),
@ -745,6 +753,24 @@ public class RemoteTaskRunner implements TaskRunner, TaskLogStreamer
}
}
/**
* We allow workers to change their own capacities and versions. They cannot change their own hosts or ips without
* dropping themselves and re-announcing.
*/
private void updateWorker(final Worker worker)
{
final ZkWorker zkWorker = zkWorkers.get(worker.getHost());
if (zkWorker != null) {
log.info("Worker[%s] updated its announcement from[%s] to[%s].", worker.getHost(), zkWorker.getWorker(), worker);
zkWorker.setWorker(worker);
} else {
log.warn(
"WTF, worker[%s] updated its announcement but we didn't have a ZkWorker for it. Ignoring.",
worker.getHost()
);
}
}
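
A short sketch of the update path described above (host, ip, capacity, and version values are hypothetical): a worker is effectively disabled by re-announcing itself with an empty version, which the `CHILD_UPDATED` handler routes through `updateWorker()`; `ZkWorker.setWorker()` accepts it because host and ip are unchanged.

```java
// Sketch only: constructing the "disabled" announcement a worker would publish.
import io.druid.indexing.worker.Worker;

public class WorkerUpdateSketch
{
  public static void main(String[] args)
  {
    final Worker announced = new Worker("localhost:8080", "127.0.0.1", 3, "2014-10-21T00:00:00.000Z");

    // Same host/ip, capacity kept, version blanked out: this passes the Preconditions
    // checks in setWorker() and makes isValidVersion(minVersion) return false.
    final Worker disabled = new Worker(
        announced.getHost(),
        announced.getIp(),
        announced.getCapacity(),
        ""
    );

    System.out.println(disabled.getVersion()); // ""
  }
}
```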
/**
* When an ephemeral worker node disappears from ZK, incomplete running tasks will be retried by
* the logic in the status listener. We still have to make sure there are no tasks assigned

View File

@ -22,11 +22,11 @@ package io.druid.indexing.overlord;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Function;
import com.google.common.base.Preconditions;
import com.google.common.base.Throwables;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import com.google.common.collect.Sets;
import io.druid.indexing.common.task.Task;
import io.druid.indexing.worker.TaskAnnouncement;
import io.druid.indexing.worker.Worker;
import org.apache.curator.framework.recipes.cache.ChildData;
@ -46,15 +46,15 @@ import java.util.concurrent.atomic.AtomicReference;
*/
public class ZkWorker implements Closeable
{
private final Worker worker;
private final PathChildrenCache statusCache;
private final Function<ChildData, TaskAnnouncement> cacheConverter;
private AtomicReference<Worker> worker;
private AtomicReference<DateTime> lastCompletedTaskTime = new AtomicReference<DateTime>(new DateTime());
public ZkWorker(Worker worker, PathChildrenCache statusCache, final ObjectMapper jsonMapper)
{
this.worker = worker;
this.worker = new AtomicReference<>(worker);
this.statusCache = statusCache;
this.cacheConverter = new Function<ChildData, TaskAnnouncement>()
{
@ -84,7 +84,7 @@ public class ZkWorker implements Closeable
@JsonProperty("worker")
public Worker getWorker()
{
return worker;
return worker.get();
}
@JsonProperty("runningTasks")
@ -137,30 +137,28 @@ public class ZkWorker implements Closeable
return getRunningTasks().containsKey(taskId);
}
public boolean isAtCapacity()
{
return getCurrCapacityUsed() >= worker.getCapacity();
}
public boolean isValidVersion(String minVersion)
{
return worker.getVersion().compareTo(minVersion) >= 0;
return worker.get().getVersion().compareTo(minVersion) >= 0;
}
public boolean canRunTask(Task task)
public void setWorker(Worker newWorker)
{
return (worker.getCapacity() - getCurrCapacityUsed() >= task.getTaskResource().getRequiredCapacity()
&& !getAvailabilityGroups().contains(task.getTaskResource().getAvailabilityGroup()));
final Worker oldWorker = worker.get();
Preconditions.checkArgument(newWorker.getHost().equals(oldWorker.getHost()), "Cannot change Worker host");
Preconditions.checkArgument(newWorker.getIp().equals(oldWorker.getIp()), "Cannot change Worker ip");
worker.set(newWorker);
}
public void setLastCompletedTaskTime(DateTime completedTaskTime)
{
lastCompletedTaskTime.getAndSet(completedTaskTime);
lastCompletedTaskTime.set(completedTaskTime);
}
public ImmutableZkWorker toImmutable()
{
return new ImmutableZkWorker(worker, getCurrCapacityUsed(), getAvailabilityGroups());
return new ImmutableZkWorker(worker.get(), getCurrCapacityUsed(), getAvailabilityGroups());
}
@Override

View File

@ -361,6 +361,29 @@ public class RemoteTaskRunnerTest
Assert.assertEquals(TaskStatus.Status.FAILED, status.getStatusCode());
}
@Test
public void testWorkerDisabled() throws Exception
{
doSetup();
final ListenableFuture<TaskStatus> result = remoteTaskRunner.run(task);
Assert.assertTrue(taskAnnounced(task.getId()));
mockWorkerRunningTask(task);
Assert.assertTrue(workerRunningTask(task.getId()));
// Disable while task running
disableWorker();
// Continue test
mockWorkerCompleteSuccessfulTask(task);
Assert.assertTrue(workerCompletedTask(result));
Assert.assertEquals(task.getId(), result.get().getId());
Assert.assertEquals(TaskStatus.Status.SUCCESS, result.get().getStatusCode());
// Confirm RTR thinks the worker is disabled.
Assert.assertEquals("", Iterables.getOnlyElement(remoteTaskRunner.getWorkers()).getWorker().getVersion());
}
private void doSetup() throws Exception
{
makeWorker();
@ -405,6 +428,14 @@ public class RemoteTaskRunnerTest
);
}
private void disableWorker() throws Exception
{
cf.setData().forPath(
announcementsPath,
jsonMapper.writeValueAsBytes(new Worker(worker.getHost(), worker.getIp(), worker.getCapacity(), ""))
);
}
private boolean taskAnnounced(final String taskId)
{
return pathExists(joiner.join(tasksPath, taskId));

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

pom.xml
View File

@ -23,7 +23,7 @@
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<packaging>pom</packaging>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
<name>druid</name>
<description>druid</description>
<scm>
@ -39,9 +39,9 @@
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<metamx.java-util.version>0.26.7</metamx.java-util.version>
<metamx.java-util.version>0.26.9</metamx.java-util.version>
<apache.curator.version>2.6.0</apache.curator.version>
<druid.api.version>0.2.10</druid.api.version>
<druid.api.version>0.2.14.1</druid.api.version>
</properties>
<modules>
@ -89,7 +89,7 @@
<dependency>
<groupId>com.metamx</groupId>
<artifactId>bytebuffer-collections</artifactId>
<version>0.0.2</version>
<version>0.0.4</version>
</dependency>
<dependency>
<groupId>com.metamx</groupId>
@ -194,7 +194,7 @@
<dependency>
<groupId>it.uniroma3.mat</groupId>
<artifactId>extendedset</artifactId>
<version>1.3.4</version>
<version>1.3.7</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
@ -256,6 +256,11 @@
<artifactId>jackson-jaxrs-json-provider</artifactId>
<version>2.2.3</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-smile-provider</artifactId>
<version>2.2.3</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-core-asl</artifactId>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -63,12 +63,12 @@ public class CardinalityAggregatorFactory implements AggregatorFactory
public CardinalityAggregatorFactory(
@JsonProperty("name") String name,
@JsonProperty("fieldNames") final List<String> fieldNames,
@JsonProperty("byRow") final Boolean byRow
@JsonProperty("byRow") final boolean byRow
)
{
this.name = name;
this.fieldNames = fieldNames;
this.byRow = byRow == null ? false : byRow;
this.byRow = byRow;
}
@Override
@ -203,6 +203,12 @@ public class CardinalityAggregatorFactory implements AggregatorFactory
return fieldNames;
}
@JsonProperty
public boolean isByRow()
{
return byRow;
}
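
A sketch mirroring the new testSerde added further down: exposing `byRow` through a `@JsonProperty` getter is what lets the flag survive a serialize/deserialize round trip (names such as "billy" are just example values).

```java
// Sketch only: round-trip the factory through Jackson and keep byRow intact.
CardinalityAggregatorFactory factory =
    new CardinalityAggregatorFactory("billy", ImmutableList.of("b", "a", "c"), true);
ObjectMapper mapper = new DefaultObjectMapper();

String json = mapper.writeValueAsString(factory);                    // now carries "byRow": true
AggregatorFactory roundTripped = mapper.readValue(json, AggregatorFactory.class);
// factory.equals(roundTripped) holds once byRow is exposed for serialization
```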
@Override
public byte[] getCacheKey()
{

View File

@ -200,9 +200,6 @@ public abstract class HyperLogLogCollector implements Comparable<HyperLogLogColl
/**
* Checks if the payload for the given ByteBuffer is sparse or not.
* The given buffer must be positioned at getPayloadBytePosition() prior to calling isSparse
*
* @param buffer
* @return
*/
private static boolean isSparse(ByteBuffer buffer)
{
@ -636,8 +633,6 @@ public abstract class HyperLogLogCollector implements Comparable<HyperLogLogColl
* @param position The position into the byte buffer, this position represents two "registers"
* @param offsetDiff The difference in offset between the byteToAdd and the current HyperLogLogCollector
* @param byteToAdd The byte to merge into the current HyperLogLogCollector
*
* @return
*/
private static int mergeAndStoreByteRegister(
final ByteBuffer storageBuffer,

View File

@ -50,6 +50,7 @@ import io.druid.query.groupby.orderby.NoopLimitSpec;
import io.druid.query.groupby.orderby.OrderByColumnSpec;
import io.druid.query.spec.LegacySegmentSpec;
import io.druid.query.spec.QuerySegmentSpec;
import org.joda.time.Interval;
import java.util.List;
import java.util.Map;
@ -344,7 +345,22 @@ public class GroupByQuery extends BaseQuery<Row>
return this;
}
public Builder setInterval(Object interval)
public Builder setInterval(QuerySegmentSpec interval)
{
return setQuerySegmentSpec(interval);
}
public Builder setInterval(List<Interval> intervals)
{
return setQuerySegmentSpec(new LegacySegmentSpec(intervals));
}
public Builder setInterval(Interval interval)
{
return setQuerySegmentSpec(new LegacySegmentSpec(interval));
}
public Builder setInterval(String interval)
{
return setQuerySegmentSpec(new LegacySegmentSpec(interval));
}
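
A hedged usage sketch of the new overloads (dimension and alias names are illustrative; the QueryRunnerTestHelper constants are the ones used by the tests elsewhere in this diff): callers can now pass an Interval, a List&lt;Interval&gt;, a String, or a QuerySegmentSpec directly instead of wrapping intervals in a LegacySegmentSpec themselves.

```java
// Sketch only: building a GroupByQuery with the Interval overload.
GroupByQuery query = new GroupByQuery.Builder()
    .setDataSource(QueryRunnerTestHelper.dataSource)
    .setGranularity(QueryRunnerTestHelper.allGran)
    .setDimensions(
        Arrays.<DimensionSpec>asList(new DefaultDimensionSpec("quality", "alias"))
    )
    .setInterval(new Interval("2011-04-01/2011-04-03"))        // Interval overload
    .setAggregatorSpecs(Lists.<AggregatorFactory>newArrayList(QueryRunnerTestHelper.rowsCount))
    .build();

// Equivalent, using the String overload:
//   .setInterval("2011-04-01/2011-04-03")
```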

View File

@ -49,8 +49,6 @@ public interface HavingSpec
* @param row A Row of data that may contain aggregated values
*
* @return true if the given row satisfies the having spec. False otherwise.
*
* @see Row
*/
public boolean eval(Row row);

View File

@ -173,7 +173,7 @@ public class DefaultLimitSpec implements LimitSpec
public String apply(Row input)
{
// Multi-value dimensions have all been flattened at this point.
final List<String> dimList = input.getDimension(dimension);
final List<String> dimList = input.getDimension(dimension.toLowerCase());
return dimList.isEmpty() ? null : dimList.get(0);
}
}

View File

@ -173,8 +173,6 @@ public class CompressedFloatsIndexedSupplier implements Supplier<IndexedFloats>
/**
* For testing. Do not depend on this unless you like things breaking.
*
* @return
*/
GenericIndexed<ResourceHolder<FloatBuffer>> getBaseFloatBuffers()
{

View File

@ -184,7 +184,6 @@ public class CompressedLongsIndexedSupplier implements Supplier<IndexedLongs>
/**
* For testing. Do not use this unless you like things breaking.
* @return
*/
GenericIndexed<ResourceHolder<LongBuffer>> getBaseLongBuffers()
{

View File

@ -673,10 +673,9 @@ public class IncrementalIndex implements Iterable<Row>
falseIdsReverse = biMap.inverse();
}
/**
* Returns the interned String value to allow fast comparisons using `==` instead of `.equals()`
* @see io.druid.segment.incremental.IncrementalIndexStorageAdapter.EntryHolderValueMatcherFactory#makeValueMatcher(String, String)
*/
// Returns the interned String value to allow fast comparisons using `==` instead of `.equals()`
// see io.druid.segment.incremental.IncrementalIndexStorageAdapter.EntryHolderValueMatcherFactory#makeValueMatcher(String, String)
public String get(String value)
{
return value == null ? null : poorMansInterning.get(value);

View File

@ -532,10 +532,8 @@ public class IncrementalIndexStorageAdapter implements StorageAdapter
}
for (String dimVal : dims[dimIndex]) {
/**
* using == here instead of .equals() to speed up lookups made possible by
* {@link io.druid.segment.incremental.IncrementalIndex.DimDim#poorMansInterning}
*/
// using == here instead of .equals() to speed up lookups made possible by
// io.druid.segment.incremental.IncrementalIndex.DimDim#poorMansInterning
if (id == dimVal) {
return true;
}
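
A small sketch of the interning idea referenced in both comment blocks above (the DimDim getter and this `id == dimVal` check), using plain JDK types rather than Druid's DimDim: once every dimension value is routed through one canonical map, equal strings come back as the same instance, so identity comparison is safe and fast.

```java
// Sketch only: canonicalizing strings so == can replace .equals() in hot loops.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class InterningSketch
{
  private static final ConcurrentMap<String, String> CANONICAL = new ConcurrentHashMap<>();

  // The first thread to see a value wins; everyone afterwards gets that same instance.
  public static String canonicalize(String value)
  {
    final String existing = CANONICAL.putIfAbsent(value, value);
    return existing != null ? existing : value;
  }

  public static void main(String[] args)
  {
    final String a = canonicalize(new String("spot"));
    final String b = canonicalize(new String("spot"));
    System.out.println(a == b); // true: same canonical instance
  }
}
```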

View File

@ -19,16 +19,20 @@
package io.druid.query.aggregation.cardinality;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Function;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterables;
import com.google.common.collect.Iterators;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import io.druid.jackson.DefaultObjectMapper;
import io.druid.query.aggregation.Aggregator;
import io.druid.query.aggregation.AggregatorFactory;
import io.druid.query.aggregation.BufferAggregator;
import io.druid.segment.DimensionSelector;
import io.druid.segment.data.IndexedInts;
import junit.framework.Assert;
import org.junit.Assert;
import org.junit.Test;
import javax.annotation.Nullable;
@ -378,4 +382,15 @@ public class CardinalityAggregatorTest
0.05
);
}
@Test
public void testSerde() throws Exception
{
CardinalityAggregatorFactory factory = new CardinalityAggregatorFactory("billy", ImmutableList.of("b", "a", "c"), true);
ObjectMapper objectMapper = new DefaultObjectMapper();
Assert.assertEquals(
factory,
objectMapper.readValue(objectMapper.writeValueAsString(factory), AggregatorFactory.class)
);
}
}

View File

@ -21,27 +21,22 @@ package io.druid.query.groupby;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Function;
import com.google.common.base.Preconditions;
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterables;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import com.google.common.collect.Ordering;
import com.metamx.common.guava.Sequence;
import com.metamx.common.guava.Sequences;
import io.druid.collections.StupidPool;
import io.druid.data.input.MapBasedRow;
import io.druid.data.input.Row;
import io.druid.granularity.PeriodGranularity;
import io.druid.granularity.QueryGranularity;
import io.druid.jackson.DefaultObjectMapper;
import io.druid.query.FinalizeResultsQueryRunner;
import io.druid.query.Query;
import io.druid.query.QueryRunner;
import io.druid.query.QueryRunnerTestHelper;
import io.druid.query.QueryToolChest;
import io.druid.query.aggregation.AggregatorFactory;
import io.druid.query.aggregation.DoubleSumAggregatorFactory;
import io.druid.query.aggregation.JavaScriptAggregatorFactory;
@ -84,7 +79,6 @@ import java.util.Arrays;
import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
@RunWith(Parameterized.class)
public class GroupByQueryRunnerTest
@ -195,28 +189,28 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 1L, "idx", 135L),
createExpectedRow("2011-04-01", "alias", "business", "rows", 1L, "idx", 118L),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 1L, "idx", 158L),
createExpectedRow("2011-04-01", "alias", "health", "rows", 1L, "idx", 120L),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 3L, "idx", 2870L),
createExpectedRow("2011-04-01", "alias", "news", "rows", 1L, "idx", 121L),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 3L, "idx", 2900L),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 1L, "idx", 78L),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 1L, "idx", 119L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 1L, "idx", 135L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 1L, "idx", 118L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 1L, "idx", 158L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 1L, "idx", 120L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 3L, "idx", 2870L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 1L, "idx", 121L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 3L, "idx", 2900L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 1L, "idx", 78L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 1L, "idx", 119L),
createExpectedRow("2011-04-02", "alias", "automotive", "rows", 1L, "idx", 147L),
createExpectedRow("2011-04-02", "alias", "business", "rows", 1L, "idx", 112L),
createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 1L, "idx", 166L),
createExpectedRow("2011-04-02", "alias", "health", "rows", 1L, "idx", 113L),
createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 3L, "idx", 2447L),
createExpectedRow("2011-04-02", "alias", "news", "rows", 1L, "idx", 114L),
createExpectedRow("2011-04-02", "alias", "premium", "rows", 3L, "idx", 2505L),
createExpectedRow("2011-04-02", "alias", "technology", "rows", 1L, "idx", 97L),
createExpectedRow("2011-04-02", "alias", "travel", "rows", 1L, "idx", 126L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "automotive", "rows", 1L, "idx", 147L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "business", "rows", 1L, "idx", 112L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 1L, "idx", 166L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "health", "rows", 1L, "idx", 113L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 3L, "idx", 2447L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "news", "rows", 1L, "idx", 114L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "premium", "rows", 3L, "idx", 2505L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "technology", "rows", 1L, "idx", 97L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "travel", "rows", 1L, "idx", 126L)
);
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -237,7 +231,7 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow(
GroupByQueryRunnerTestHelper.createExpectedRow(
"2011-04-01",
"rows",
26L,
@ -246,7 +240,7 @@ public class GroupByQueryRunnerTest
)
);
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -267,7 +261,7 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow(
GroupByQueryRunnerTestHelper.createExpectedRow(
"2011-04-01",
"rows",
26L,
@ -276,7 +270,7 @@ public class GroupByQueryRunnerTest
)
);
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -306,26 +300,26 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "a", "rows", 1L, "idx", 135L),
createExpectedRow("2011-04-01", "alias", "b", "rows", 1L, "idx", 118L),
createExpectedRow("2011-04-01", "alias", "e", "rows", 1L, "idx", 158L),
createExpectedRow("2011-04-01", "alias", "h", "rows", 1L, "idx", 120L),
createExpectedRow("2011-04-01", "alias", "m", "rows", 3L, "idx", 2870L),
createExpectedRow("2011-04-01", "alias", "n", "rows", 1L, "idx", 121L),
createExpectedRow("2011-04-01", "alias", "p", "rows", 3L, "idx", 2900L),
createExpectedRow("2011-04-01", "alias", "t", "rows", 2L, "idx", 197L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "a", "rows", 1L, "idx", 135L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "b", "rows", 1L, "idx", 118L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "e", "rows", 1L, "idx", 158L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "h", "rows", 1L, "idx", 120L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "m", "rows", 3L, "idx", 2870L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "n", "rows", 1L, "idx", 121L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "p", "rows", 3L, "idx", 2900L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "t", "rows", 2L, "idx", 197L),
createExpectedRow("2011-04-02", "alias", "a", "rows", 1L, "idx", 147L),
createExpectedRow("2011-04-02", "alias", "b", "rows", 1L, "idx", 112L),
createExpectedRow("2011-04-02", "alias", "e", "rows", 1L, "idx", 166L),
createExpectedRow("2011-04-02", "alias", "h", "rows", 1L, "idx", 113L),
createExpectedRow("2011-04-02", "alias", "m", "rows", 3L, "idx", 2447L),
createExpectedRow("2011-04-02", "alias", "n", "rows", 1L, "idx", 114L),
createExpectedRow("2011-04-02", "alias", "p", "rows", 3L, "idx", 2505L),
createExpectedRow("2011-04-02", "alias", "t", "rows", 2L, "idx", 223L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "a", "rows", 1L, "idx", 147L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "b", "rows", 1L, "idx", 112L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "e", "rows", 1L, "idx", 166L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "h", "rows", 1L, "idx", 113L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "m", "rows", 3L, "idx", 2447L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "n", "rows", 1L, "idx", 114L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "p", "rows", 3L, "idx", 2505L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "t", "rows", 2L, "idx", 223L)
);
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -364,28 +358,28 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "automotive", "rows", 1L, "idx", 135L),
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "business", "rows", 1L, "idx", 118L),
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "entertainment", "rows", 1L, "idx", 158L),
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "health", "rows", 1L, "idx", 120L),
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "mezzanine", "rows", 3L, "idx", 2870L),
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "news", "rows", 1L, "idx", 121L),
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "premium", "rows", 3L, "idx", 2900L),
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "technology", "rows", 1L, "idx", 78L),
createExpectedRow(new DateTime("2011-03-31", tz), "alias", "travel", "rows", 1L, "idx", 119L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "automotive", "rows", 1L, "idx", 135L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "business", "rows", 1L, "idx", 118L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "entertainment", "rows", 1L, "idx", 158L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "health", "rows", 1L, "idx", 120L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "mezzanine", "rows", 3L, "idx", 2870L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "news", "rows", 1L, "idx", 121L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "premium", "rows", 3L, "idx", 2900L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "technology", "rows", 1L, "idx", 78L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-03-31", tz), "alias", "travel", "rows", 1L, "idx", 119L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "automotive", "rows", 1L, "idx", 147L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "business", "rows", 1L, "idx", 112L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "entertainment", "rows", 1L, "idx", 166L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "health", "rows", 1L, "idx", 113L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "mezzanine", "rows", 3L, "idx", 2447L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "news", "rows", 1L, "idx", 114L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "premium", "rows", 3L, "idx", 2505L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "technology", "rows", 1L, "idx", 97L),
createExpectedRow(new DateTime("2011-04-01", tz), "alias", "travel", "rows", 1L, "idx", 126L)
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "automotive", "rows", 1L, "idx", 147L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "business", "rows", 1L, "idx", 112L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "entertainment", "rows", 1L, "idx", 166L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "health", "rows", 1L, "idx", 113L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "mezzanine", "rows", 3L, "idx", 2447L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "news", "rows", 1L, "idx", 114L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "premium", "rows", 3L, "idx", 2505L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "technology", "rows", 1L, "idx", 97L),
GroupByQueryRunnerTestHelper.createExpectedRow(new DateTime("2011-04-01", tz), "alias", "travel", "rows", 1L, "idx", 126L)
);
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -427,30 +421,30 @@ public class GroupByQueryRunnerTest
);
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L),
createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L)
);
TestHelper.assertExpectedObjects(expectedResults, runner.run(fullQuery), "direct");
TestHelper.assertExpectedObjects(expectedResults, mergedRunner.run(fullQuery), "merged");
List<Row> allGranExpectedResults = Arrays.asList(
createExpectedRow("2011-04-02", "alias", "automotive", "rows", 2L, "idx", 269L),
createExpectedRow("2011-04-02", "alias", "business", "rows", 2L, "idx", 217L),
createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 2L, "idx", 319L),
createExpectedRow("2011-04-02", "alias", "health", "rows", 2L, "idx", 216L),
createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
createExpectedRow("2011-04-02", "alias", "news", "rows", 2L, "idx", 221L),
createExpectedRow("2011-04-02", "alias", "premium", "rows", 6L, "idx", 4416L),
createExpectedRow("2011-04-02", "alias", "technology", "rows", 2L, "idx", 177L),
createExpectedRow("2011-04-02", "alias", "travel", "rows", 2L, "idx", 243L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "automotive", "rows", 2L, "idx", 269L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "business", "rows", 2L, "idx", 217L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 2L, "idx", 319L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "health", "rows", 2L, "idx", 216L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "news", "rows", 2L, "idx", 221L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "premium", "rows", 6L, "idx", 4416L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "technology", "rows", 2L, "idx", 177L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "travel", "rows", 2L, "idx", 243L)
);
TestHelper.assertExpectedObjects(allGranExpectedResults, runner.run(allGranQuery), "direct");
@ -484,15 +478,15 @@ public class GroupByQueryRunnerTest
final GroupByQuery fullQuery = builder.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L),
createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L)
);
QueryRunner<Row> mergeRunner = factory.getToolchest().mergeResults(runner);
@ -559,15 +553,15 @@ public class GroupByQueryRunnerTest
};
List<Row> allResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L),
createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L)
);
List<List<Row>> expectedResults = Lists.newArrayList(
@ -642,15 +636,15 @@ public class GroupByQueryRunnerTest
final GroupByQuery query = builder.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L)
);
QueryRunner<Row> mergeRunner = factory.getToolchest().mergeResults(runner);
@ -682,15 +676,15 @@ public class GroupByQueryRunnerTest
final GroupByQuery query = builder.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 177L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 221L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 269L)
);
QueryRunner<Row> mergeRunner = factory.getToolchest().mergeResults(runner);
@ -721,15 +715,15 @@ public class GroupByQueryRunnerTest
final GroupByQuery query = builder.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4423.6533203125D),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4418.61865234375D),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319.94403076171875D),
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 270.3977966308594D),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243.65843200683594D),
createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 222.20980834960938D),
createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 218.7224884033203D),
createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216.97836303710938D),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 178.24917602539062D)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4423.6533203125D),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4418.61865234375D),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 319.94403076171875D),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 270.3977966308594D),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 243.65843200683594D),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 222.20980834960938D),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 218.7224884033203D),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 216.97836303710938D),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 178.24917602539062D)
);
QueryRunner<Row> mergeRunner = factory.getToolchest().mergeResults(runner);
@ -739,13 +733,55 @@ public class GroupByQueryRunnerTest
);
}
@Test
public void testGroupByWithMixedCasingOrdering()
{
GroupByQuery query = new GroupByQuery.Builder()
.setDataSource(QueryRunnerTestHelper.dataSource)
.setGranularity(QueryRunnerTestHelper.allGran)
.setDimensions(
Arrays.<DimensionSpec>asList(
new DefaultDimensionSpec(
QueryRunnerTestHelper.providerDimension,
"ProviderAlias"
)
)
)
.setInterval(QueryRunnerTestHelper.fullOnInterval)
.setLimitSpec(
new DefaultLimitSpec(
Lists.newArrayList(
new OrderByColumnSpec(
"providerALIAS",
OrderByColumnSpec.Direction.DESCENDING
)
), 3
)
)
.setAggregatorSpecs(
Lists.<AggregatorFactory>newArrayList(
QueryRunnerTestHelper.rowsCount
)
)
.build();
List<Row> expectedResults = Arrays.asList(
GroupByQueryRunnerTestHelper.createExpectedRow("1970-01-01T00:00:00.000Z", "provideralias", "upfront", "rows", 186L),
GroupByQueryRunnerTestHelper.createExpectedRow("1970-01-01T00:00:00.000Z", "provideralias", "total_market", "rows", 186L),
GroupByQueryRunnerTestHelper.createExpectedRow("1970-01-01T00:00:00.000Z", "provideralias", "spot", "rows", 837L)
);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "order-limit");
}
@Test
public void testHavingSpec()
{
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 217L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 4420L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 4416L)
);
GroupByQuery.Builder builder = GroupByQuery
@ -811,7 +847,7 @@ public class GroupByQueryRunnerTest
final GroupByQuery query = builder.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "quality", "automotive", "rows", 2L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "automotive", "rows", 2L)
);
final GroupByQueryEngine engine = new GroupByQueryEngine(
@ -855,15 +891,15 @@ public class GroupByQueryRunnerTest
final GroupByQuery query = builder.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "quality", "automotive", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "business", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "entertainment", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "health", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "mezzanine", "rows", 6L),
createExpectedRow("2011-04-01", "quality", "news", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "premium", "rows", 6L),
createExpectedRow("2011-04-01", "quality", "technology", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "travel", "rows", 2L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "automotive", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "business", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "entertainment", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "health", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "mezzanine", "rows", 6L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "news", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "premium", "rows", 6L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "technology", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "travel", "rows", 2L)
);
TestHelper.assertExpectedObjects(expectedResults, runner.run(query), "normal");
@ -908,15 +944,15 @@ public class GroupByQueryRunnerTest
final GroupByQuery query = builder.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "quality", "automotive", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "business", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "entertainment", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "health", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "mezzanine", "rows", 6L),
createExpectedRow("2011-04-01", "quality", "news", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "premium", "rows", 6L),
createExpectedRow("2011-04-01", "quality", "technology", "rows", 2L),
createExpectedRow("2011-04-01", "quality", "travel", "rows", 2L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "automotive", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "business", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "entertainment", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "health", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "mezzanine", "rows", 6L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "news", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "premium", "rows", 6L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "technology", "rows", 2L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "quality", "travel", "rows", 2L)
);
TestHelper.assertExpectedObjects(expectedResults, runner.run(query), "normal");
@ -976,29 +1012,29 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 1L, "idx", 135L),
createExpectedRow("2011-04-01", "alias", "business", "rows", 1L, "idx", 118L),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 1L, "idx", 158L),
createExpectedRow("2011-04-01", "alias", "health", "rows", 1L, "idx", 120L),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 3L, "idx", 2870L),
createExpectedRow("2011-04-01", "alias", "news", "rows", 1L, "idx", 121L),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 3L, "idx", 2900L),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 1L, "idx", 78L),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 1L, "idx", 119L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 1L, "idx", 135L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 1L, "idx", 118L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 1L, "idx", 158L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 1L, "idx", 120L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 3L, "idx", 2870L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 1L, "idx", 121L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 3L, "idx", 2900L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 1L, "idx", 78L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 1L, "idx", 119L),
createExpectedRow("2011-04-02", "alias", "automotive", "rows", 1L, "idx", 147L),
createExpectedRow("2011-04-02", "alias", "business", "rows", 1L, "idx", 112L),
createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 1L, "idx", 166L),
createExpectedRow("2011-04-02", "alias", "health", "rows", 1L, "idx", 113L),
createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 3L, "idx", 2447L),
createExpectedRow("2011-04-02", "alias", "news", "rows", 1L, "idx", 114L),
createExpectedRow("2011-04-02", "alias", "premium", "rows", 3L, "idx", 2505L),
createExpectedRow("2011-04-02", "alias", "technology", "rows", 1L, "idx", 97L),
createExpectedRow("2011-04-02", "alias", "travel", "rows", 1L, "idx", 126L)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "automotive", "rows", 1L, "idx", 147L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "business", "rows", 1L, "idx", 112L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 1L, "idx", 166L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "health", "rows", 1L, "idx", 113L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 3L, "idx", 2447L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "news", "rows", 1L, "idx", 114L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "premium", "rows", 3L, "idx", 2505L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "technology", "rows", 1L, "idx", 97L),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "travel", "rows", 1L, "idx", 126L)
);
// Subqueries are handled by the ToolChest
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -1032,11 +1068,11 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "idx", 2900.0),
createExpectedRow("2011-04-02", "idx", 2505.0)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "idx", 2900.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "idx", 2505.0)
);
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -1070,10 +1106,10 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-02", "idx", 2505.0)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "idx", 2505.0)
);
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -1106,7 +1142,7 @@ public class GroupByQueryRunnerTest
.setGranularity(QueryRunnerTestHelper.dayGran)
.build();
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
Assert.assertFalse(results.iterator().hasNext());
}
@ -1165,29 +1201,29 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 1L, "idx", 11135.0),
createExpectedRow("2011-04-01", "alias", "business", "rows", 1L, "idx", 11118.0),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 1L, "idx", 11158.0),
createExpectedRow("2011-04-01", "alias", "health", "rows", 1L, "idx", 11120.0),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 3L, "idx", 13870.0),
createExpectedRow("2011-04-01", "alias", "news", "rows", 1L, "idx", 11121.0),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 3L, "idx", 13900.0),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 1L, "idx", 11078.0),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 1L, "idx", 11119.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 1L, "idx", 11135.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 1L, "idx", 11118.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 1L, "idx", 11158.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 1L, "idx", 11120.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 3L, "idx", 13870.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 1L, "idx", 11121.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 3L, "idx", 13900.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 1L, "idx", 11078.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 1L, "idx", 11119.0),
createExpectedRow("2011-04-02", "alias", "automotive", "rows", 1L, "idx", 11147.0),
createExpectedRow("2011-04-02", "alias", "business", "rows", 1L, "idx", 11112.0),
createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 1L, "idx", 11166.0),
createExpectedRow("2011-04-02", "alias", "health", "rows", 1L, "idx", 11113.0),
createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 3L, "idx", 13447.0),
createExpectedRow("2011-04-02", "alias", "news", "rows", 1L, "idx", 11114.0),
createExpectedRow("2011-04-02", "alias", "premium", "rows", 3L, "idx", 13505.0),
createExpectedRow("2011-04-02", "alias", "technology", "rows", 1L, "idx", 11097.0),
createExpectedRow("2011-04-02", "alias", "travel", "rows", 1L, "idx", 11126.0)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "automotive", "rows", 1L, "idx", 11147.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "business", "rows", 1L, "idx", 11112.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 1L, "idx", 11166.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "health", "rows", 1L, "idx", 11113.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 3L, "idx", 13447.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "news", "rows", 1L, "idx", 11114.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "premium", "rows", 3L, "idx", 13505.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "technology", "rows", 1L, "idx", 11097.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "travel", "rows", 1L, "idx", 11126.0)
);
// Subqueries are handled by the ToolChest
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -1265,27 +1301,27 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 1L, "idx", 11135.0),
createExpectedRow("2011-04-01", "alias", "business", "rows", 1L, "idx", 11118.0),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 1L, "idx", 11158.0),
createExpectedRow("2011-04-01", "alias", "health", "rows", 1L, "idx", 11120.0),
createExpectedRow("2011-04-01", "alias", "news", "rows", 1L, "idx", 11121.0),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 1L, "idx", 11078.0),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 1L, "idx", 11119.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 1L, "idx", 11135.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 1L, "idx", 11118.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 1L, "idx", 11158.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 1L, "idx", 11120.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 1L, "idx", 11121.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 1L, "idx", 11078.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 1L, "idx", 11119.0),
createExpectedRow("2011-04-02", "alias", "automotive", "rows", 1L, "idx", 11147.0),
createExpectedRow("2011-04-02", "alias", "business", "rows", 1L, "idx", 11112.0),
createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 1L, "idx", 11166.0),
createExpectedRow("2011-04-02", "alias", "health", "rows", 1L, "idx", 11113.0),
createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 3L, "idx", 13447.0),
createExpectedRow("2011-04-02", "alias", "news", "rows", 1L, "idx", 11114.0),
createExpectedRow("2011-04-02", "alias", "premium", "rows", 3L, "idx", 13505.0),
createExpectedRow("2011-04-02", "alias", "technology", "rows", 1L, "idx", 11097.0),
createExpectedRow("2011-04-02", "alias", "travel", "rows", 1L, "idx", 11126.0)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "automotive", "rows", 1L, "idx", 11147.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "business", "rows", 1L, "idx", 11112.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "entertainment", "rows", 1L, "idx", 11166.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "health", "rows", 1L, "idx", 11113.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "mezzanine", "rows", 3L, "idx", 13447.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "news", "rows", 1L, "idx", 11114.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "premium", "rows", 3L, "idx", 13505.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "technology", "rows", 1L, "idx", 11097.0),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-02", "alias", "travel", "rows", 1L, "idx", 11126.0)
);
// Subqueries are handled by the ToolChest
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -1382,7 +1418,7 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow(
GroupByQueryRunnerTestHelper.createExpectedRow(
"2011-04-01",
"alias",
"travel",
@ -1393,7 +1429,7 @@ public class GroupByQueryRunnerTest
"js_outer_agg",
123.92274475097656
),
createExpectedRow(
GroupByQueryRunnerTestHelper.createExpectedRow(
"2011-04-01",
"alias",
"technology",
@ -1404,7 +1440,7 @@ public class GroupByQueryRunnerTest
"js_outer_agg",
82.62254333496094
),
createExpectedRow(
GroupByQueryRunnerTestHelper.createExpectedRow(
"2011-04-01",
"alias",
"news",
@ -1415,7 +1451,7 @@ public class GroupByQueryRunnerTest
"js_outer_agg",
125.58358001708984
),
createExpectedRow(
GroupByQueryRunnerTestHelper.createExpectedRow(
"2011-04-01",
"alias",
"health",
@ -1426,7 +1462,7 @@ public class GroupByQueryRunnerTest
"js_outer_agg",
124.13470458984375
),
createExpectedRow(
GroupByQueryRunnerTestHelper.createExpectedRow(
"2011-04-01",
"alias",
"entertainment",
@ -1440,7 +1476,7 @@ public class GroupByQueryRunnerTest
);
// Subqueries are handled by the ToolChest
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
@ -1478,50 +1514,19 @@ public class GroupByQueryRunnerTest
.build();
List<Row> expectedResults = Arrays.asList(
createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 282L, "uniq", 1.0002442201269182),
createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 230L, "uniq", 1.0002442201269182),
createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 324L, "uniq", 1.0002442201269182),
createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 233L, "uniq", 1.0002442201269182),
createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 5317L, "uniq", 1.0002442201269182),
createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 235L, "uniq", 1.0002442201269182),
createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 5405L, "uniq", 1.0002442201269182),
createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 175L, "uniq", 1.0002442201269182),
createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 245L, "uniq", 1.0002442201269182)
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "automotive", "rows", 2L, "idx", 282L, "uniq", 1.0002442201269182),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "business", "rows", 2L, "idx", 230L, "uniq", 1.0002442201269182),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "entertainment", "rows", 2L, "idx", 324L, "uniq", 1.0002442201269182),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "health", "rows", 2L, "idx", 233L, "uniq", 1.0002442201269182),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "mezzanine", "rows", 6L, "idx", 5317L, "uniq", 1.0002442201269182),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "news", "rows", 2L, "idx", 235L, "uniq", 1.0002442201269182),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "premium", "rows", 6L, "idx", 5405L, "uniq", 1.0002442201269182),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "technology", "rows", 2L, "idx", 175L, "uniq", 1.0002442201269182),
GroupByQueryRunnerTestHelper.createExpectedRow("2011-04-01", "alias", "travel", "rows", 2L, "idx", 245L, "uniq", 1.0002442201269182)
);
// Subqueries are handled by the ToolChest
Iterable<Row> results = runQuery(query);
Iterable<Row> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
TestHelper.assertExpectedObjects(expectedResults, results, "");
}
private Iterable<Row> runQuery(GroupByQuery query)
{
QueryToolChest toolChest = factory.getToolchest();
QueryRunner theRunner = new FinalizeResultsQueryRunner<>(
toolChest.mergeResults(toolChest.preMergeQueryDecoration(runner)),
toolChest
);
Sequence<Row> queryResult = theRunner.run(query);
return Sequences.toList(queryResult, Lists.<Row>newArrayList());
}
private Row createExpectedRow(final String timestamp, Object... vals)
{
return createExpectedRow(new DateTime(timestamp), vals);
}
private Row createExpectedRow(final DateTime timestamp, Object... vals)
{
Preconditions.checkArgument(vals.length % 2 == 0);
Map<String, Object> theVals = Maps.newHashMap();
for (int i = 0; i < vals.length; i += 2) {
theVals.put(vals[i].toString(), vals[i + 1]);
}
DateTime ts = new DateTime(timestamp);
return new MapBasedRow(ts, theVals);
}
}

View File

@ -0,0 +1,72 @@
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.query.groupby;
import com.google.common.base.Preconditions;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import com.metamx.common.guava.Sequence;
import com.metamx.common.guava.Sequences;
import io.druid.data.input.MapBasedRow;
import io.druid.data.input.Row;
import io.druid.query.FinalizeResultsQueryRunner;
import io.druid.query.QueryRunner;
import io.druid.query.QueryRunnerFactory;
import io.druid.query.QueryToolChest;
import org.joda.time.DateTime;
import java.util.Map;
/**
*/
public class GroupByQueryRunnerTestHelper
{
public static Iterable<Row> runQuery(QueryRunnerFactory factory, QueryRunner runner, GroupByQuery query)
{
QueryToolChest toolChest = factory.getToolchest();
QueryRunner theRunner = new FinalizeResultsQueryRunner<>(
toolChest.mergeResults(toolChest.preMergeQueryDecoration(runner)),
toolChest
);
Sequence<Row> queryResult = theRunner.run(query);
return Sequences.toList(queryResult, Lists.<Row>newArrayList());
}
public static Row createExpectedRow(final String timestamp, Object... vals)
{
return createExpectedRow(new DateTime(timestamp), vals);
}
public static Row createExpectedRow(final DateTime timestamp, Object... vals)
{
Preconditions.checkArgument(vals.length % 2 == 0);
Map<String, Object> theVals = Maps.newHashMap();
for (int i = 0; i < vals.length; i += 2) {
theVals.put(vals[i].toString(), vals[i + 1]);
}
DateTime ts = new DateTime(timestamp);
return new MapBasedRow(ts, theVals);
}
}

View File

@ -1,5 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.druid.extensions</groupId>
<artifactId>druid-rabbitmq</artifactId>
@ -9,7 +10,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>
@ -39,5 +40,11 @@
<artifactId>commons-cli</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.druid</groupId>
<artifactId>druid-processing</artifactId>
<version>${project.parent.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>

View File

@ -19,12 +19,12 @@
package io.druid.firehose.rabbitmq;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.common.collect.Maps;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.LongString;
import java.net.URISyntaxException;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
/**
@ -33,140 +33,229 @@ import java.util.Map;
*/
public class JacksonifiedConnectionFactory extends ConnectionFactory
{
public static JacksonifiedConnectionFactory makeDefaultConnectionFactory() throws Exception
{
return new JacksonifiedConnectionFactory(null, 0, null, null, null, null, 0, 0, 0, 0, null);
}
private static Map<String, Object> getSerializableClientProperties(final Map<String, Object> clientProperties)
{
return Maps.transformEntries(
clientProperties,
new Maps.EntryTransformer<String, Object, Object>()
{
@Override
public Object transformEntry(String key, Object value)
{
if (value instanceof LongString) {
return value.toString();
}
return value;
}
}
);
}
private final String host;
private final int port;
private final String username;
private final String password;
private final String virtualHost;
private final String uri;
private final int requestedChannelMax;
private final int requestedFrameMax;
private final int requestedHeartbeat;
private final int connectionTimeout;
private final Map<String, Object> clientProperties;
@JsonCreator
public JacksonifiedConnectionFactory(
@JsonProperty("host") String host,
@JsonProperty("port") int port,
@JsonProperty("username") String username,
@JsonProperty("password") String password,
@JsonProperty("virtualHost") String virtualHost,
@JsonProperty("uri") String uri,
@JsonProperty("requestedChannelMax") int requestedChannelMax,
@JsonProperty("requestedFrameMax") int requestedFrameMax,
@JsonProperty("requestedHeartbeat") int requestedHeartbeat,
@JsonProperty("connectionTimeout") int connectionTimeout,
@JsonProperty("clientProperties") Map<String, Object> clientProperties
) throws Exception
{
super();
this.host = host == null ? super.getHost() : host;
this.port = port == 0 ? super.getPort() : port;
this.username = username == null ? super.getUsername() : username;
this.password = password == null ? super.getPassword() : password;
this.virtualHost = virtualHost == null ? super.getVirtualHost() : virtualHost;
this.uri = uri;
this.requestedChannelMax = requestedChannelMax == 0 ? super.getRequestedChannelMax() : requestedChannelMax;
this.requestedFrameMax = requestedFrameMax == 0 ? super.getRequestedFrameMax() : requestedFrameMax;
this.requestedHeartbeat = requestedHeartbeat == 0 ? super.getRequestedHeartbeat() : requestedHeartbeat;
this.connectionTimeout = connectionTimeout == 0 ? super.getConnectionTimeout() : connectionTimeout;
this.clientProperties = clientProperties == null ? super.getClientProperties() : clientProperties;
super.setHost(this.host);
super.setPort(this.port);
super.setUsername(this.username);
super.setPassword(this.password);
super.setVirtualHost(this.virtualHost);
if (this.uri != null) {
super.setUri(this.uri);
}
super.setRequestedChannelMax(this.requestedChannelMax);
super.setRequestedFrameMax(this.requestedFrameMax);
super.setRequestedHeartbeat(this.requestedHeartbeat);
super.setConnectionTimeout(this.connectionTimeout);
super.setClientProperties(this.clientProperties);
}
@Override
@JsonProperty
public String getHost()
{
return super.getHost();
}
@Override
public void setHost(String host)
{
super.setHost(host);
return host;
}
@Override
@JsonProperty
public int getPort()
{
return super.getPort();
return port;
}
@Override
public void setPort(int port)
{
super.setPort(port);
}
@Override
@JsonProperty
public String getUsername()
{
return super.getUsername();
}
@Override
public void setUsername(String username)
{
super.setUsername(username);
return username;
}
@Override
@JsonProperty
public String getPassword()
{
return super.getPassword();
}
@Override
public void setPassword(String password)
{
super.setPassword(password);
return password;
}
@Override
@JsonProperty
public String getVirtualHost()
{
return super.getVirtualHost();
return virtualHost;
}
@Override
public void setVirtualHost(String virtualHost)
{
super.setVirtualHost(virtualHost);
}
@Override
@JsonProperty
public void setUri(String uriString) throws URISyntaxException, NoSuchAlgorithmException, KeyManagementException
public String getUri()
{
super.setUri(uriString);
return uri;
}
@Override
@JsonProperty
public int getRequestedChannelMax()
{
return super.getRequestedChannelMax();
}
@Override
public void setRequestedChannelMax(int requestedChannelMax)
{
super.setRequestedChannelMax(requestedChannelMax);
return requestedChannelMax;
}
@Override
@JsonProperty
public int getRequestedFrameMax()
{
return super.getRequestedFrameMax();
}
@Override
public void setRequestedFrameMax(int requestedFrameMax)
{
super.setRequestedFrameMax(requestedFrameMax);
return requestedFrameMax;
}
@Override
@JsonProperty
public int getRequestedHeartbeat()
{
return super.getRequestedHeartbeat();
}
@Override
public void setConnectionTimeout(int connectionTimeout)
{
super.setConnectionTimeout(connectionTimeout);
return requestedHeartbeat;
}
@Override
@JsonProperty
public int getConnectionTimeout()
{
return super.getConnectionTimeout();
return connectionTimeout;
}
@JsonProperty("clientProperties")
public Map<String, Object> getSerializableClientProperties()
{
return getSerializableClientProperties(clientProperties);
}
@Override
public void setRequestedHeartbeat(int requestedHeartbeat)
public boolean equals(Object o)
{
super.setRequestedHeartbeat(requestedHeartbeat);
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
JacksonifiedConnectionFactory that = (JacksonifiedConnectionFactory) o;
if (connectionTimeout != that.connectionTimeout) {
return false;
}
if (port != that.port) {
return false;
}
if (requestedChannelMax != that.requestedChannelMax) {
return false;
}
if (requestedFrameMax != that.requestedFrameMax) {
return false;
}
if (requestedHeartbeat != that.requestedHeartbeat) {
return false;
}
if (clientProperties != null
? !Maps.difference(
getSerializableClientProperties(clientProperties),
getSerializableClientProperties(that.clientProperties)
).areEqual()
: that.clientProperties != null) {
return false;
}
if (host != null ? !host.equals(that.host) : that.host != null) {
return false;
}
if (password != null ? !password.equals(that.password) : that.password != null) {
return false;
}
if (uri != null ? !uri.equals(that.uri) : that.uri != null) {
return false;
}
if (username != null ? !username.equals(that.username) : that.username != null) {
return false;
}
if (virtualHost != null ? !virtualHost.equals(that.virtualHost) : that.virtualHost != null) {
return false;
}
return true;
}
@Override
@JsonProperty
public Map<String, Object> getClientProperties()
public int hashCode()
{
return super.getClientProperties();
}
@Override
public void setClientProperties(Map<String, Object> clientProperties)
{
super.setClientProperties(clientProperties);
int result = host != null ? host.hashCode() : 0;
result = 31 * result + port;
result = 31 * result + (username != null ? username.hashCode() : 0);
result = 31 * result + (password != null ? password.hashCode() : 0);
result = 31 * result + (virtualHost != null ? virtualHost.hashCode() : 0);
result = 31 * result + (uri != null ? uri.hashCode() : 0);
result = 31 * result + requestedChannelMax;
result = 31 * result + requestedFrameMax;
result = 31 * result + requestedHeartbeat;
result = 31 * result + connectionTimeout;
result = 31 * result + (clientProperties != null ? clientProperties.hashCode() : 0);
return result;
}
}

View File

@ -29,7 +29,7 @@ import java.util.List;
/**
*/
public class RabbitMQDruidModule implements DruidModule
public class RabbitMQDruidModule implements DruidModule
{
@Override
public List<? extends Module> getJacksonModules()

View File

@ -19,6 +19,7 @@
package io.druid.firehose.rabbitmq;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
/**
@ -26,17 +27,50 @@ import com.fasterxml.jackson.annotation.JsonProperty;
*/
public class RabbitMQFirehoseConfig
{
private String queue = null;
private String exchange = null;
private String routingKey = null;
private boolean durable = false;
private boolean exclusive = false;
private boolean autoDelete = false;
// Lyra (auto reconnect) properties
private int maxRetries = 100;
private int retryIntervalSeconds = 2;
private long maxDurationSeconds = 5 * 60;
private static final int defaultMaxRetries = 100;
private static final int defaultRetryIntervalSeconds = 2;
private static final long defaultMaxDurationSeconds = 5 * 60;
public static RabbitMQFirehoseConfig makeDefaultConfig()
{
return new RabbitMQFirehoseConfig(null, null, null, false, false, false, 0, 0, 0);
}
private final String queue;
private final String exchange;
private final String routingKey;
private final boolean durable;
private final boolean exclusive;
private final boolean autoDelete;
private final int maxRetries;
private final int retryIntervalSeconds;
private final long maxDurationSeconds;
@JsonCreator
public RabbitMQFirehoseConfig(
@JsonProperty("queue") String queue,
@JsonProperty("exchange") String exchange,
@JsonProperty("routingKey") String routingKey,
@JsonProperty("durable") boolean durable,
@JsonProperty("exclusive") boolean exclusive,
@JsonProperty("autoDelete") boolean autoDelete,
@JsonProperty("maxRetries") int maxRetries,
@JsonProperty("retryIntervalSeconds") int retryIntervalSeconds,
@JsonProperty("maxDurationSeconds") long maxDurationSeconds
)
{
this.queue = queue;
this.exchange = exchange;
this.routingKey = routingKey;
this.durable = durable;
this.exclusive = exclusive;
this.autoDelete = autoDelete;
this.maxRetries = maxRetries == 0 ? defaultMaxRetries : maxRetries;
this.retryIntervalSeconds = retryIntervalSeconds == 0 ? defaultRetryIntervalSeconds : retryIntervalSeconds;
this.maxDurationSeconds = maxDurationSeconds == 0 ? defaultMaxDurationSeconds : maxDurationSeconds;
}
@JsonProperty
public String getQueue()
@ -44,90 +78,109 @@ public class RabbitMQFirehoseConfig
return queue;
}
public void setQueue(String queue)
{
this.queue = queue;
}
@JsonProperty
public String getExchange()
{
return exchange;
}
public void setExchange(String exchange)
{
this.exchange = exchange;
}
@JsonProperty
public String getRoutingKey()
{
return routingKey;
}
public void setRoutingKey(String routingKey)
{
this.routingKey = routingKey;
}
@JsonProperty
public boolean isDurable()
{
return durable;
}
public void setDurable(boolean durable)
{
this.durable = durable;
}
@JsonProperty
public boolean isExclusive()
{
return exclusive;
}
public void setExclusive(boolean exclusive)
{
this.exclusive = exclusive;
}
@JsonProperty
public boolean isAutoDelete()
{
return autoDelete;
}
public void setAutoDelete(boolean autoDelete)
{
this.autoDelete = autoDelete;
}
@JsonProperty
public int getMaxRetries() {
public int getMaxRetries()
{
return maxRetries;
}
public void setMaxRetries(int maxRetries) {
this.maxRetries = maxRetries;
}
@JsonProperty
public int getRetryIntervalSeconds() {
public int getRetryIntervalSeconds()
{
return retryIntervalSeconds;
}
public void setRetryIntervalSeconds(int retryIntervalSeconds) {
this.retryIntervalSeconds = retryIntervalSeconds;
}
@JsonProperty
public long getMaxDurationSeconds() {
public long getMaxDurationSeconds()
{
return maxDurationSeconds;
}
public void setMaxDurationSeconds(int maxDurationSeconds) {
this.maxDurationSeconds = maxDurationSeconds;
@Override
public boolean equals(Object o)
{
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
RabbitMQFirehoseConfig that = (RabbitMQFirehoseConfig) o;
if (autoDelete != that.autoDelete) {
return false;
}
if (durable != that.durable) {
return false;
}
if (exclusive != that.exclusive) {
return false;
}
if (maxDurationSeconds != that.maxDurationSeconds) {
return false;
}
if (maxRetries != that.maxRetries) {
return false;
}
if (retryIntervalSeconds != that.retryIntervalSeconds) {
return false;
}
if (exchange != null ? !exchange.equals(that.exchange) : that.exchange != null) {
return false;
}
if (queue != null ? !queue.equals(that.queue) : that.queue != null) {
return false;
}
if (routingKey != null ? !routingKey.equals(that.routingKey) : that.routingKey != null) {
return false;
}
return true;
}
@Override
public int hashCode()
{
int result = queue != null ? queue.hashCode() : 0;
result = 31 * result + (exchange != null ? exchange.hashCode() : 0);
result = 31 * result + (routingKey != null ? routingKey.hashCode() : 0);
result = 31 * result + (durable ? 1 : 0);
result = 31 * result + (exclusive ? 1 : 0);
result = 31 * result + (autoDelete ? 1 : 0);
result = 31 * result + maxRetries;
result = 31 * result + retryIntervalSeconds;
result = 31 * result + (int) (maxDurationSeconds ^ (maxDurationSeconds >>> 32));
return result;
}
}

View File

@ -26,7 +26,6 @@ import com.metamx.common.logger.Logger;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.ConsumerCancelledException;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
@ -50,14 +49,14 @@ import java.util.concurrent.LinkedBlockingQueue;
/**
* A FirehoseFactory for RabbitMQ.
*
* <p/>
* It will receive its configuration through the realtime.spec file and expects to find a
* consumerProps element in the firehose definition with values for a number of configuration options.
* Below is a complete example for a RabbitMQ firehose configuration with some explanation. Options
* that have defaults can be skipped, but options with no defaults must be specified, with the exception
* of the URI property. If the URI property is set, it will override any other property that was also
* set.
*
* <p/>
* File: <em>realtime.spec</em>
* <pre>
* "firehose" : {
@ -89,7 +88,7 @@ import java.util.concurrent.LinkedBlockingQueue;
* }
* },
* </pre>
*
* <p/>
* <b>Limitations:</b> This implementation will not attempt to reconnect to the MQ broker if the
* connection to it is lost. Furthermore it does not support any automatic failover on high availability
* RabbitMQ clusters. This is not supported by the underlying AMQP client library and while the behavior
@ -97,7 +96,7 @@ import java.util.concurrent.LinkedBlockingQueue;
* the RabbitMQ cluster that sets the "ha-mode" and "ha-sync-mode" properly on the queue that this
* Firehose connects to, messages should survive an MQ broker node failure and be delivered once a
* connection to another node is set up.
*
* <p/>
* For more information on RabbitMQ high availability please see:
* <a href="http://www.rabbitmq.com/ha.html">http://www.rabbitmq.com/ha.html</a>.
*/
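For orientation, here is a minimal sketch of the firehose section of realtime.spec that the refactored factory below would deserialize. It is an assumption pieced together from the @JsonProperty names introduced in this change ("connection", "config", "parser", plus the fields of JacksonifiedConnectionFactory and RabbitMQFirehoseConfig); the "rabbitmq" type name and all values are illustrative, and the parser block is omitted:

```json
"firehose" : {
  "type" : "rabbitmq",
  "connection" : {
    "host" : "localhost",
    "port" : 5672,
    "username" : "guest",
    "password" : "guest",
    "virtualHost" : "/"
  },
  "config" : {
    "exchange" : "test-exchange",
    "queue" : "druidtest",
    "routingKey" : "#",
    "durable" : true,
    "exclusive" : false,
    "autoDelete" : false,
    "maxRetries" : 100,
    "retryIntervalSeconds" : 2,
    "maxDurationSeconds" : 300
  }
}
```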
@ -105,27 +104,36 @@ public class RabbitMQFirehoseFactory implements FirehoseFactory<StringInputRowPa
{
private static final Logger log = new Logger(RabbitMQFirehoseFactory.class);
@JsonProperty
private final RabbitMQFirehoseConfig config;
@JsonProperty
private final StringInputRowParser parser;
@JsonProperty
private final ConnectionFactory connectionFactory;
private final JacksonifiedConnectionFactory connectionFactory;
@JsonCreator
public RabbitMQFirehoseFactory(
@JsonProperty("connection") JacksonifiedConnectionFactory connectionFactory,
@JsonProperty("config") RabbitMQFirehoseConfig config,
@JsonProperty("parser") StringInputRowParser parser
)
) throws Exception
{
this.connectionFactory = connectionFactory;
this.config = config;
this.connectionFactory = connectionFactory == null
? JacksonifiedConnectionFactory.makeDefaultConnectionFactory()
: connectionFactory;
this.config = config == null ? RabbitMQFirehoseConfig.makeDefaultConfig() : config;
this.parser = parser;
}
@JsonProperty
public RabbitMQFirehoseConfig getConfig()
{
return config;
}
@JsonProperty
public JacksonifiedConnectionFactory getConnectionFactory()
{
return connectionFactory;
}
@Override
public Firehose connect(StringInputRowParser firehoseParser) throws IOException
{
@ -270,6 +278,7 @@ public class RabbitMQFirehoseFactory implements FirehoseFactory<StringInputRowPa
};
}
@JsonProperty
@Override
public ByteBufferInputRowParser getParser()
{
@ -280,34 +289,43 @@ public class RabbitMQFirehoseFactory implements FirehoseFactory<StringInputRowPa
{
private final BlockingQueue<Delivery> _queue;
public QueueingConsumer(Channel ch) {
public QueueingConsumer(Channel ch)
{
this(ch, new LinkedBlockingQueue<Delivery>());
}
public QueueingConsumer(Channel ch, BlockingQueue<Delivery> q) {
public QueueingConsumer(Channel ch, BlockingQueue<Delivery> q)
{
super(ch);
this._queue = q;
}
@Override public void handleShutdownSignal(String consumerTag, ShutdownSignalException sig) {
@Override
public void handleShutdownSignal(String consumerTag, ShutdownSignalException sig)
{
_queue.clear();
}
@Override public void handleCancel(String consumerTag) throws IOException {
@Override
public void handleCancel(String consumerTag) throws IOException
{
_queue.clear();
}
@Override public void handleDelivery(String consumerTag,
Envelope envelope,
AMQP.BasicProperties properties,
byte[] body)
throws IOException
@Override
public void handleDelivery(
String consumerTag,
Envelope envelope,
AMQP.BasicProperties properties,
byte[] body
)
throws IOException
{
this._queue.add(new Delivery(envelope, properties, body));
}
public Delivery nextDelivery()
throws InterruptedException, ShutdownSignalException, ConsumerCancelledException
throws InterruptedException, ShutdownSignalException, ConsumerCancelledException
{
return _queue.take();
}

View File

@ -0,0 +1,139 @@
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.examples.rabbitmq;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import com.rabbitmq.client.ConnectionFactory;
import io.druid.data.input.impl.DimensionsSpec;
import io.druid.data.input.impl.JSONParseSpec;
import io.druid.data.input.impl.SpatialDimensionSchema;
import io.druid.data.input.impl.StringInputRowParser;
import io.druid.data.input.impl.TimestampSpec;
import io.druid.firehose.rabbitmq.JacksonifiedConnectionFactory;
import io.druid.firehose.rabbitmq.RabbitMQFirehoseConfig;
import io.druid.firehose.rabbitmq.RabbitMQFirehoseFactory;
import io.druid.jackson.DefaultObjectMapper;
import org.junit.Assert;
import org.junit.Test;
import java.util.Arrays;
/**
*/
public class RabbitMQFirehoseFactoryTest
{
private static final ObjectMapper mapper = new DefaultObjectMapper();
@Test
public void testSerde() throws Exception
{
RabbitMQFirehoseConfig config = new RabbitMQFirehoseConfig(
"test",
"test2",
"test3",
true,
true,
true,
5,
10,
20
);
JacksonifiedConnectionFactory connectionFactory = new JacksonifiedConnectionFactory(
"foo",
9978,
"user",
"pw",
"host",
null,
5,
10,
11,
12,
ImmutableMap.<String, Object>of("hi", "bye")
);
RabbitMQFirehoseFactory factory = new RabbitMQFirehoseFactory(
connectionFactory,
config,
new StringInputRowParser(
new JSONParseSpec(
new TimestampSpec("timestamp", "auto"),
new DimensionsSpec(
Arrays.asList("dim"),
Lists.<String>newArrayList(),
Lists.<SpatialDimensionSchema>newArrayList()
)
),
null, null, null, null
)
);
byte[] bytes = mapper.writeValueAsBytes(factory);
RabbitMQFirehoseFactory factory2 = mapper.readValue(bytes, RabbitMQFirehoseFactory.class);
byte[] bytes2 = mapper.writeValueAsBytes(factory2);
Assert.assertArrayEquals(bytes, bytes2);
Assert.assertEquals(factory.getConfig(), factory2.getConfig());
Assert.assertEquals(factory.getConnectionFactory(), factory2.getConnectionFactory());
}
@Test
public void testDefaultSerde() throws Exception
{
RabbitMQFirehoseConfig config = RabbitMQFirehoseConfig.makeDefaultConfig();
JacksonifiedConnectionFactory connectionFactory = JacksonifiedConnectionFactory.makeDefaultConnectionFactory();
RabbitMQFirehoseFactory factory = new RabbitMQFirehoseFactory(
connectionFactory,
config,
new StringInputRowParser(
new JSONParseSpec(
new TimestampSpec("timestamp", "auto"),
new DimensionsSpec(
Arrays.asList("dim"),
Lists.<String>newArrayList(),
Lists.<SpatialDimensionSchema>newArrayList()
)
),
null, null, null, null
)
);
byte[] bytes = mapper.writeValueAsBytes(factory);
RabbitMQFirehoseFactory factory2 = mapper.readValue(bytes, RabbitMQFirehoseFactory.class);
byte[] bytes2 = mapper.writeValueAsBytes(factory2);
Assert.assertArrayEquals(bytes, bytes2);
Assert.assertEquals(factory.getConfig(), factory2.getConfig());
Assert.assertEquals(factory.getConnectionFactory(), factory2.getConnectionFactory());
Assert.assertEquals(300, factory2.getConfig().getMaxDurationSeconds());
Assert.assertEquals(ConnectionFactory.DEFAULT_HOST, factory2.getConnectionFactory().getHost());
Assert.assertEquals(ConnectionFactory.DEFAULT_USER, factory2.getConnectionFactory().getUsername());
Assert.assertEquals(ConnectionFactory.DEFAULT_AMQP_PORT, factory2.getConnectionFactory().getPort());
}
}

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>
@ -77,6 +77,10 @@
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-json-provider</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-smile-provider</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-smile</artifactId>

View File

@ -25,7 +25,6 @@ import com.google.common.base.Function;
import com.google.common.base.Supplier;
import com.google.common.base.Throwables;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Iterables;
import com.google.common.collect.Iterators;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
@ -261,14 +260,22 @@ public class CachingClusteredClient<T> implements QueryRunner<T>
Ordering.natural().onResultOf(Pair.<DateTime, Sequence<T>>lhsFn())
);
final Sequence<Sequence<T>> seq = Sequences.simple(
Iterables.transform(listOfSequences, Pair.<DateTime, Sequence<T>>rhsFn())
);
if (strategy == null) {
return toolChest.mergeSequences(seq);
} else {
return strategy.mergeSequences(seq);
final List<Sequence<T>> orderedSequences = Lists.newLinkedList();
DateTime unorderedStart = null;
List<Sequence<T>> unordered = Lists.newLinkedList();
for (Pair<DateTime, Sequence<T>> sequencePair : listOfSequences) {
if (unorderedStart != null && unorderedStart.getMillis() != sequencePair.lhs.getMillis()) {
orderedSequences.add(toolChest.mergeSequencesUnordered(Sequences.simple(unordered)));
unordered = Lists.newLinkedList();
}
unorderedStart = sequencePair.lhs;
unordered.add(sequencePair.rhs);
}
if(!unordered.isEmpty()) {
orderedSequences.add(toolChest.mergeSequencesUnordered(Sequences.simple(unordered)));
}
return toolChest.mergeSequences(Sequences.simple(orderedSequences));
}
private void addSequencesFromCache(ArrayList<Pair<DateTime, Sequence<T>>> listOfSequences)
@ -332,7 +339,9 @@ public class CachingClusteredClient<T> implements QueryRunner<T>
if (!server.isAssignable() || !populateCache || isBySegment) {
resultSeqToAdd = clientQueryable.run(query.withQuerySegmentSpec(segmentSpec));
} else {
resultSeqToAdd = toolChest.mergeSequences(
// this could be more efficient, since we only need to reorder results
// for batches of segments with the same segment start time.
resultSeqToAdd = toolChest.mergeSequencesUnordered(
Sequences.map(
clientQueryable.run(rewrittenQuery.withQuerySegmentSpec(segmentSpec)),
new Function<Object, Sequence<T>>()

View File

@ -37,7 +37,6 @@ import org.apache.curator.framework.recipes.cache.PathChildrenCacheListener;
import org.apache.curator.utils.ZKPaths;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
/**
@ -212,6 +211,11 @@ public abstract class BaseZkCoordinator implements DataSegmentChangeHandler
}
}
public boolean isStarted()
{
return started;
}
public abstract void loadLocalCache();
public abstract DataSegmentChangeHandler getDataSegmentChangeHandler();

View File

@ -169,7 +169,7 @@ public class ZkCoordinator extends BaseZkCoordinator
catch (IOException e) {
throw new SegmentLoadingException(e, "Failed to announce segment[%s]", segment.getIdentifier());
}
};
}
}
catch (SegmentLoadingException e) {
log.makeAlert(e, "Failed to load segment for dataSource")

View File

@ -0,0 +1,32 @@
package io.druid.server.http;
import com.google.common.collect.ImmutableMap;
import io.druid.server.coordination.ZkCoordinator;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
@Path("/druid/historical/v1")
public class HistoricalResource
{
private final ZkCoordinator coordinator;
@Inject
public HistoricalResource(
ZkCoordinator coordinator
)
{
this.coordinator = coordinator;
}
@GET
@Path("/loadstatus")
@Produces("application/json")
public Response getLoadStatus()
{
return Response.ok(ImmutableMap.of("cacheInitialized", coordinator.isStarted())).build();
}
}
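Based on the resource above, a GET against /druid/historical/v1/loadstatus returns a single-field JSON object reflecting ZkCoordinator.isStarted(); once the local cache has been loaded, the response would look like:

```json
{ "cacheInitialized" : true }
```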

View File

@ -0,0 +1,106 @@
/*
* Druid - a distributed column store.
* Copyright (C) 2014 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.server.router;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.common.base.Optional;
import com.google.common.base.Preconditions;
import com.google.common.base.Throwables;
import io.druid.query.Query;
import javax.script.Compilable;
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
public class JavaScriptTieredBrokerSelectorStrategy implements TieredBrokerSelectorStrategy
{
public static interface SelectorFunction
{
public String apply(TieredBrokerConfig config, Query query);
}
private final SelectorFunction fnSelector;
private final String function;
@JsonCreator
public JavaScriptTieredBrokerSelectorStrategy(@JsonProperty("function") String fn)
{
Preconditions.checkNotNull(fn, "function must not be null");
final ScriptEngine engine = new ScriptEngineManager().getEngineByName("javascript");
try {
((Compilable)engine).compile("var apply = " + fn).eval();
} catch(ScriptException e) {
Throwables.propagate(e);
}
this.function = fn;
this.fnSelector = ((Invocable)engine).getInterface(SelectorFunction.class);
}
@Override
public Optional<String> getBrokerServiceName(
TieredBrokerConfig config, Query query
)
{
return Optional.fromNullable(fnSelector.apply(config, query));
}
@JsonProperty
public String getFunction()
{
return function;
}
@Override
public boolean equals(Object o)
{
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
JavaScriptTieredBrokerSelectorStrategy that = (JavaScriptTieredBrokerSelectorStrategy) o;
if (!function.equals(that.function)) {
return false;
}
return true;
}
@Override
public int hashCode()
{
return function.hashCode();
}
@Override
public String toString()
{
return "JavascriptTieredBrokerSelectorStrategy{" +
"function='" + function + '\'' +
'}';
}
}

View File

@ -29,7 +29,8 @@ import io.druid.query.Query;
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
@JsonSubTypes(value = {
@JsonSubTypes.Type(name = "timeBoundary", value = TimeBoundaryTieredBrokerSelectorStrategy.class),
@JsonSubTypes.Type(name = "priority", value = PriorityTieredBrokerSelectorStrategy.class)
@JsonSubTypes.Type(name = "priority", value = PriorityTieredBrokerSelectorStrategy.class),
@JsonSubTypes.Type(name = "javascript", value = JavaScriptTieredBrokerSelectorStrategy.class)
})
public interface TieredBrokerSelectorStrategy
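For context, a selector of the new type would presumably be configured on the router using the "javascript" type name registered above together with the strategy's single "function" property; a hypothetical sketch (the function body is illustrative):

```json
{
  "type" : "javascript",
  "function" : "function (config, query) { return config.getDefaultBrokerServiceName(); }"
}
```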

View File

@ -49,14 +49,14 @@ public class HashBasedNumberedShardSpec extends NumberedShardSpec
}
@Override
public boolean isInChunk(InputRow inputRow)
public boolean isInChunk(long timestamp, InputRow inputRow)
{
return (((long) hash(inputRow)) - getPartitionNum()) % getPartitions() == 0;
return (((long) hash(timestamp, inputRow)) - getPartitionNum()) % getPartitions() == 0;
}
protected int hash(InputRow inputRow)
protected int hash(long timestamp, InputRow inputRow)
{
final List<Object> groupKey = Rows.toGroupKey(inputRow.getTimestampFromEpoch(), inputRow);
final List<Object> groupKey = Rows.toGroupKey(timestamp, inputRow);
try {
return hashFunction.hashBytes(jsonMapper.writeValueAsBytes(groupKey)).asInt();
}
@ -80,9 +80,9 @@ public class HashBasedNumberedShardSpec extends NumberedShardSpec
return new ShardSpecLookup()
{
@Override
public ShardSpec getShardSpec(InputRow row)
public ShardSpec getShardSpec(long timestamp, InputRow row)
{
int index = Math.abs(hash(row) % getPartitions());
int index = Math.abs(hash(timestamp, row) % getPartitions());
return shardSpecs.get(index);
}
};

View File

@ -50,7 +50,7 @@ public class LinearShardSpec implements ShardSpec
return new ShardSpecLookup()
{
@Override
public ShardSpec getShardSpec(InputRow row)
public ShardSpec getShardSpec(long timestamp, InputRow row)
{
return shardSpecs.get(0);
}
@ -63,7 +63,7 @@ public class LinearShardSpec implements ShardSpec
}
@Override
public boolean isInChunk(InputRow inputRow) {
public boolean isInChunk(long timestamp, InputRow inputRow) {
return true;
}

View File

@ -60,7 +60,7 @@ public class NumberedShardSpec implements ShardSpec
return new ShardSpecLookup()
{
@Override
public ShardSpec getShardSpec(InputRow row)
public ShardSpec getShardSpec(long timestamp, InputRow row)
{
return shardSpecs.get(0);
}
@ -80,7 +80,7 @@ public class NumberedShardSpec implements ShardSpec
}
@Override
public boolean isInChunk(InputRow inputRow)
public boolean isInChunk(long timestamp, InputRow inputRow)
{
return true;
}

View File

@ -100,10 +100,10 @@ public class SingleDimensionShardSpec implements ShardSpec
return new ShardSpecLookup()
{
@Override
public ShardSpec getShardSpec(InputRow row)
public ShardSpec getShardSpec(long timestamp, InputRow row)
{
for (ShardSpec spec : shardSpecs) {
if (spec.isInChunk(row)) {
if (spec.isInChunk(timestamp, row)) {
return spec;
}
}
@ -124,7 +124,7 @@ public class SingleDimensionShardSpec implements ShardSpec
}
@Override
public boolean isInChunk(InputRow inputRow)
public boolean isInChunk(long timestamp, InputRow inputRow)
{
final List<String> values = inputRow.getDimension(dimension);

View File

@ -309,6 +309,63 @@ public class CachingClusteredClientTest
);
}
@Test
public void testTimeseriesMergingOutOfOrderPartitions() throws Exception
{
final Druids.TimeseriesQueryBuilder builder = Druids.newTimeseriesQueryBuilder()
.dataSource(DATA_SOURCE)
.intervals(SEG_SPEC)
.filters(DIM_FILTER)
.granularity(GRANULARITY)
.aggregators(AGGS)
.postAggregators(POST_AGGS)
.context(CONTEXT);
QueryRunner runner = new FinalizeResultsQueryRunner(client, new TimeseriesQueryQueryToolChest(new QueryConfig()));
testQueryCaching(
runner,
builder.build(),
new Interval("2011-01-05/2011-01-10"),
makeTimeResults(
new DateTime("2011-01-05T02"), 80, 100,
new DateTime("2011-01-06T02"), 420, 520,
new DateTime("2011-01-07T02"), 12, 2194,
new DateTime("2011-01-08T02"), 59, 201,
new DateTime("2011-01-09T02"), 181, 52
),
new Interval("2011-01-05/2011-01-10"),
makeTimeResults(
new DateTime("2011-01-05T00"), 85, 102,
new DateTime("2011-01-06T00"), 412, 521,
new DateTime("2011-01-07T00"), 122, 21894,
new DateTime("2011-01-08T00"), 5, 20,
new DateTime("2011-01-09T00"), 18, 521
)
);
TestHelper.assertExpectedResults(
makeRenamedTimeResults(
new DateTime("2011-01-05T00"), 85, 102,
new DateTime("2011-01-05T02"), 80, 100,
new DateTime("2011-01-06T00"), 412, 521,
new DateTime("2011-01-06T02"), 420, 520,
new DateTime("2011-01-07T00"), 122, 21894,
new DateTime("2011-01-07T02"), 12, 2194,
new DateTime("2011-01-08T00"), 5, 20,
new DateTime("2011-01-08T02"), 59, 201,
new DateTime("2011-01-09T00"), 18, 521,
new DateTime("2011-01-09T02"), 181, 52
),
runner.run(
builder.intervals("2011-01-05/2011-01-10")
.aggregators(RENAMED_AGGS)
.postAggregators(RENAMED_POST_AGGS)
.build()
)
);
}
@Test
@SuppressWarnings("unchecked")
public void testTimeseriesCachingTimeZone() throws Exception

View File

@ -123,7 +123,10 @@ public class RealtimePlumberSchoolTest
@Override
public ParseSpec getParseSpec()
{
return new JSONParseSpec(new TimestampSpec("timestamp", "auto"), new DimensionsSpec(null, null, null));
return new JSONParseSpec(
new TimestampSpec("timestamp", "auto"),
new DimensionsSpec(null, null, null)
);
}
@Override

View File

@ -0,0 +1,117 @@
/*
* Druid - a distributed column store.
* Copyright (C) 2014 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.server.router;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableList;
import io.druid.jackson.DefaultObjectMapper;
import io.druid.query.Druids;
import io.druid.query.aggregation.AggregatorFactory;
import io.druid.query.aggregation.CountAggregatorFactory;
import io.druid.query.aggregation.DoubleSumAggregatorFactory;
import io.druid.query.aggregation.LongSumAggregatorFactory;
import org.junit.Assert;
import org.junit.Test;
import java.util.LinkedHashMap;
public class JavaScriptTieredBrokerSelectorStrategyTest
{
final TieredBrokerSelectorStrategy jsStrategy = new JavaScriptTieredBrokerSelectorStrategy(
"function (config, query) { if (config.getTierToBrokerMap().values().size() > 0 && query.getAggregatorSpecs && query.getAggregatorSpecs().size() <= 2) { return config.getTierToBrokerMap().values().toArray()[0] } else { return config.getDefaultBrokerServiceName() } }"
);
@Test
public void testSerde() throws Exception
{
ObjectMapper mapper = new DefaultObjectMapper();
Assert.assertEquals(
jsStrategy,
mapper.readValue(
mapper.writeValueAsString(jsStrategy),
JavaScriptTieredBrokerSelectorStrategy.class
)
);
}
@Test
public void testGetBrokerServiceName() throws Exception
{
final LinkedHashMap<String, String> tierBrokerMap = new LinkedHashMap<>();
tierBrokerMap.put("fast", "druid/fastBroker");
tierBrokerMap.put("slow", "druid/broker");
final TieredBrokerConfig tieredBrokerConfig = new TieredBrokerConfig()
{
@Override
public String getDefaultBrokerServiceName()
{
return "druid/broker";
}
@Override
public LinkedHashMap<String, String> getTierToBrokerMap()
{
return tierBrokerMap;
}
};
final Druids.TimeseriesQueryBuilder queryBuilder = Druids.newTimeseriesQueryBuilder().dataSource("test")
.intervals("2014/2015")
.aggregators(
ImmutableList.<AggregatorFactory>of(
new CountAggregatorFactory("count")
)
);
Assert.assertEquals(
Optional.of("druid/fastBroker"),
jsStrategy.getBrokerServiceName(
tieredBrokerConfig,
queryBuilder.build()
)
);
Assert.assertEquals(
Optional.of("druid/broker"),
jsStrategy.getBrokerServiceName(
tieredBrokerConfig,
Druids.newTimeBoundaryQueryBuilder().dataSource("test").bound("maxTime").build()
)
);
Assert.assertEquals(
Optional.of("druid/broker"),
jsStrategy.getBrokerServiceName(
tieredBrokerConfig,
queryBuilder.aggregators(
ImmutableList.of(
new CountAggregatorFactory("count"),
new LongSumAggregatorFactory("longSum", "a"),
new DoubleSumAggregatorFactory("doubleSum", "b")
)
).build()
)
);
}
}

View File

@ -127,7 +127,7 @@ public class HashBasedNumberedShardSpecTest
public boolean assertExistsInOneSpec(List<ShardSpec> specs, InputRow row)
{
for (ShardSpec spec : specs) {
if (spec.isInChunk(row)) {
if (spec.isInChunk(row.getTimestampFromEpoch(), row)) {
return true;
}
}
@ -145,7 +145,7 @@ public class HashBasedNumberedShardSpecTest
}
@Override
protected int hash(InputRow inputRow)
protected int hash(long timestamp, InputRow inputRow)
{
return inputRow.hashCode();
}
@ -208,4 +208,5 @@ public class HashBasedNumberedShardSpecTest
return 0;
}
}
}

View File

@ -111,7 +111,7 @@ public class SingleDimensionShardSpecTest
}
)
);
Assert.assertEquals(String.format("spec[%s], row[%s]", spec, inputRow), pair.lhs, spec.isInChunk(inputRow));
Assert.assertEquals(String.format("spec[%s], row[%s]", spec, inputRow), pair.lhs, spec.isInChunk(inputRow.getTimestampFromEpoch(), inputRow));
}
}
}

View File

@ -27,7 +27,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.157-SNAPSHOT</version>
<version>0.6.160-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -38,6 +38,7 @@ import io.druid.query.QuerySegmentWalker;
import io.druid.server.QueryResource;
import io.druid.server.coordination.ServerManager;
import io.druid.server.coordination.ZkCoordinator;
import io.druid.server.http.HistoricalResource;
import io.druid.server.initialization.JettyServerInitializer;
import io.druid.server.metrics.MetricsModule;
import org.eclipse.jetty.server.Server;
@ -68,6 +69,8 @@ public class CliHistorical extends ServerRunnable
@Override
public void configure(Binder binder)
{
// register Server before binding ZkCoordinator to ensure HTTP endpoints are available immediately
LifecycleModule.register(binder, Server.class);
binder.bind(ServerManager.class).in(LazySingleton.class);
binder.bind(ZkCoordinator.class).in(ManageLifecycle.class);
binder.bind(QuerySegmentWalker.class).to(ServerManager.class).in(LazySingleton.class);
@ -75,10 +78,10 @@ public class CliHistorical extends ServerRunnable
binder.bind(NodeTypeConfig.class).toInstance(new NodeTypeConfig("historical"));
binder.bind(JettyServerInitializer.class).to(QueryJettyServerInitializer.class).in(LazySingleton.class);
Jerseys.addResource(binder, QueryResource.class);
Jerseys.addResource(binder, HistoricalResource.class);
LifecycleModule.register(binder, QueryResource.class);
LifecycleModule.register(binder, HistoricalResource.class);
LifecycleModule.register(binder, ZkCoordinator.class);
LifecycleModule.register(binder, Server.class);
binder.bind(Cache.class).toProvider(CacheProvider.class).in(ManageLifecycle.class);
JsonConfigProvider.bind(binder, "druid.historical.cache", CacheProvider.class);