mirror of https://github.com/apache/druid.git
Merge branch 'master' into demo-paper
Commit: f832426a16

build.sh (2 lines changed)
@@ -30,4 +30,4 @@ echo "For examples, see: "
 echo " "
 ls -1 examples/*/*sh
 echo " "
-echo "See also http://druid.io/docs/0.6.69"
+echo "See also http://druid.io/docs/0.6.73"
@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>
@@ -106,7 +106,7 @@ The broker module uses several of the default modules in [Configuration](Configu

 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.broker.cache.sizeInBytes`|Maximum size of the cache. If this is zero, cache is disabled.|0|
+|`druid.broker.cache.sizeInBytes`|Maximum size of the cache. If this is zero, cache is disabled.|10485760 (10MB)|
 |`druid.broker.cache.initialSize`|The initial size of the cache in bytes.|500000|
 |`druid.broker.cache.logEvictionCount`|If this is non-zero, there will be an eviction of entries.|0|

@@ -193,7 +193,7 @@ This module is required by nodes that can serve queries.

 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.query.chunkPeriod`|Long interval queries may be broken into shorter interval queries.|P1M|
+|`druid.query.chunkPeriod`|Long interval queries may be broken into shorter interval queries.|0|

 #### GroupBy Query Config

@@ -20,7 +20,7 @@ io.druid.cli.Main server coordinator
 Rules
 -----

-Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different historical node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The coordinator loads a set of rules from the database. Rules may be specific to a certain datasource and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The coordinator will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule
+Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different historical node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The coordinator loads a set of rules from the database. Rules may be specific to a certain datasource and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The coordinator will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule.

 For more information on rules, see [Rule Configuration](Rule-Configuration.html).

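The paragraph in the hunk above describes first-match-wins rule evaluation. A minimal sketch of that behavior, using a hypothetical stand-in `Rule` type rather than Druid's actual rule classes:

```java
import java.util.List;

// A minimal sketch of the "first matching rule wins" behavior described above.
// Rule here is a hypothetical stand-in, not Druid's actual rule classes.
public class FirstMatchingRuleSketch
{
  public interface Rule
  {
    boolean appliesTo(String segmentInterval);
  }

  public static Rule match(List<Rule> rulesInOrder, String segmentInterval)
  {
    for (Rule rule : rulesInOrder) {
      if (rule.appliesTo(segmentInterval)) {
        return rule; // each segment is assigned to at most one rule: the first that applies
      }
    }
    return null; // no rule matched this segment
  }
}
```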
@@ -136,4 +136,4 @@ FAQ

 No. If the Druid coordinator is not started up, no new segments will be loaded in the cluster and outdated segments will not be dropped. However, the coordinator node can be started up at any time, and after a configurable delay, will start running coordinator tasks.

-This also means that if you have a working cluster and all of your coordinators die, the cluster will continue to function, it just won’t experience any changes to its data topology.
+This also means that if you have a working cluster and all of your coordinators die, the cluster will continue to function, it just won’t experience any changes to its data topology.
@@ -19,13 +19,13 @@ Clone Druid and build it:
 git clone https://github.com/metamx/druid.git druid
 cd druid
 git fetch --tags
-git checkout druid-0.6.69
+git checkout druid-0.6.73
 ./build.sh
 ```

 ### Downloading the DSK (Druid Standalone Kit)

-[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.69-bin.tar.gz) a stand-alone tarball and run it:
+[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz) a stand-alone tarball and run it:

 ``` bash
 tar -xzf druid-services-0.X.X-bin.tar.gz
@@ -2,13 +2,13 @@
 layout: doc_page
 ---
 # Aggregation Granularity
-The granularity field determines how data gets bucketed across the time dimension, i.e how it gets aggregated by hour, day, minute, etc.
+The granularity field determines how data gets bucketed across the time dimension, or how it gets aggregated by hour, day, minute, etc.

 It can be specified either as a string for simple granularities or as an object for arbitrary granularities.

 ### Simple Granularities

-Simple granularities are specified as a string and bucket timestamps by their UTC time (i.e. days start at 00:00 UTC).
+Simple granularities are specified as a string and bucket timestamps by their UTC time (e.g., days start at 00:00 UTC).

 Supported granularity strings are: `all`, `none`, `minute`, `fifteen_minute`, `thirty_minute`, `hour` and `day`

@@ -35,25 +35,21 @@ This chunks up every hour on the half-hour.

 ### Period Granularities

-Period granularities are specified as arbitrary period combinations of years, months, weeks, hours, minutes and seconds (e.g. P2W, P3M, PT1H30M, PT0.750S) in ISO8601 format.
-
-They support specifying a time zone which determines where period boundaries start and also determines the timezone of the returned timestamps.
-
-By default years start on the first of January, months start on the first of the month and weeks start on Mondays unless an origin is specified.
-
-Time zone is optional (defaults to UTC)
-Origin is optional (defaults to 1970-01-01T00:00:00 in the given time zone)
+Period granularities are specified as arbitrary period combinations of years, months, weeks, hours, minutes and seconds (e.g. P2W, P3M, PT1H30M, PT0.750S) in ISO8601 format. They support specifying a time zone which determines where period boundaries start as well as the timezone of the returned timestamps. By default, years start on the first of January, months start on the first of the month and weeks start on Mondays unless an origin is specified.
+
+Time zone is optional (defaults to UTC). Origin is optional (defaults to 1970-01-01T00:00:00 in the given time zone).

 ```
 {"type": "period", "period": "P2D", "timeZone": "America/Los_Angeles"}
 ```

-This will bucket by two day chunks in the Pacific timezone.
+This will bucket by two-day chunks in the Pacific timezone.

 ```
 {"type": "period", "period": "P3M", "timeZone": "America/Los_Angeles", "origin": "2012-02-01T00:00:00-08:00"}
 ```

-This will bucket by 3 month chunks in the Pacific timezone where the three-month quarters are defined as starting from February.
+This will bucket by 3-month chunks in the Pacific timezone where the three-month quarters are defined as starting from February.

-Supported time zones: timezone support is provided by the [Joda Time library](http://www.joda.org), which uses the standard IANA time zones. [Joda Time supported timezones](http://joda-time.sourceforge.net/timezones.html)
+#### Supported Time Zones
+Timezone support is provided by the [Joda Time library](http://www.joda.org), which uses the standard IANA time zones. See the [Joda Time supported timezones](http://joda-time.sourceforge.net/timezones.html).
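The granularity docs changed above describe bucketing by a period, a time zone, and an origin. A minimal, self-contained Joda-Time sketch of that concept follows; it illustrates the bucketing arithmetic for a fixed-length period and is not Druid's PeriodGranularity implementation:

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.Duration;
import org.joda.time.Period;

public class PeriodBucketSketch
{
  public static void main(String[] args)
  {
    DateTimeZone tz = DateTimeZone.forID("America/Los_Angeles");
    // Origin defaults to 1970-01-01T00:00:00 in the given time zone, per the docs above.
    DateTime origin = new DateTime(1970, 1, 1, 0, 0, tz);
    Period period = new Period("P2D");

    DateTime timestamp = new DateTime(2013, 12, 4, 15, 30, tz);

    // For a fixed-length period, the bucket start is the largest origin + n * period <= timestamp.
    long periodMillis = period.toStandardDuration().getMillis();
    long offset = (timestamp.getMillis() - origin.getMillis()) % periodMillis;
    DateTime bucketStart = timestamp.minus(new Duration(offset));

    System.out.println(bucketStart); // start of the two-day bucket containing the timestamp
  }
}
```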
@@ -2,7 +2,7 @@
 layout: doc_page
 ---
 # groupBy Queries
-These types of queries take a groupBy query object and return an array of JSON objects where each object represents a grouping asked for by the query. Note: If you only want to do straight aggreagates for some time range, we highly recommend using [TimeseriesQueries](TimeseriesQuery.html) instead. The performance will be substantially better.
+These types of queries take a groupBy query object and return an array of JSON objects where each object represents a grouping asked for by the query. Note: If you only want to do straight aggregates for some time range, we highly recommend using [TimeseriesQueries](TimeseriesQuery.html) instead. The performance will be substantially better.
 An example groupBy query object is shown below:

 ``` json

@@ -87,4 +87,4 @@ To pull it all together, the above query would return *n\*m* data points, up to
 },
 ...
 ]
-```
+```
@@ -66,7 +66,7 @@ druid.host=#{IP_ADDR}:8080
 druid.port=8080
 druid.service=druid/prod/indexer

-druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.69"]
+druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73"]

 druid.zk.service.host=#{ZK_IPs}
 druid.zk.paths.base=/druid/prod

@@ -115,7 +115,7 @@ druid.host=#{IP_ADDR}:8080
 druid.port=8080
 druid.service=druid/prod/worker

-druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.69","io.druid.extensions:druid-kafka-seven:0.6.69"]
+druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73","io.druid.extensions:druid-kafka-seven:0.6.73"]

 druid.zk.service.host=#{ZK_IPs}
 druid.zk.paths.base=/druid/prod
@@ -22,7 +22,7 @@ druid.storage.baseKey=sample
 ```

 ## I don't see my Druid segments on my historical nodes
-You can check the coordinator console located at <COORDINATOR_IP>:<PORT>/cluster.html. Make sure that your segments have actually loaded on [historical nodes](Historical.html). If your segments are not present, check the coordinator logs for messages about capacity of replication errors. One reason that segments are not downloaded is because historical nodes have maxSizes that are too small, making them incapable of downloading more data. You can change that with (for example):
+You can check the coordinator console located at `<COORDINATOR_IP>:<PORT>/cluster.html`. Make sure that your segments have actually loaded on [historical nodes](Historical.html). If your segments are not present, check the coordinator logs for messages about capacity of replication errors. One reason that segments are not downloaded is because historical nodes have maxSizes that are too small, making them incapable of downloading more data. You can change that with (for example):

 ```
 -Ddruid.segmentCache.locations=[{"path":"/tmp/druid/storageLocation","maxSize":"500000000000"}]

@@ -31,7 +31,7 @@ You can check the coordinator console located at <COORDINATOR_IP>:<PORT>/cluster

 ## My queries are returning empty results

-You can check <BROKER_IP>:<PORT>/druid/v2/datasources/<YOUR_DATASOURCE> for the dimensions and metrics that have been created for your datasource. Make sure that the name of the aggregators you use in your query match one of these metrics. Also make sure that the query interval you specify match a valid time range where data exists.
+You can check `<BROKER_IP>:<PORT>/druid/v2/datasources/<YOUR_DATASOURCE>` for the dimensions and metrics that have been created for your datasource. Make sure that the name of the aggregators you use in your query match one of these metrics. Also make sure that the query interval you specify match a valid time range where data exists.

 ## More information

@@ -27,7 +27,7 @@ druid.host=localhost
 druid.service=realtime
 druid.port=8083

-druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.69"]
+druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.73"]


 druid.zk.service.host=localhost

@@ -76,7 +76,7 @@ druid.host=#{IP_ADDR}:8080
 druid.port=8080
 druid.service=druid/prod/realtime

-druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.69","io.druid.extensions:druid-kafka-seven:0.6.69"]
+druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73","io.druid.extensions:druid-kafka-seven:0.6.73"]

 druid.zk.service.host=#{ZK_IPs}
 druid.zk.paths.base=/druid/prod
@@ -37,7 +37,7 @@ There are several main parts to a search query:
 |intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|yes|
 |searchDimensions|The dimensions to run the search over. Excluding this means the search is run over all dimensions.|no|
 |query|See [SearchQuerySpec](SearchQuerySpec.html).|yes|
-|sort|How the results of the search should sorted. Two possible types here are "lexicographic" and "strlen".|yes|
+|sort|How the results of the search should be sorted. Two possible types here are "lexicographic" and "strlen".|yes|
 |context|An additional JSON Object which can be used to specify certain flags.|no|

 The format of the result is:

@@ -15,7 +15,7 @@ Segment metadata queries return per segment information about:
 {
 "queryType":"segmentMetadata",
 "dataSource":"sample_datasource",
-"intervals":["2013-01-01/2014-01-01"],
+"intervals":["2013-01-01/2014-01-01"]
 }
 ```

@@ -9,7 +9,7 @@ There are several different types of tasks.
 Segment Creation Tasks
 ----------------------

-#### Index Task
+### Index Task

 The Index Task is a simpler variation of the Index Hadoop task that is designed to be used for smaller data sets. The task executes within the indexing service and does not require an external Hadoop setup to use. The grammar of the index task is as follows:


@@ -51,15 +51,15 @@ The Index Task is a simpler variation of the Index Hadoop task that is designed
 |--------|-----------|---------|
 |type|The task type, this should always be "index".|yes|
 |id|The task ID.|no|
-|granularitySpec|See [granularitySpec](Tasks.html)|yes|
-|spatialDimensions|Dimensions to build spatial indexes over. See [Spatial-Indexing](Spatial-Indexing.html)|no|
+|granularitySpec|Specifies the segment chunks that the task will process. `type` is always "uniform"; `gran` sets the granularity of the chunks ("DAY" means all segments containing timestamps in the same day, while `intervals` sets the interval that the chunks will cover.|yes|
+|spatialDimensions|Dimensions to build spatial indexes over. See [Geographic Queries](GeographicQueries.html).|no|
 |aggregators|The metrics to aggregate in the data set. For more info, see [Aggregations](Aggregations.html)|yes|
 |indexGranularity|The rollup granularity for timestamps.|no|
 |targetPartitionSize|Used in sharding. Determines how many rows are in each segment.|no|
 |firehose|The input source of data. For more info, see [Firehose](Firehose.html)|yes|
 |rowFlushBoundary|Used in determining when intermediate persist should occur to disk.|no|

-#### Index Hadoop Task
+### Index Hadoop Task

 The Hadoop Index Task is used to index larger data sets that require the parallelization and processing power of a Hadoop cluster.

@@ -79,11 +79,11 @@ The Hadoop Index Task is used to index larger data sets that require the paralle

 The Hadoop Index Config submitted as part of an Hadoop Index Task is identical to the Hadoop Index Config used by the `HadoopBatchIndexer` except that three fields must be omitted: `segmentOutputPath`, `workingPath`, `updaterJobSpec`. The Indexing Service takes care of setting these fields internally.

-##### Using your own Hadoop distribution
+#### Using your own Hadoop distribution

 Druid is compiled against Apache hadoop-core 1.0.3. However, if you happen to use a different flavor of hadoop that is API compatible with hadoop-core 1.0.3, you should only have to change the hadoopCoordinates property to point to the maven artifact used by your distribution.

-##### Resolving dependency conflicts running HadoopIndexTask
+#### Resolving dependency conflicts running HadoopIndexTask

 Currently, the HadoopIndexTask creates a single classpath to run the HadoopDruidIndexerJob, which can lead to version conflicts between various dependencies of Druid, extension modules, and Hadoop's own dependencies.


@@ -91,7 +91,7 @@ The Hadoop index task will put Druid's dependencies first on the classpath, foll

 If you are having trouble with any extensions in HadoopIndexTask, it may be the case that Druid, or one of its dependencies, depends on a different version of a library than what you are using as part of your extensions, but Druid's version overrides the one in your extension. In that case you probably want to build your own Druid version and override the offending library by adding an explicit dependency to the pom.xml of each druid sub-module that depends on it.

-#### Realtime Index Task
+### Realtime Index Task

 The indexing service can also run real-time tasks. These tasks effectively transform a middle manager into a real-time node. We introduced real-time tasks as a way to programmatically add new real-time data sources without needing to manually add nodes. The grammar for the real-time task is as follows:

@@ -169,7 +169,7 @@ For schema, fireDepartmentConfig, windowPeriod, segmentGranularity, and rejectio
 Segment Merging Tasks
 ---------------------

-#### Append Task
+### Append Task

 Append tasks append a list of segments together into a single segment (one after the other). The grammar is:


@@ -181,7 +181,7 @@ Append tasks append a list of segments together into a single segment (one after
 }
 ```

-#### Merge Task
+### Merge Task

 Merge tasks merge a list of segments together. Any common timestamps are merged. The grammar is:


@@ -196,7 +196,7 @@ Merge tasks merge a list of segments together. Any common timestamps are merged.
 Segment Destroying Tasks
 ------------------------

-#### Delete Task
+### Delete Task

 Delete tasks create empty segments with no data. The grammar is:

@@ -208,7 +208,7 @@ Delete tasks create empty segments with no data. The grammar is:
 }
 ```

-#### Kill Task
+### Kill Task

 Kill tasks delete all information about a segment and removes it from deep storage. Killable segments must be disabled (used==0) in the Druid segment table. The available grammar is:


@@ -223,7 +223,7 @@ Kill tasks delete all information about a segment and removes it from deep stora
 Misc. Tasks
 -----------

-#### Version Converter Task
+### Version Converter Task

 These tasks convert segments from an existing older index version to the latest index version. The available grammar is:


@@ -237,7 +237,7 @@ These tasks convert segments from an existing older index version to the latest
 }
 ```

-#### Noop Task
+### Noop Task

 These tasks start, sleep for a time and are used only for testing. The available grammar is:

@@ -9,6 +9,7 @@ TopN queries return a sorted set of results for the values in a given dimension
 A topN query object looks like:
+
 ```json
 {
 "queryType": "topN",
 "dataSource": "sample_data",
 "dimension": "sample_dim",
@@ -49,7 +49,7 @@ There are two ways to setup Druid: download a tarball, or [Build From Source](Bu

 ### Download a Tarball

-We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.69-bin.tar.gz). Download this file to a directory of your choosing.
+We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz). Download this file to a directory of your choosing.

 You can extract the awesomeness within by issuing:


@@ -60,7 +60,7 @@ tar -zxvf druid-services-*-bin.tar.gz
 Not too lost so far right? That's great! If you cd into the directory:

 ```
-cd druid-services-0.6.69
+cd druid-services-0.6.73
 ```

 You should see a bunch of files:
@@ -13,7 +13,7 @@ In this tutorial, we will set up other types of Druid nodes and external depende

 If you followed the first tutorial, you should already have Druid downloaded. If not, let's go back and do that first.

-You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-services-0.6.69-bin.tar.gz)
+You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz)

 and untar the contents within by issuing:


@@ -149,7 +149,7 @@ druid.port=8081

 druid.zk.service.host=localhost

-druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.69"]
+druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73"]

 # Dummy read only AWS account (used to download example data)
 druid.s3.secretKey=QyyfVZ7llSiRg6Qcrql1eEUG7buFpAK6T6engr1b

@@ -240,7 +240,7 @@ druid.port=8083

 druid.zk.service.host=localhost

-druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.69","io.druid.extensions:druid-kafka-seven:0.6.69"]
+druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.73","io.druid.extensions:druid-kafka-seven:0.6.73"]

 # Change this config to db to hand off to the rest of the Druid cluster
 druid.publish.type=noop
@@ -37,7 +37,7 @@ There are two ways to setup Druid: download a tarball, or [Build From Source](Bu

 h3. Download a Tarball

-We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.69-bin.tar.gz)
+We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz)
 Download this file to a directory of your choosing.
 You can extract the awesomeness within by issuing:


@@ -48,7 +48,7 @@ tar zxvf druid-services-*-bin.tar.gz
 Not too lost so far right? That's great! If you cd into the directory:

 ```
-cd druid-services-0.6.69
+cd druid-services-0.6.73
 ```

 You should see a bunch of files:
@@ -9,7 +9,7 @@ There are two ways to setup Druid: download a tarball, or build it from source.

 h3. Download a Tarball

-We've built a tarball that contains everything you'll need. You'll find it "here":http://static.druid.io/artifacts/releases/druid-services-0.6.69-bin.tar.gz.
+We've built a tarball that contains everything you'll need. You'll find it "here":http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz.
 Download this bad boy to a directory of your choosing.

 You can extract the awesomeness within by issuing:
@@ -4,7 +4,7 @@ druid.port=8081

 druid.zk.service.host=localhost

-druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.69"]
+druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73"]

 # Dummy read only AWS account (used to download example data)
 druid.s3.secretKey=QyyfVZ7llSiRg6Qcrql1eEUG7buFpAK6T6engr1b

@@ -4,7 +4,7 @@ druid.port=8083

 druid.zk.service.host=localhost

-druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.69","io.druid.extensions:druid-kafka-seven:0.6.69","io.druid.extensions:druid-rabbitmq:0.6.69"]
+druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.73","io.druid.extensions:druid-kafka-seven:0.6.73","io.druid.extensions:druid-rabbitmq:0.6.73"]

 # Change this config to db to hand off to the rest of the Druid cluster
 druid.publish.type=noop
@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>
pom.xml (19 lines changed)
@@ -23,14 +23,14 @@
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
 <packaging>pom</packaging>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 <name>druid</name>
 <description>druid</description>
 <scm>
 <connection>scm:git:ssh://git@github.com/metamx/druid.git</connection>
 <developerConnection>scm:git:ssh://git@github.com/metamx/druid.git</developerConnection>
 <url>http://www.github.com/metamx/druid</url>
-<tag>druid-0.6.69-SNAPSHOT</tag>
+<tag>druid-0.6.73-SNAPSHOT</tag>
 </scm>

 <prerequisites>
@@ -94,7 +94,7 @@
 <dependency>
 <groupId>com.metamx</groupId>
 <artifactId>server-metrics</artifactId>
-<version>0.0.5</version>
+<version>0.0.9</version>
 </dependency>

 <dependency>
@@ -535,6 +535,19 @@
 <artifactId>maven-assembly-plugin</artifactId>
 <version>2.4</version>
 </plugin>
+<plugin>
+<groupId>org.apache.maven.plugins</groupId>
+<artifactId>maven-release-plugin</artifactId>
+<version>2.4.2</version>
+<dependencies>
+<dependency>
+<groupId>org.apache.maven.scm</groupId>
+<artifactId>maven-scm-provider-gitexe</artifactId>
+<!-- This version is necessary for use with git version 1.8.5 and above -->
+<version>1.8.1</version>
+</dependency>
+</dependencies>
+</plugin>
 </plugins>
 </pluginManagement>
 </build>
@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>
@@ -50,6 +50,10 @@ public class IntervalChunkingQueryRunner<T> implements QueryRunner<T>
 @Override
 public Sequence<T> run(final Query<T> query)
 {
+if (period.getMillis() == 0) {
+return baseRunner.run(query);
+}
+
 return Sequences.concat(
 FunctionalIterable
 .create(query.getIntervals())
@@ -69,6 +69,11 @@ public class MetricsEmittingQueryRunner<T> implements QueryRunner<T>
 public Sequence<T> run(final Query<T> query)
 {
 final ServiceMetricEvent.Builder builder = builderFn.apply(query);
+String queryId = query.getId();
+if (queryId == null) {
+queryId = "";
+}
+builder.setUser8(queryId);

 return new Sequence<T>()
 {
@@ -27,7 +27,7 @@ import org.joda.time.Period;
 public class QueryConfig
 {
 @JsonProperty
-private Period chunkPeriod = Period.months(1);
+private Period chunkPeriod = new Period();

 public Period getChunkPeriod()
 {
@@ -21,6 +21,7 @@ package io.druid.query.aggregation.post;

 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Preconditions;
 import com.google.common.collect.Sets;
 import io.druid.query.aggregation.PostAggregator;


@@ -38,11 +39,13 @@ public class ConstantPostAggregator implements PostAggregator
 @JsonCreator
 public ConstantPostAggregator(
 @JsonProperty("name") String name,
-@JsonProperty("value") Number constantValue
+@JsonProperty("value") Number constantValue,
+@JsonProperty("constantValue") Number backwardsCompatibleValue
 )
 {
 this.name = name;
-this.constantValue = constantValue;
+this.constantValue = constantValue == null ? backwardsCompatibleValue : constantValue;
+Preconditions.checkNotNull(this.constantValue);
 }

 @Override

@@ -126,4 +129,5 @@ public class ConstantPostAggregator implements PostAggregator
 result = 31 * result + (constantValue != null ? constantValue.hashCode() : 0);
 return result;
 }
+
 }
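The constructor change above keeps the legacy JSON field name working alongside the new one. As a rough, self-contained illustration of that Jackson pattern (a stand-in class, not Druid's ConstantPostAggregator), both property names bind to the creator and the code falls back to the legacy value:

```java
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class BackwardsCompatibleFieldSketch
{
  public static class Constant
  {
    public final Number value;

    @JsonCreator
    public Constant(
        @JsonProperty("value") Number value,
        @JsonProperty("constantValue") Number legacyValue
    )
    {
      // Prefer the current field name and fall back to the legacy one,
      // mirroring the constructor change in the hunk above.
      this.value = value == null ? legacyValue : value;
    }
  }

  public static void main(String[] args) throws Exception
  {
    ObjectMapper mapper = new ObjectMapper();
    // An old-style payload that still uses "constantValue" continues to deserialize.
    Constant oldStyle = mapper.readValue("{\"constantValue\": 1}", Constant.class);
    Constant newStyle = mapper.readValue("{\"value\": 2}", Constant.class);
    System.out.println(oldStyle.value + " " + newStyle.value); // 1 2
  }
}
```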
@@ -169,8 +169,7 @@ public class GroupByQueryQueryToolChest extends QueryToolChest<Row, GroupByQuery
 .setUser5(Joiner.on(",").join(query.getIntervals()))
 .setUser6(String.valueOf(query.hasFilters()))
 .setUser7(String.format("%,d aggs", query.getAggregatorSpecs().size()))
-.setUser9(Minutes.minutes(numMinutes).toString())
-.setUser10(query.getId());
+.setUser9(Minutes.minutes(numMinutes).toString());
 }

 @Override

@@ -151,8 +151,7 @@ public class SegmentMetadataQueryQueryToolChest extends QueryToolChest<SegmentAn
 .setUser4(query.getType())
 .setUser5(Joiner.on(",").join(query.getIntervals()))
 .setUser6(String.valueOf(query.hasFilters()))
-.setUser9(Minutes.minutes(numMinutes).toString())
-.setUser10(query.getId());
+.setUser9(Minutes.minutes(numMinutes).toString());
 }

 @Override
@@ -34,4 +34,16 @@ public class AllColumnIncluderator implements ColumnIncluderator
 {
 return ALL_CACHE_PREFIX;
 }
+
+@Override
+public boolean equals(Object obj)
+{
+return obj instanceof AllColumnIncluderator;
+}
+
+@Override
+public int hashCode()
+{
+return AllColumnIncluderator.class.hashCode();
+}
 }
@@ -21,7 +21,9 @@ package io.druid.query.metadata.metadata;

 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Preconditions;
 import io.druid.query.BaseQuery;
+import io.druid.query.DataSource;
 import io.druid.query.Query;
 import io.druid.query.TableDataSource;
 import io.druid.query.spec.QuerySegmentSpec;

@@ -36,17 +38,18 @@ public class SegmentMetadataQuery extends BaseQuery<SegmentAnalysis>

 @JsonCreator
 public SegmentMetadataQuery(
-@JsonProperty("dataSource") String dataSource,
+@JsonProperty("dataSource") DataSource dataSource,
 @JsonProperty("intervals") QuerySegmentSpec querySegmentSpec,
 @JsonProperty("toInclude") ColumnIncluderator toInclude,
 @JsonProperty("merge") Boolean merge,
 @JsonProperty("context") Map<String, String> context
 )
 {
-super(new TableDataSource(dataSource), querySegmentSpec, context);
+super(dataSource, querySegmentSpec, context);

 this.toInclude = toInclude == null ? new AllColumnIncluderator() : toInclude;
 this.merge = merge == null ? false : merge;
+Preconditions.checkArgument(dataSource instanceof TableDataSource, "SegmentMetadataQuery only supports table datasource");
 }

 @JsonProperty

@@ -77,7 +80,7 @@ public class SegmentMetadataQuery extends BaseQuery<SegmentAnalysis>
 public Query<SegmentAnalysis> withOverriddenContext(Map<String, String> contextOverride)
 {
 return new SegmentMetadataQuery(
-((TableDataSource)getDataSource()).getName(),
+getDataSource(),
 getQuerySegmentSpec(), toInclude, merge, computeOverridenContext(contextOverride)
 );
 }

@@ -86,7 +89,7 @@ public class SegmentMetadataQuery extends BaseQuery<SegmentAnalysis>
 public Query<SegmentAnalysis> withQuerySegmentSpec(QuerySegmentSpec spec)
 {
 return new SegmentMetadataQuery(
-((TableDataSource)getDataSource()).getName(),
+getDataSource(),
 spec, toInclude, merge, getContext());
 }

@@ -125,8 +125,7 @@ public class SearchQueryQueryToolChest extends QueryToolChest<Result<SearchResul
 .setUser4("search")
 .setUser5(COMMA_JOIN.join(query.getIntervals()))
 .setUser6(String.valueOf(query.hasFilters()))
-.setUser9(Minutes.minutes(numMinutes).toString())
-.setUser10(query.getId());
+.setUser9(Minutes.minutes(numMinutes).toString());
 }

 @Override

@@ -127,8 +127,7 @@ public class SelectQueryQueryToolChest extends QueryToolChest<Result<SelectResul
 .setUser4("Select")
 .setUser5(COMMA_JOIN.join(query.getIntervals()))
 .setUser6(String.valueOf(query.hasFilters()))
-.setUser9(Minutes.minutes(numMinutes).toString())
-.setUser10(query.getId());
+.setUser9(Minutes.minutes(numMinutes).toString());
 }

 @Override

@@ -119,8 +119,7 @@ public class TimeBoundaryQueryQueryToolChest
 return new ServiceMetricEvent.Builder()
 .setUser2(query.getDataSource().toString())
 .setUser4(query.getType())
-.setUser6("false")
-.setUser10(query.getId());
+.setUser6("false");
 }

 @Override

@@ -128,8 +128,7 @@ public class TimeseriesQueryQueryToolChest extends QueryToolChest<Result<Timeser
 .setUser5(COMMA_JOIN.join(query.getIntervals()))
 .setUser6(String.valueOf(query.hasFilters()))
 .setUser7(String.format("%,d aggs", query.getAggregatorSpecs().size()))
-.setUser9(Minutes.minutes(numMinutes).toString())
-.setUser10(query.getId());
+.setUser9(Minutes.minutes(numMinutes).toString());
 }

 @Override

@@ -133,8 +133,7 @@ public class TopNQueryQueryToolChest extends QueryToolChest<Result<TopNResultVal
 .setUser5(COMMA_JOIN.join(query.getIntervals()))
 .setUser6(String.valueOf(query.hasFilters()))
 .setUser7(String.format("%,d aggs", query.getAggregatorSpecs().size()))
-.setUser9(Minutes.minutes(numMinutes).toString())
-.setUser10(query.getId());
+.setUser9(Minutes.minutes(numMinutes).toString());
 }

 @Override
@@ -119,7 +119,7 @@ public class QueriesTest
 "+",
 Arrays.asList(
 new FieldAccessPostAggregator("idx", "idx"),
-new ConstantPostAggregator("const", 1)
+new ConstantPostAggregator("const", 1, null)
 )
 ),
 new ArithmeticPostAggregator(

@@ -127,7 +127,7 @@ public class QueriesTest
 "-",
 Arrays.asList(
 new FieldAccessPostAggregator("rev", "rev"),
-new ConstantPostAggregator("const", 1)
+new ConstantPostAggregator("const", 1, null)
 )
 )
 )

@@ -173,7 +173,7 @@ public class QueriesTest
 "+",
 Arrays.asList(
 new FieldAccessPostAggregator("idx", "idx"),
-new ConstantPostAggregator("const", 1)
+new ConstantPostAggregator("const", 1, null)
 )
 ),
 new ArithmeticPostAggregator(

@@ -181,7 +181,7 @@ public class QueriesTest
 "-",
 Arrays.asList(
 new FieldAccessPostAggregator("rev", "rev2"),
-new ConstantPostAggregator("const", 1)
+new ConstantPostAggregator("const", 1, null)
 )
 )
 )
@@ -67,7 +67,7 @@ public class QueryRunnerTestHelper
 "uniques",
 "quality_uniques"
 );
-public static final ConstantPostAggregator constant = new ConstantPostAggregator("const", 1L);
+public static final ConstantPostAggregator constant = new ConstantPostAggregator("const", 1L, null);
 public static final FieldAccessPostAggregator rowsPostAgg = new FieldAccessPostAggregator("rows", "rows");
 public static final FieldAccessPostAggregator indexPostAgg = new FieldAccessPostAggregator("index", "index");
 public static final ArithmeticPostAggregator addRowsIndexConstant =

@@ -48,7 +48,7 @@ public class ArithmeticPostAggregatorTest
 List<PostAggregator> postAggregatorList =
 Lists.newArrayList(
 new ConstantPostAggregator(
-"roku", 6
+"roku", 6, null
 ),
 new FieldAccessPostAggregator(
 "rows", "rows"

@@ -79,7 +79,7 @@ public class ArithmeticPostAggregatorTest
 List<PostAggregator> postAggregatorList =
 Lists.newArrayList(
 new ConstantPostAggregator(
-"roku", 6
+"roku", 6, null
 ),
 new FieldAccessPostAggregator(
 "rows", "rows"
@@ -19,6 +19,7 @@

 package io.druid.query.aggregation.post;

+import io.druid.jackson.DefaultObjectMapper;
 import org.junit.Assert;
 import org.junit.Test;


@@ -33,11 +34,11 @@ public class ConstantPostAggregatorTest
 {
 ConstantPostAggregator constantPostAggregator;

-constantPostAggregator = new ConstantPostAggregator("shichi", 7);
+constantPostAggregator = new ConstantPostAggregator("shichi", 7, null);
 Assert.assertEquals(7, constantPostAggregator.compute(null));
-constantPostAggregator = new ConstantPostAggregator("rei", 0.0);
+constantPostAggregator = new ConstantPostAggregator("rei", 0.0, null);
 Assert.assertEquals(0.0, constantPostAggregator.compute(null));
-constantPostAggregator = new ConstantPostAggregator("ichi", 1.0);
+constantPostAggregator = new ConstantPostAggregator("ichi", 1.0, null);
 Assert.assertNotSame(1, constantPostAggregator.compute(null));
 }


@@ -45,10 +46,35 @@ public class ConstantPostAggregatorTest
 public void testComparator()
 {
 ConstantPostAggregator constantPostAggregator =
-new ConstantPostAggregator("thistestbasicallydoesnothing unhappyface", 1);
+new ConstantPostAggregator("thistestbasicallydoesnothing unhappyface", 1, null);
 Comparator comp = constantPostAggregator.getComparator();
 Assert.assertEquals(0, comp.compare(0, constantPostAggregator.compute(null)));
 Assert.assertEquals(0, comp.compare(0, 1));
 Assert.assertEquals(0, comp.compare(1, 0));
 }
+
+@Test
+public void testSerdeBackwardsCompatible() throws Exception
+{
+
+DefaultObjectMapper mapper = new DefaultObjectMapper();
+ConstantPostAggregator aggregator = mapper.readValue(
+"{\"type\":\"constant\",\"name\":\"thistestbasicallydoesnothing unhappyface\",\"constantValue\":1}\n",
+ConstantPostAggregator.class
+);
+Assert.assertEquals(new Integer(1), aggregator.getConstantValue());
+}
+
+@Test
+public void testSerde() throws Exception
+{
+
+DefaultObjectMapper mapper = new DefaultObjectMapper();
+ConstantPostAggregator aggregator = new ConstantPostAggregator("aggregator", 2, null);
+ConstantPostAggregator aggregator1 = mapper.readValue(
+mapper.writeValueAsString(aggregator),
+ConstantPostAggregator.class
+);
+Assert.assertEquals(aggregator, aggregator1);
+}
 }
@@ -21,6 +21,7 @@ package io.druid.query.metadata;

 import com.google.common.collect.Lists;
 import com.metamx.common.guava.Sequences;
+import io.druid.query.LegacyDataSource;
 import io.druid.query.QueryRunner;
 import io.druid.query.QueryRunnerFactory;
 import io.druid.query.QueryRunnerTestHelper;

@@ -98,7 +99,7 @@ public class SegmentAnalyzerTest
 );

 final SegmentMetadataQuery query = new SegmentMetadataQuery(
-"test", QuerySegmentSpecs.create("2011/2012"), null, null, null
+new LegacyDataSource("test"), QuerySegmentSpecs.create("2011/2012"), null, null, null
 );
 return Sequences.toList(query.run(runner), Lists.<SegmentAnalysis>newArrayList());
 }
@@ -0,0 +1,51 @@
+/*
+ * Druid - a distributed column store.
+ * Copyright (C) 2012, 2013 Metamarkets Group Inc.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+package io.druid.query.metadata;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.query.Query;
+import io.druid.query.metadata.metadata.SegmentMetadataQuery;
+import org.joda.time.Interval;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class SegmentMetadataQueryTest
+{
+private ObjectMapper mapper = new DefaultObjectMapper();
+
+@Test
+public void testSerde() throws Exception
+{
+String queryStr = "{\n"
++ " \"queryType\":\"segmentMetadata\",\n"
++ " \"dataSource\":\"test_ds\",\n"
++ " \"intervals\":[\"2013-12-04T00:00:00.000Z/2013-12-05T00:00:00.000Z\"]\n"
++ "}";
+Query query = mapper.readValue(queryStr, Query.class);
+Assert.assertTrue(query instanceof SegmentMetadataQuery);
+Assert.assertEquals("test_ds", query.getDataSource().getName());
+Assert.assertEquals(new Interval("2013-12-04T00:00:00.000Z/2013-12-05T00:00:00.000Z"), query.getIntervals().get(0));
+
+// test serialize and deserialize
+Assert.assertEquals(query, mapper.readValue(mapper.writeValueAsString(query), Query.class));
+
+}
+}
@@ -43,7 +43,7 @@ public class TimeseriesBinaryFnTest
 {
 final CountAggregatorFactory rowsCount = new CountAggregatorFactory("rows");
 final LongSumAggregatorFactory indexLongSum = new LongSumAggregatorFactory("index", "index");
-final ConstantPostAggregator constant = new ConstantPostAggregator("const", 1L);
+final ConstantPostAggregator constant = new ConstantPostAggregator("const", 1L, null);
 final FieldAccessPostAggregator rowsPostAgg = new FieldAccessPostAggregator("rows", "rows");
 final FieldAccessPostAggregator indexPostAgg = new FieldAccessPostAggregator("index", "index");
 final ArithmeticPostAggregator addRowsIndexConstant = new ArithmeticPostAggregator(

@@ -47,7 +47,7 @@ public class TopNBinaryFnTest
 {
 final CountAggregatorFactory rowsCount = new CountAggregatorFactory("rows");
 final LongSumAggregatorFactory indexLongSum = new LongSumAggregatorFactory("index", "index");
-final ConstantPostAggregator constant = new ConstantPostAggregator("const", 1L);
+final ConstantPostAggregator constant = new ConstantPostAggregator("const", 1L, null);
 final FieldAccessPostAggregator rowsPostAgg = new FieldAccessPostAggregator("rows", "rows");
 final FieldAccessPostAggregator indexPostAgg = new FieldAccessPostAggregator("index", "index");
 final ArithmeticPostAggregator addrowsindexconstant = new ArithmeticPostAggregator(
@@ -9,7 +9,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>

@@ -28,7 +28,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>
@@ -212,12 +212,12 @@ unaryExpression returns [PostAggregator p]
 if($e.p instanceof ConstantPostAggregator) {
 ConstantPostAggregator c = (ConstantPostAggregator)$e.p;
 double v = c.getConstantValue().doubleValue() * -1;
-$p = new ConstantPostAggregator(Double.toString(v), v);
+$p = new ConstantPostAggregator(Double.toString(v), v, null);
 } else {
 $p = new ArithmeticPostAggregator(
 "-"+$e.p.getName(),
 "*",
-Lists.newArrayList($e.p, new ConstantPostAggregator("-1", -1.0))
+Lists.newArrayList($e.p, new ConstantPostAggregator("-1", -1.0, null))
 );
 }
 }

@@ -240,7 +240,7 @@ aggregate returns [AggregatorFactory agg]
 ;

 constant returns [ConstantPostAggregator c]
-: value=NUMBER { double v = Double.parseDouble($value.text); $c = new ConstantPostAggregator(Double.toString(v), v); }
+: value=NUMBER { double v = Double.parseDouble($value.text); $c = new ConstantPostAggregator(Double.toString(v), v, null); }
 ;

 /* time filters must be top level filters */
@@ -29,7 +29,7 @@ public class LocalCacheProvider implements CacheProvider
 {
 @JsonProperty
 @Min(0)
-private long sizeInBytes = 0;
+private long sizeInBytes = 10485760;

 @JsonProperty
 @Min(0)
@@ -35,7 +35,9 @@ import org.joda.time.DateTime;
 @JsonSubTypes.Type(name = "loadByInterval", value = IntervalLoadRule.class),
 @JsonSubTypes.Type(name = "loadForever", value = ForeverLoadRule.class),
 @JsonSubTypes.Type(name = "dropByPeriod", value = PeriodDropRule.class),
-@JsonSubTypes.Type(name = "dropByInterval", value = IntervalDropRule.class)
+@JsonSubTypes.Type(name = "dropByInterval", value = IntervalDropRule.class),
+@JsonSubTypes.Type(name = "dropForever", value = ForeverDropRule.class)
+
 })
 public interface Rule
 {
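The hunk above registers a new `dropForever` rule subtype. For readers unfamiliar with how the `type` field in a rule's JSON selects the concrete class, here is a minimal, self-contained Jackson sketch of the same polymorphic-typing pattern; the classes are hypothetical stand-ins, not Druid's actual rule implementations:

```java
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;

public class RuleTypeSketch
{
  @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
  @JsonSubTypes({
      @JsonSubTypes.Type(name = "loadForever", value = LoadForever.class),
      @JsonSubTypes.Type(name = "dropForever", value = DropForever.class)
  })
  public interface Rule
  {
  }

  public static class LoadForever implements Rule
  {
  }

  public static class DropForever implements Rule
  {
  }

  public static void main(String[] args) throws Exception
  {
    ObjectMapper mapper = new ObjectMapper();
    // The "type" field picks the registered subtype, which is why adding a
    // dropForever entry to @JsonSubTypes makes {"type": "dropForever"} deserializable.
    Rule rule = mapper.readValue("{\"type\": \"dropForever\"}", Rule.class);
    System.out.println(rule.getClass().getSimpleName()); // prints DropForever
  }
}
```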
@@ -27,7 +27,7 @@
 <parent>
 <groupId>io.druid</groupId>
 <artifactId>druid</artifactId>
-<version>0.6.70-SNAPSHOT</version>
+<version>0.6.74-SNAPSHOT</version>
 </parent>

 <dependencies>
@@ -54,7 +54,7 @@ import java.util.List;
 */
 @Command(
 name = "broker",
-description = "Runs a broker node, see http://druid.io/docs/0.6.69/Broker.html for a description"
+description = "Runs a broker node, see http://druid.io/docs/0.6.73/Broker.html for a description"
 )
 public class CliBroker extends ServerRunnable
 {

@@ -66,7 +66,7 @@ import java.util.List;
 */
 @Command(
 name = "coordinator",
-description = "Runs the Coordinator, see http://druid.io/docs/0.6.69/Coordinator.html for a description."
+description = "Runs the Coordinator, see http://druid.io/docs/0.6.73/Coordinator.html for a description."
 )
 public class CliCoordinator extends ServerRunnable
 {

@@ -41,7 +41,7 @@ import java.util.List;
 */
 @Command(
 name = "hadoop",
-description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/0.6.69/Batch-ingestion.html for a description."
+description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/0.6.73/Batch-ingestion.html for a description."
 )
 public class CliHadoopIndexer implements Runnable
 {

@@ -42,7 +42,7 @@ import java.util.List;
 */
 @Command(
 name = "historical",
-description = "Runs a Historical node, see http://druid.io/docs/0.6.69/Historical.html for a description"
+description = "Runs a Historical node, see http://druid.io/docs/0.6.73/Historical.html for a description"
 )
 public class CliHistorical extends ServerRunnable
 {

@@ -93,7 +93,7 @@ import java.util.List;
 */
 @Command(
 name = "overlord",
-description = "Runs an Overlord node, see http://druid.io/docs/0.6.69/Indexing-Service.html for a description"
+description = "Runs an Overlord node, see http://druid.io/docs/0.6.73/Indexing-Service.html for a description"
 )
 public class CliOverlord extends ServerRunnable
 {

@@ -30,7 +30,7 @@ import java.util.List;
 */
 @Command(
 name = "realtime",
-description = "Runs a realtime node, see http://druid.io/docs/0.6.69/Realtime.html for a description"
+description = "Runs a realtime node, see http://druid.io/docs/0.6.73/Realtime.html for a description"
 )
 public class CliRealtime extends ServerRunnable
 {

@@ -42,7 +42,7 @@ import java.util.concurrent.Executor;
 */
 @Command(
 name = "realtime",
-description = "Runs a standalone realtime node for examples, see http://druid.io/docs/0.6.69/Realtime.html for a description"
+description = "Runs a standalone realtime node for examples, see http://druid.io/docs/0.6.73/Realtime.html for a description"
 )
 public class CliRealtimeExample extends ServerRunnable
 {