Merge branch 'master' of github.com:metamx/druid into better-context

fjy 2014-03-21 10:46:36 -07:00
commit e832dc7a8c
9 changed files with 79 additions and 11 deletions

View File

@@ -20,7 +20,7 @@ io.druid.cli.Main server coordinator
Rules
-----
-Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different historical node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The coordinator loads a set of rules from the database. Rules may be specific to a certain datasource and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The coordinator will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule
+Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different historical node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The coordinator loads a set of rules from the database. Rules may be specific to a certain datasource and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The coordinator will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule.
For more information on rules, see [Rule Configuration](Rule-Configuration.html).
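To make the ordering concrete, here is a hedged sketch of a two-rule set: segments from the most recent month match the load rule and are replicated twice in a "hot" tier and once in the default tier, while anything older falls through to the drop rule. The rule types follow Rule-Configuration.html; the tier names and the period are illustrative assumptions.

```json
[
  {
    "type": "loadByPeriod",
    "period": "P1M",
    "tieredReplicants": { "hot": 2, "_default_tier": 1 }
  },
  { "type": "dropForever" }
]
```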
@@ -136,4 +136,4 @@ FAQ
No. If the Druid coordinator is not started up, no new segments will be loaded in the cluster and outdated segments will not be dropped. However, the coordinator node can be started up at any time, and after a configurable delay, will start running coordinator tasks.
-This also means that if you have a working cluster and all of your coordinators die, the cluster will continue to function; it just won't experience any changes to its data topology.
+This also means that if you have a working cluster and all of your coordinators die, the cluster will continue to function; it just won't experience any changes to its data topology.

View File

@@ -2,7 +2,7 @@
layout: doc_page
---
# groupBy Queries
-These types of queries take a groupBy query object and return an array of JSON objects where each object represents a grouping asked for by the query. Note: If you only want to do straight aggreagates for some time range, we highly recommend using [TimeseriesQueries](TimeseriesQuery.html) instead. The performance will be substantially better.
+These types of queries take a groupBy query object and return an array of JSON objects where each object represents a grouping asked for by the query. Note: If you only want to do straight aggregates for some time range, we highly recommend using [TimeseriesQueries](TimeseriesQuery.html) instead. The performance will be substantially better.
An example groupBy query object is shown below:
``` json
@@ -87,4 +87,4 @@ To pull it all together, the above query would return *n\*m* data points, up to
},
...
]
-```
+```
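The example's body falls between this diff's hunks; for reference, a minimal groupBy query of that shape looks roughly like the following (a sketch: all names and values are placeholders, and the field list follows the groupBy docs of the time):

```json
{
  "queryType": "groupBy",
  "dataSource": "sample_datasource",
  "granularity": "day",
  "dimensions": ["dim1", "dim2"],
  "aggregations": [
    { "type": "longSum", "name": "sample_name", "fieldName": "sample_fieldName" }
  ],
  "intervals": ["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000"]
}
```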

View File

@@ -37,7 +37,7 @@ There are several main parts to a search query:
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|yes|
|searchDimensions|The dimensions to run the search over. Excluding this means the search is run over all dimensions.|no|
|query|See [SearchQuerySpec](SearchQuerySpec.html).|yes|
-|sort|How the results of the search should sorted. Two possible types here are "lexicographic" and "strlen".|yes|
+|sort|How the results of the search should be sorted. Two possible types here are "lexicographic" and "strlen".|yes|
|context|An additional JSON Object which can be used to specify certain flags.|no|
The format of the result is:
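The result body itself is cut off by the diff; it is an array with one entry per granularity bucket, roughly of the form below (a sketch; the dimension and value are placeholders):

```json
[
  {
    "timestamp": "2012-01-01T00:00:00.000Z",
    "result": [
      { "dimension": "dim1", "value": "some_matching_value" }
    ]
  }
]
```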

View File

@@ -15,7 +15,7 @@ Segment metadata queries return per segment information about:
{
"queryType":"segmentMetadata",
"dataSource":"sample_datasource",
"intervals":["2013-01-01/2014-01-01"],
"intervals":["2013-01-01/2014-01-01"]
}
```
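For context, the per-segment response this query produces looks roughly like the sketch below (the segment id, column names, and all numbers are invented for illustration):

```json
[
  {
    "id": "some_segment_id",
    "intervals": ["2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z"],
    "columns": {
      "__time": { "type": "LONG", "size": 407240380, "cardinality": null },
      "dim1": { "type": "STRING", "size": 100000, "cardinality": 1944 }
    },
    "size": 300000
  }
]
```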

View File

@@ -9,6 +9,7 @@ TopN queries return a sorted set of results for the values in a given dimension
A topN query object looks like:
```json
{
"queryType": "topN",
"dataSource": "sample_data",
"dimension": "sample_dim",

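The hunk cuts the example off after `dimension`; a complete topN query of this shape would typically continue with a threshold, the metric to sort on, aggregations, and intervals, roughly as below (the remaining fields are assumed from the TopN docs; values are placeholders):

```json
{
  "queryType": "topN",
  "dataSource": "sample_data",
  "dimension": "sample_dim",
  "threshold": 5,
  "metric": "count",
  "granularity": "all",
  "aggregations": [
    { "type": "longSum", "name": "count", "fieldName": "count" }
  ],
  "intervals": ["2013-08-31T00:00:00.000/2013-09-03T00:00:00.000"]
}
```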
View File

@@ -34,4 +34,16 @@ public class AllColumnIncluderator implements ColumnIncluderator
{
return ALL_CACHE_PREFIX;
}
+
+@Override
+public boolean equals(Object obj)
+{
+return obj instanceof AllColumnIncluderator;
+}
+
+@Override
+public int hashCode()
+{
+return AllColumnIncluderator.class.hashCode();
+}
}
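These overrides are presumably what let the serde round-trip assertion in the new SegmentMetadataQueryTest (last file below) pass, since deserialization creates a fresh AllColumnIncluderator. A minimal sketch of the resulting behavior (the demo class itself is hypothetical):

```java
import io.druid.query.metadata.metadata.AllColumnIncluderator;
import io.druid.query.metadata.metadata.ColumnIncluderator;

public class IncluderatorEqualityDemo
{
  public static void main(String[] args)
  {
    // Two fresh instances, e.g. one built directly and one produced by deserialization.
    ColumnIncluderator a = new AllColumnIncluderator();
    ColumnIncluderator b = new AllColumnIncluderator();

    System.out.println(a.equals(b));                  // true: equality is by type alone
    System.out.println(a.hashCode() == b.hashCode()); // true: class-wide hash
  }
}
```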

View File

@@ -21,7 +21,9 @@ package io.druid.query.metadata.metadata;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Preconditions;
import io.druid.query.BaseQuery;
+import io.druid.query.DataSource;
import io.druid.query.Query;
import io.druid.query.TableDataSource;
import io.druid.query.spec.QuerySegmentSpec;
@@ -36,17 +38,18 @@ public class SegmentMetadataQuery extends BaseQuery<SegmentAnalysis>
@JsonCreator
public SegmentMetadataQuery(
-@JsonProperty("dataSource") String dataSource,
+@JsonProperty("dataSource") DataSource dataSource,
@JsonProperty("intervals") QuerySegmentSpec querySegmentSpec,
@JsonProperty("toInclude") ColumnIncluderator toInclude,
@JsonProperty("merge") Boolean merge,
@JsonProperty("context") Map<String, Object> context
)
{
-super(new TableDataSource(dataSource), querySegmentSpec, context);
+super(dataSource, querySegmentSpec, context);
this.toInclude = toInclude == null ? new AllColumnIncluderator() : toInclude;
this.merge = merge == null ? false : merge;
+Preconditions.checkArgument(dataSource instanceof TableDataSource, "SegmentMetadataQuery only supports table datasource");
}
@JsonProperty
@@ -77,7 +80,7 @@ public class SegmentMetadataQuery extends BaseQuery<SegmentAnalysis>
public Query<SegmentAnalysis> withOverriddenContext(Map<String, Object> contextOverride)
{
return new SegmentMetadataQuery(
-((TableDataSource)getDataSource()).getName(),
+getDataSource(),
getQuerySegmentSpec(), toInclude, merge, computeOverridenContext(contextOverride)
);
}
@@ -86,7 +89,7 @@ public class SegmentMetadataQuery extends BaseQuery<SegmentAnalysis>
public Query<SegmentAnalysis> withQuerySegmentSpec(QuerySegmentSpec spec)
{
return new SegmentMetadataQuery(
-((TableDataSource)getDataSource()).getName(),
+getDataSource(),
spec, toInclude, merge, getContext());
}
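Callers now pass a DataSource instead of a raw name; here is a minimal sketch of the new call shape, mirroring the test change below (the wrapper class and argument values are placeholders, and the QuerySegmentSpecs import path is assumed):

```java
import io.druid.query.TableDataSource;
import io.druid.query.metadata.metadata.SegmentMetadataQuery;
import io.druid.query.spec.QuerySegmentSpecs;

public class SegmentMetadataQueryExample
{
  public static SegmentMetadataQuery build()
  {
    // The first argument is now a DataSource rather than a String.
    return new SegmentMetadataQuery(
        new TableDataSource("sample_datasource"), // was: "sample_datasource"
        QuerySegmentSpecs.create("2013-01-01/2014-01-01"),
        null, // toInclude: null defaults to AllColumnIncluderator
        null, // merge: null defaults to false
        null  // context
    );
  }
}
```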

View File

@@ -21,6 +21,7 @@ package io.druid.query.metadata;
import com.google.common.collect.Lists;
import com.metamx.common.guava.Sequences;
+import io.druid.query.LegacyDataSource;
import io.druid.query.QueryRunner;
import io.druid.query.QueryRunnerFactory;
import io.druid.query.QueryRunnerTestHelper;
@@ -98,7 +99,7 @@ public class SegmentAnalyzerTest
);
final SegmentMetadataQuery query = new SegmentMetadataQuery(
-"test", QuerySegmentSpecs.create("2011/2012"), null, null, null
+new LegacyDataSource("test"), QuerySegmentSpecs.create("2011/2012"), null, null, null
);
return Sequences.toList(query.run(runner), Lists.<SegmentAnalysis>newArrayList());
}

View File

@@ -0,0 +1,51 @@
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.query.metadata;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.druid.jackson.DefaultObjectMapper;
import io.druid.query.Query;
import io.druid.query.metadata.metadata.SegmentMetadataQuery;
import org.joda.time.Interval;
import org.junit.Assert;
import org.junit.Test;
public class SegmentMetadataQueryTest
{
private ObjectMapper mapper = new DefaultObjectMapper();
@Test
public void testSerde() throws Exception
{
String queryStr = "{\n"
+ " \"queryType\":\"segmentMetadata\",\n"
+ " \"dataSource\":\"test_ds\",\n"
+ " \"intervals\":[\"2013-12-04T00:00:00.000Z/2013-12-05T00:00:00.000Z\"]\n"
+ "}";
Query query = mapper.readValue(queryStr, Query.class);
Assert.assertTrue(query instanceof SegmentMetadataQuery);
Assert.assertEquals("test_ds", query.getDataSource().getName());
Assert.assertEquals(new Interval("2013-12-04T00:00:00.000Z/2013-12-05T00:00:00.000Z"), query.getIntervals().get(0));
// test serialize and deserialize
Assert.assertEquals(query, mapper.readValue(mapper.writeValueAsString(query), Query.class));
}
}