Update igalBranch to latest master

Igal Levy 2013-10-29 13:50:29 -07:00
commit 0d297869cc
33 changed files with 321 additions and 82 deletions

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -4,4 +4,34 @@ layout: doc_page
# Data Flow
The diagram below illustrates how different Druid nodes download data and respond to queries:
<img src="../img/druid-dataflow-2x.png" width="800"/>
### Real-time Nodes
Real-time nodes ingest streaming data and announce themselves and the segments they are serving in Zookeeper on start up. During the segment hand-off stage, real-time nodes create a segment metadata entry in MySQL for the segment to hand off. This segment is uploaded to Deep Storage. Real-time nodes use Zookeeper to monitor when historical nodes complete downloading the segment (indicating hand-off completion) so that they can forget about it. Real-time nodes also respond to query requests from broker nodes and return query results to them.
### Deep Storage
Batch-indexed segments and segments created by real-time nodes are uploaded to deep storage. Historical nodes download these segments to serve queries.
### MySQL
Real-time nodes and batch indexing jobs create segment metadata entries for the new segments they've created. Coordinator nodes read this metadata table to determine which segments should be loaded in the cluster.
### Coordinator Nodes
Coordinator nodes read segment metadata information from MySQL to determine which segments should be loaded in the cluster. Coordinator nodes use Zookeeper to determine what historical nodes exist, and also create Zookeeper entries to tell historical nodes to load and drop new segments.
### Zookeeper
Real-time nodes announce themselves and the segments they are serving in Zookeeper, and also use Zookeeper to monitor segment hand-off. Coordinator nodes use Zookeeper to determine what historical nodes exist in the cluster and create new entries to tell historical nodes to load or drop data. Historical nodes announce themselves and the segments they serve in Zookeeper, and also monitor Zookeeper for new load or drop requests. Broker nodes use Zookeeper to determine what historical and real-time nodes exist in the cluster.
### Historical Nodes
Historical nodes announce themselves and the segments they are serving in Zookeeper. Historical nodes also use Zookeeper to monitor for signals to load or drop new segments. Historical nodes download segments from deep storage, respond to the queries from broker nodes about these segments, and return results to the broker nodes.
### Broker Nodes
Broker nodes receive queries from external clients and forward those queries down to real-time and historical nodes. When the individual nodes return their results, broker nodes merge these results and return them to the caller. Broker nodes use Zookeeper to determine what real-time and historical nodes exist.

View File

@ -25,11 +25,11 @@ git checkout druid-0.6.0
### Downloading the DSK (Druid Standalone Kit)
[Download](http://static.druid.io/data/examples/druid-services-0.4.6.tar.gz) a stand-alone tarball and run it:
[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.0-bin.tar.gz) a stand-alone tarball and run it:
``` bash
tar -xzf druid-services-0.X.X-SNAPSHOT-bin.tar.gz
cd druid-services-0.X.X-SNAPSHOT
tar -xzf druid-services-0.X.X-bin.tar.gz
cd druid-services-0.X.X
```
Twitter Example

docs/content/Modules.md Normal file
View File

@ -0,0 +1,156 @@
---
layout: doc_page
---
Druid version 0.6 introduces a new module system that allows for the addition of extensions at runtime.
## Specifying extensions
There are currently two ways of adding Druid extensions.
### Add to the classpath
If you add your extension jar to the classpath at runtime, Druid will load it into the system. This mechanism is relatively easy to reason about, but it also means that you have to ensure that all dependency jars on the classpath are compatible. That is, Druid makes no provisions for class loader isolation with this method, so you must make sure that the jars on your classpath are mutually compatible.
### Specify maven coordinates
Druid has the ability to automatically load extension jars from maven at runtime. With this mechanism, Druid also loads up the dependencies of the extension jar into an isolated class loader. That means that your extension can depend on a different version of a library that Druid also uses and both can co-exist.
## Configuring the extensions
Druid 0.6 introduces four new properties for configuring the loading of extensions:
* `druid.extensions.coordinates`
This is a JSON Array list of "groupId:artifactId:version" maven coordinates. Defaults to `[]`
* `druid.extensions.localRepository`
This specifies where to look for the "local repository". Maven resolves dependencies by downloading them to a "local repository" on your local disk and then collecting the paths to each of the jars; this property specifies the directory to treat as that local repository. Defaults to `~/.m2/repository`
* `druid.extensions.remoteRepositories`
This is a JSON Array list of remote repositories to load dependencies from. Defaults to `["http://repo1.maven.org/maven2/", "https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local"]`
* `druid.extensions.searchCurrentClassloader`
This is a boolean flag that determines if Druid will search the main classloader for extensions. It defaults to true but can be turned off if you have reason to not automatically add all modules on the classpath.
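Taken together, a runtime properties snippet using these four settings might look something like the following sketch; the coordinate and repository path below are hypothetical examples, not defaults:
```
druid.extensions.coordinates=["io.druid.extensions:druid-hdfs-storage:0.6.1"]
druid.extensions.localRepository=/opt/druid/repository
druid.extensions.remoteRepositories=["http://repo1.maven.org/maven2/"]
druid.extensions.searchCurrentClassloader=true
```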
### I want classloader isolation, but I don't want my production machines downloading their own dependencies. What should I do?
If you want to take advantage of the maven-based classloader isolation but you are also rightly frightened by the prospect of each of your production machines downloading their own dependencies on deploy, this section is for you.
The trick to doing this is:
1) Specify a local directory for `druid.extensions.localRepository`
2) Run the `tools pull-deps` command to pull all the specified dependencies down into your local repository
3) Bundle up the local repository along with your other Druid stuff into whatever you use for a deployable artifact
4) Run your Druid processes with `druid.extensions.remoteRepositories=[]` and a local repository set to wherever your bundled "local" repository is located
The Druid processes will then only load up jars from the local repository and will not try to go out onto the internet to find the maven dependencies.
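As a rough sketch of what steps 2 and 4 might look like on the command line, assuming the standard `io.druid.cli.Main` entry point; the paths and coordinates here are hypothetical:
``` bash
# Step 2: pull the configured extensions into a bundled local repository.
java -classpath "lib/*" \
  -Ddruid.extensions.coordinates='["io.druid.extensions:druid-hdfs-storage:0.6.1"]' \
  -Ddruid.extensions.localRepository=/opt/druid/bundled-repo \
  io.druid.cli.Main tools pull-deps

# Step 4: at deploy time, disable remote lookups and point at the bundled repository.
java -classpath "lib/*" \
  -Ddruid.extensions.coordinates='["io.druid.extensions:druid-hdfs-storage:0.6.1"]' \
  -Ddruid.extensions.remoteRepositories='[]' \
  -Ddruid.extensions.localRepository=/opt/druid/bundled-repo \
  io.druid.cli.Main server historical
```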
## Writing your own extensions
Druid's extensions leverage Guice in order to add things at runtime. Basically, Guice is a framework for Dependency Injection, but we use it to hold the expected object graph of the Druid process. Extensions can make any changes they want/need to the object graph via adding Guice bindings. While the extensions actually give you the capability to change almost anything however you want, in general, we expect people to want to extend one of a few things.
1. Add a new deep storage implementation
1. Add a new Firehose
1. Add Aggregators
1. Add Complex metrics
1. Add new Query types
Extensions are added to the system via an implementation of `io.druid.initialization.DruidModule`.
### Creating a Druid Module
The DruidModule class has two methods:
1. A `configure(Binder)` method
2. A `getJacksonModules()` method
The `configure(Binder)` method is the same method that a normal Guice module would have.
The `getJacksonModules()` method provides a list of Jackson modules that are used to help initialize the Jackson ObjectMapper instances used by Druid. This is how you add extensions that are instantiated via Jackson (like AggregatorFactory and Firehose objects) to Druid.
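For example, a minimal module that binds nothing and registers no Jackson subtypes might look like the following sketch (the package and class name are hypothetical; compare the IndexingServiceFirehoseModule added in this commit):
``` java
package io.druid.examples;

import com.fasterxml.jackson.databind.Module;
import com.google.common.collect.ImmutableList;
import com.google.inject.Binder;
import io.druid.initialization.DruidModule;

import java.util.List;

public class MyDruidModule implements DruidModule
{
  @Override
  public List<? extends Module> getJacksonModules()
  {
    // Jackson modules registering polymorphic subtypes would be returned here.
    return ImmutableList.of();
  }

  @Override
  public void configure(Binder binder)
  {
    // Guice bindings would be added here.
  }
}
```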
### Registering your Druid Module
Once you have your DruidModule created, you will need to package an extra file in the `META-INF/services` directory of your jar. This is easiest to accomplish with a maven project by creating files in the `src/main/resources` directory. There are examples of this in the Druid code under the `cassandra-storage`, `hdfs-storage` and `s3-extensions` modules.
The file that should exist in your jar is
`META-INF/services/io.druid.initialization.DruidModule`
It should be a text file with a new-line delimited list of package-qualified classes that implement DruidModule, such as
```
io.druid.storage.cassandra.CassandraDruidModule
```
If your jar has this file, then when it is added to the classpath or as an extension, Druid will notice the file and instantiate the Module. Your Module should have a default constructor, but if you need access to runtime configuration properties, it can have a method annotated with @Inject to get a Properties object injected into it from Guice.
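For instance, a module that needs runtime properties might look like this sketch (the class name is hypothetical):
``` java
import com.fasterxml.jackson.databind.Module;
import com.google.common.collect.ImmutableList;
import com.google.inject.Binder;
import com.google.inject.Inject;
import io.druid.initialization.DruidModule;

import java.util.List;
import java.util.Properties;

public class MyConfigurableModule implements DruidModule
{
  private Properties props;

  @Inject
  public void setProperties(Properties props)
  {
    // Guice injects the runtime Properties after instantiation.
    this.props = props;
  }

  @Override
  public List<? extends Module> getJacksonModules()
  {
    return ImmutableList.of();
  }

  @Override
  public void configure(Binder binder)
  {
    // Bindings here can consult this.props for runtime configuration.
  }
}
```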
### Adding a new deep storage implementation
Check the `cassandra-storage`, `hdfs-storage` and `s3-extensions` modules for examples of how to do this.
The basic idea behind the extension is that you need to add bindings for your DataSegmentPusher and DataSegmentPuller objects. The way to add them is something like (taken from HdfsStorageDruidModule)
``` java
Binders.dataSegmentPullerBinder(binder)
.addBinding("hdfs")
.to(HdfsDataSegmentPuller.class).in(LazySingleton.class);
Binders.dataSegmentPusherBinder(binder)
.addBinding("hdfs")
.to(HdfsDataSegmentPusher.class).in(LazySingleton.class);
```
`Binders.dataSegment*Binder()` is a call provided by the druid-api jar which sets up a Guice multibind "MapBinder". If that doesn't make sense, don't worry about it; just think of it as a magical incantation.
`addBinding("hdfs")` for the Puller binder creates a new handler for loadSpec objects of type "hdfs". For the Pusher binder it creates a new type value that you can specify for the `druid.storage.type` parameter.
`to(...).in(...);` is normal Guice stuff.
### Adding a new Firehose
There is an example of this in the `s3-extensions` module with the StaticS3FirehoseFactory.
Adding a Firehose is done almost entirely through the Jackson Modules instead of Guice. Specifically, note the implementation
``` java
@Override
public List<? extends Module> getJacksonModules()
{
return ImmutableList.of(
new SimpleModule().registerSubtypes(new NamedType(StaticS3FirehoseFactory.class, "static-s3"))
);
}
```
This is registering the FirehoseFactory with Jackson's polymorphic serde layer. More concretely, having this will mean that if you specify a `"firehose": { "type": "static-s3", ... }` in your realtime config, then the system will load this FirehoseFactory for your firehose.
Note that inside of Druid, the @JacksonInject annotation on Jackson-deserialized objects actually uses the base Guice injector to resolve the object to be injected. So, if your FirehoseFactory needs access to some object, you can add a @JacksonInject annotation on a setter and it will get set on instantiation.
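An illustrative sketch of that pattern, loosely modeled on StaticS3FirehoseFactory: the factory name is hypothetical, the `Firehose connect()` signature is assumed from the druid-api of this era, and the jets3t `RestS3Service` is a client that the `s3-extensions` module binds in Guice:
``` java
import com.fasterxml.jackson.annotation.JacksonInject;
import io.druid.data.input.Firehose;
import io.druid.data.input.FirehoseFactory;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;

import java.io.IOException;

public class MyS3FirehoseFactory implements FirehoseFactory
{
  private RestS3Service s3Client;

  @JacksonInject
  public void setS3Client(RestS3Service s3Client)
  {
    // Resolved through the base Guice injector when Jackson deserializes this factory.
    this.s3Client = s3Client;
  }

  @Override
  public Firehose connect() throws IOException
  {
    // A real factory would use s3Client to open its stream of events here.
    throw new UnsupportedOperationException("sketch only");
  }
}
```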
### Adding Aggregators
Adding AggregatorFactory objects is very similar to adding Firehose objects. They operate purely through Jackson and thus should just be additions to the Jackson modules returned by your DruidModule.
### Adding Complex Metrics
Adding ComplexMetrics is a little ugly in the current version. Complex metrics are registered via the `ComplexMetrics.registerSerde()` method. There is no special Guice wiring required; just register the serde in your `configure(Binder)` method, as in the sketch below.
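A sketch of such a registration; the metric type name and MyMetricSerde are hypothetical (a real serde would extend ComplexMetricSerde):
``` java
import com.fasterxml.jackson.databind.Module;
import com.google.common.collect.ImmutableList;
import com.google.inject.Binder;
import io.druid.initialization.DruidModule;
import io.druid.segment.serde.ComplexMetrics;

import java.util.List;

public class MyMetricModule implements DruidModule
{
  @Override
  public void configure(Binder binder)
  {
    // MyMetricSerde (hypothetical) would extend ComplexMetricSerde.
    // Guard against double-registration if the module is loaded twice.
    if (ComplexMetrics.getSerdeForType("myMetric") == null) {
      ComplexMetrics.registerSerde("myMetric", new MyMetricSerde());
    }
  }

  @Override
  public List<? extends Module> getJacksonModules()
  {
    return ImmutableList.of();
  }
}
```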
### Adding new Query types
Adding a new Query type requires the implementation of three interfaces.
1. `io.druid.query.Query`
1. `io.druid.query.QueryToolChest`
1. `io.druid.query.QueryRunnerFactory`
Registering these uses the same general strategy as registering a deep storage mechanism. You do something like
``` java
DruidBinders.queryToolChestBinder(binder)
.addBinding(SegmentMetadataQuery.class)
.to(SegmentMetadataQueryQueryToolChest.class);
DruidBinders.queryRunnerFactoryBinder(binder)
.addBinding(SegmentMetadataQuery.class)
.to(SegmentMetadataQueryRunnerFactory.class);
```
The first one binds the SegmentMetadataQueryQueryToolChest for use when a SegmentMetadataQuery is issued. The second does the same thing, but for the QueryRunnerFactory instead.

View File

@ -320,7 +320,7 @@ Feel free to tweak other query parameters to answer other questions you may have
Next Steps
----------
Want to know even more about the Druid Cluster? Check out [Tutorial%3A The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html)
Want to know even more about the Druid Cluster? Check out [The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html)
Druid is even more fun if you load your own data into it! To learn how to load your data, see [Loading Your Data](Tutorial%3A-Loading-Your-Data-Part-1.html).

View File

@ -244,6 +244,5 @@ druid.processing.buffer.sizeBytes=10000000
Next Steps
----------
Now that you have an understanding of what the Druid cluster looks like, why not load some of your own data?
If you are interested in how data flows through the different Druid components, check out the Druid [Data Flow](Data-Flow.html). Now that you have an understanding of what the Druid cluster looks like, why not load some of your own data?
Check out the next [tutorial](Tutorial%3A-Loading-Your-Data-Part-1.html) section for more info!

View File

@ -9,16 +9,16 @@ There are two ways to setup Druid: download a tarball, or build it from source.
h3. Download a Tarball
We've built a tarball that contains everything you'll need. You'll find it "here":http://static.druid.io/data/examples/druid-services-0.6.0.tar.gz.
We've built a tarball that contains everything you'll need. You'll find it "here":http://static.druid.io/artifacts/releases/druid-services-0.6.0-bin.tar.gz.
Download this bad boy to a directory of your choosing.
You can extract the awesomeness within by issuing:
pre. tar -zxvf druid-services-0.6.0.tar.gz
pre. tar -zxvf druid-services-0.X.X.tar.gz
Not too lost so far right? That's great! If you cd into the directory:
pre. cd druid-services-0.6.0-SNAPSHOT
pre. cd druid-services-0.X.X
You should see a bunch of files:
* run_example_server.sh
@ -31,7 +31,7 @@ The other way to setup Druid is from source via git. To do so, run these command
<pre><code>git clone git@github.com:metamx/druid.git
cd druid
git checkout druid-0.6.0
git checkout druid-0.X.X
./build.sh
</code></pre>

View File

@ -47,6 +47,7 @@ h2. Querying
h2. Architecture
* "Design":./Design.html
** "Data Flow":./Data-Flow.html
* "Segments":./Segments.html
* Node Types
** "Historical":./Historical.html
@ -57,6 +58,7 @@ h2. Architecture
*** "Firehose":./Firehose.html
*** "Plumber":./Plumber.html
** "Indexing Service":./Indexing-Service.html
* "Modules":./Modules.html
* External Dependencies
** "Deep Storage":./Deep-Storage.html
** "MySQL":./MySQL.html

View File

@ -60,7 +60,7 @@ ssh -q -f -i ~/.ssh/druid-keypair -o StrictHostKeyChecking=no ubuntu@${INSTANCE_
echo "Prepared $INSTANCE_ADDRESS for druid."
# Now to scp a tarball up that can run druid!
if [ -f ../../services/target/druid-services-*-SNAPSHOT-bin.tar.gz ];
if [ -f ../../services/target/druid-services-*-bin.tar.gz ];
then
echo "Uploading druid tarball to server..."
scp -i ~/.ssh/druid-keypair -o StrictHostKeyChecking=no ../../services/target/druid-services-*-bin.tar.gz ubuntu@${INSTANCE_ADDRESS}:

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -18,8 +18,7 @@
~ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.druid.extensions</groupId>
<artifactId>druid-hdfs-storage</artifactId>
@ -29,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -0,0 +1,49 @@
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.guice;
import com.fasterxml.jackson.databind.Module;
import com.fasterxml.jackson.databind.jsontype.NamedType;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.google.common.collect.ImmutableList;
import com.google.inject.Binder;
import io.druid.indexing.common.index.EventReceiverFirehoseFactory;
import io.druid.initialization.DruidModule;
import java.util.List;
public class IndexingServiceFirehoseModule implements DruidModule
{
@Override
public List<? extends Module> getJacksonModules()
{
return ImmutableList.<Module>of(
new SimpleModule("IndexingServiceFirehoseModule")
.registerSubtypes(
new NamedType(EventReceiverFirehoseFactory.class, "receiver")
)
);
}
@Override
public void configure(Binder binder)
{
}
}

View File

@ -37,7 +37,6 @@ import io.druid.query.QueryRunner;
import org.joda.time.Interval;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
public abstract class AbstractTask implements Task
@ -189,13 +188,12 @@ public abstract class AbstractTask implements Task
{
final List<TaskLock> locks = toolbox.getTaskActionClient().submit(new LockListAction());
if (locks.isEmpty()) {
return Arrays.asList(
toolbox.getTaskActionClient()
.submit(new LockAcquireAction(getImplicitLockInterval().get()))
);
if (locks.isEmpty() && getImplicitLockInterval().isPresent()) {
// In the Peon's local mode, the implicit lock interval is not pre-acquired, so we need to try it here.
toolbox.getTaskActionClient().submit(new LockAcquireAction(getImplicitLockInterval().get()));
return toolbox.getTaskActionClient().submit(new LockListAction());
} else {
return locks;
}
return locks;
}
}

View File

@ -206,7 +206,8 @@ public class ForkingTaskRunner implements TaskRunner, TaskLogStreamer
command.add(statusFile.toString());
String nodeType = task.getNodeType();
if (nodeType != null) {
command.add(String.format("--nodeType %s", nodeType));
command.add("--nodeType");
command.add(nodeType);
}
jsonMapper.writeValue(taskFile, task);

View File

@ -23,7 +23,7 @@
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<packaging>pom</packaging>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
<name>druid</name>
<description>druid</description>
<scm>
@ -60,7 +60,7 @@
<dependency>
<groupId>io.druid</groupId>
<artifactId>druid-api</artifactId>
<version>0.1.0-SNAPSHOT</version>
<version>0.1.3</version>
</dependency>
<!-- Compile Scope -->
@ -177,7 +177,7 @@
<dependency>
<groupId>it.uniroma3.mat</groupId>
<artifactId>extendedset</artifactId>
<version>1.3.2</version>
<version>1.3.4</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -24,6 +24,7 @@ package io.druid.segment;
*/
public interface ColumnSelectorFactory
{
public DimensionSelector makeDimensionSelector(String dimensionName);
public FloatColumnSelector makeFloatColumnSelector(String columnName);
public ObjectColumnSelector makeObjectColumnSelector(String columnName);
}

View File

@ -21,7 +21,7 @@ package io.druid.segment;import org.joda.time.DateTime;
/**
*/
public interface Cursor extends ColumnSelectorFactory, DimensionSelectorFactory
public interface Cursor extends ColumnSelectorFactory
{
public DateTime getTime();
public void advance();

View File

@ -1,25 +0,0 @@
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.segment;/**
*/
public interface DimensionSelectorFactory
{
public DimensionSelector makeDimensionSelector(String dimensionName);
}

View File

@ -42,6 +42,7 @@ import io.druid.query.aggregation.Aggregator;
import io.druid.query.aggregation.AggregatorFactory;
import io.druid.query.aggregation.PostAggregator;
import io.druid.segment.ColumnSelectorFactory;
import io.druid.segment.DimensionSelector;
import io.druid.segment.FloatColumnSelector;
import io.druid.segment.ObjectColumnSelector;
import io.druid.segment.serde.ComplexMetricExtractor;
@ -256,6 +257,12 @@ public class IncrementalIndex implements Iterable<Row>
}
};
}
@Override
public DimensionSelector makeDimensionSelector(String dimension) {
// we should implement this, but this is going to be rewritten soon anyways
throw new UnsupportedOperationException("Incremental index aggregation does not support dimension selectors");
}
}
);
}

View File

@ -133,6 +133,11 @@ public class SpatialDimensionRowFormatter
return (retVal == null) ? row.getDimension(dimension) : retVal;
}
@Override
public Object getRaw(String dimension) {
return row.getRaw(dimension);
}
@Override
public float getFloatMetric(String metric)
{

View File

@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -18,8 +18,7 @@
~ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.druid</groupId>
<artifactId>druid-server</artifactId>
@ -29,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -209,6 +209,12 @@ class WikipediaIrcDecoder implements IrcDecoder
}
}
@Override
public Object getRaw(String dimension) {
return dimensions.get(dimension);
}
@Override
public float getFloatMetric(String metric)
{

View File

@ -150,6 +150,12 @@ public class RealtimeManagerTest
{
return 0;
}
@Override
public Object getRaw(String dimension)
{
return null;
}
};
}

View File

@ -80,6 +80,12 @@ public class SinkTest
{
return 0;
}
@Override
public Object getRaw(String dimension)
{
return null;
}
});
FireHydrant currHydrant = sink.getCurrIndex();
@ -113,6 +119,12 @@ public class SinkTest
{
return 0;
}
@Override
public Object getRaw(String dimension)
{
return null;
}
});
Assert.assertEquals(currHydrant, swapHydrant);

View File

@ -17,18 +17,17 @@
~ along with this program; if not, write to the Free Software
~ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.druid</groupId>
<artifactId>druid-services</artifactId>
<name>druid-services</name>
<description>druid-services</description>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.0-SNAPSHOT</version>
<version>0.6.1-SNAPSHOT</version>
</parent>
<dependencies>

View File

@ -26,6 +26,7 @@ import com.google.inject.Provides;
import com.google.inject.util.Providers;
import com.metamx.common.logger.Logger;
import io.airlift.command.Command;
import io.druid.guice.IndexingServiceFirehoseModule;
import io.druid.guice.IndexingServiceModuleHelper;
import io.druid.guice.Jerseys;
import io.druid.guice.JsonConfigProvider;
@ -101,7 +102,8 @@ public class CliMiddleManager extends ServerRunnable
config.getVersion()
);
}
}
},
new IndexingServiceFirehoseModule()
);
}
}

View File

@ -30,6 +30,7 @@ import com.google.inject.servlet.GuiceFilter;
import com.google.inject.util.Providers;
import com.metamx.common.logger.Logger;
import io.airlift.command.Command;
import io.druid.guice.IndexingServiceFirehoseModule;
import io.druid.guice.IndexingServiceModuleHelper;
import io.druid.guice.JacksonConfigProvider;
import io.druid.guice.Jerseys;
@ -206,7 +207,8 @@ public class CliOverlord extends ServerRunnable
JsonConfigProvider.bind(binder, "druid.indexer.autoscale", SimpleResourceManagementConfig.class);
}
}
},
new IndexingServiceFirehoseModule()
);
}

View File

@ -19,13 +19,12 @@
package io.druid.cli;
import com.fasterxml.jackson.databind.jsontype.NamedType;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.google.common.base.Throwables;
import com.google.common.collect.ImmutableList;
import com.google.inject.Binder;
import com.google.inject.Injector;
import com.google.inject.Key;
import com.google.inject.Module;
import com.google.inject.multibindings.MapBinder;
import com.metamx.common.lifecycle.Lifecycle;
import com.metamx.common.logger.Logger;
@ -33,6 +32,7 @@ import io.airlift.command.Arguments;
import io.airlift.command.Command;
import io.airlift.command.Option;
import io.druid.guice.Binders;
import io.druid.guice.IndexingServiceFirehoseModule;
import io.druid.guice.Jerseys;
import io.druid.guice.JsonConfigProvider;
import io.druid.guice.LazySingleton;
@ -49,7 +49,6 @@ import io.druid.indexing.common.actions.TaskActionClientFactory;
import io.druid.indexing.common.actions.TaskActionToolbox;
import io.druid.indexing.common.config.TaskConfig;
import io.druid.indexing.common.index.ChatHandlerProvider;
import io.druid.indexing.common.index.EventReceiverFirehoseFactory;
import io.druid.indexing.common.index.NoopChatHandlerProvider;
import io.druid.indexing.common.index.ServiceAnnouncingChatHandlerProvider;
import io.druid.indexing.overlord.HeapMemoryTaskStorage;
@ -100,7 +99,7 @@ public class CliPeon extends GuiceRunnable
protected List<Object> getModules()
{
return ImmutableList.<Object>of(
new DruidModule()
new Module()
{
@Override
public void configure(Binder binder)
@ -179,16 +178,8 @@ public class CliPeon extends GuiceRunnable
.to(RemoteTaskActionClientFactory.class).in(LazySingleton.class);
}
@Override
public List<? extends com.fasterxml.jackson.databind.Module> getJacksonModules()
{
return Arrays.asList(
new SimpleModule("PeonModule")
.registerSubtypes(new NamedType(EventReceiverFirehoseFactory.class, "receiver"))
);
}
}
},
new IndexingServiceFirehoseModule()
);
}