Merge remote-tracking branch 'upstream/master' into rabbitmq-lyra

Stefán Freyr Stefánsson 2013-11-20 16:25:53 +00:00
commit 6aafba4393
40 changed files with 60 additions and 87 deletions

@@ -30,4 +30,4 @@ echo "For examples, see: "
echo " "
ls -1 examples/*/*sh
echo " "
-echo "See also http://druid.io/docs/0.6.19"
+echo "See also http://druid.io/docs/0.6.20"

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -3,7 +3,7 @@ layout: doc_page
---
# Booting a Single Node Cluster #
-[Loading Your Data](Tutorial%3A-Loading-Your-Data-Part-2.html) and [All About Queries](Tutorial%3A-All-About-Queries.html) contain recipes to boot a small druid cluster on localhost. Here we will boot a small cluster on EC2. You can checkout the code, or download a tarball from [here](http://static.druid.io/artifacts/druid-services-0.6.19-bin.tar.gz).
+[Loading Your Data](Tutorial%3A-Loading-Your-Data-Part-2.html) and [All About Queries](Tutorial%3A-All-About-Queries.html) contain recipes to boot a small druid cluster on localhost. Here we will boot a small cluster on EC2. You can checkout the code, or download a tarball from [here](http://static.druid.io/artifacts/druid-services-0.6.20-bin.tar.gz).
The [ec2 run script](https://github.com/metamx/druid/blob/master/examples/bin/run_ec2.sh), run_ec2.sh, is located at 'examples/bin' if you have checked out the code, or at the root of the project if you've downloaded a tarball. The scripts rely on the [Amazon EC2 API Tools](http://aws.amazon.com/developertools/351), and you will need to set three environment variables:

@@ -1,8 +0,0 @@
----
-layout: doc_page
----
-If you are interested in contributing to the code, we accept [pull requests](https://help.github.com/articles/using-pull-requests). Note: we have only just completed decoupling our Metamarkets-specific code from the code base and we took some short-cuts in interface design to make it happen. So, there are a number of interfaces that exist right now which are likely to be in flux. If you are embedding Druid in your system, it will be safest for the time being to only extend/implement interfaces that this wiki describes, as those are intended as stable (unless otherwise mentioned).
-For issue tracking, we are using the github issue tracker. Please fill out an issue from the Issues tab on the github screen.
-We also have a [Libraries](Libraries.html) page that lists external libraries that people have created for working with Druid.

@@ -1,14 +0,0 @@
----
-layout: doc_page
----
-A version may be declared as a release candidate if it has been deployed to a sizable production cluster. Release candidates are declared as stable after we feel fairly confident there are no major bugs in the version. Check out the [Versioning](Versioning.html) section for how we describe software versions.
-Release Candidate
------------------
-The current release candidate is tagged at version [0.6.19](https://github.com/metamx/druid/tree/druid-0.6.19).
-Stable Release
---------------
-The current stable is tagged at version [0.5.49](https://github.com/metamx/druid/tree/druid-0.5.49).

@@ -19,13 +19,13 @@ Clone Druid and build it:
git clone https://github.com/metamx/druid.git druid
cd druid
git fetch --tags
-git checkout druid-0.6.19
+git checkout druid-0.6.20
./build.sh
```
### Downloading the DSK (Druid Standalone Kit)
-[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.19-bin.tar.gz) a stand-alone tarball and run it:
+[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.20-bin.tar.gz) a stand-alone tarball and run it:
``` bash
tar -xzf druid-services-0.X.X-bin.tar.gz

@@ -27,6 +27,9 @@ druid.host=localhost
druid.service=realtime
druid.port=8083
+druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.20"]
druid.zk.service.host=localhost
druid.db.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid
@@ -187,8 +190,8 @@ Extending the code
Realtime integration is intended to be extended in two ways:
-1. Connect to data streams from varied systems ([Firehose](https://github.com/metamx/druid/blob/druid-0.6.19/realtime/src/main/java/com/metamx/druid/realtime/firehose/FirehoseFactory.java))
-2. Adjust the publishing strategy to match your needs ([Plumber](https://github.com/metamx/druid/blob/druid-0.6.19/realtime/src/main/java/com/metamx/druid/realtime/plumber/PlumberSchool.java))
+1. Connect to data streams from varied systems ([Firehose](https://github.com/druid-io/druid-api/blob/master/src/main/java/io/druid/data/input/FirehoseFactory.java))
+2. Adjust the publishing strategy to match your needs ([Plumber](https://github.com/metamx/druid/blob/master/server/src/main/java/io/druid/segment/realtime/plumber/PlumberSchool.java))
The expectations are that the former will be very common and something that users of Druid will do on a fairly regular basis. Most users will probably never have to deal with the latter form of customization. Indeed, we hope that all potential use cases can be packaged up as part of Druid proper without requiring proprietary customization.
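
As a side note on the first extension point: a data-stream connector is conceptually just an iterator over incoming events. The sketch below is purely illustrative — the interface and class names are hypothetical and are not the real Druid Firehose/FirehoseFactory API.

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified stand-in for a firehose-style connector; not the real Druid interface.
interface EventStream
{
  boolean hasMore();                // is another event available right now?
  Map<String, Object> nextEvent();  // the next raw event as key/value pairs
}

// A trivial connector that replays a fixed list of events, e.g. for tests.
class InMemoryEventStream implements EventStream
{
  private final Iterator<Map<String, Object>> events;

  InMemoryEventStream(List<Map<String, Object>> events)
  {
    this.events = events.iterator();
  }

  @Override
  public boolean hasMore()
  {
    return events.hasNext();
  }

  @Override
  public Map<String, Object> nextEvent()
  {
    return events.next();
  }
}
```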

@@ -1,16 +0,0 @@
----
-layout: doc_page
----
-Numerous backend engineers at [Metamarkets](http://www.metamarkets.com) and other companies work on Druid full-time. If you any questions about usage or code, feel free to contact any of us.
-Google Groups Mailing List
---------------------------
-The best place for questions is through our mailing list:
-[druid-development@googlegroups.com](mailto:druid-development@googlegroups.com)
-[https://groups.google.com/d/forum/druid-development](https://groups.google.com/d/forum/druid-development)
-IRC
----
-Several of us also hang out in the channel \#druid-dev on irc.freenode.net.

@@ -47,7 +47,7 @@ There are two ways to setup Druid: download a tarball, or [Build From Source](Bu
### Download a Tarball
-We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.19-bin.tar.gz). Download this file to a directory of your choosing.
+We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.20-bin.tar.gz). Download this file to a directory of your choosing.
You can extract the awesomeness within by issuing:
@@ -58,7 +58,7 @@ tar -zxvf druid-services-*-bin.tar.gz
Not too lost so far right? That's great! If you cd into the directory:
```
-cd druid-services-0.6.19
+cd druid-services-0.6.20
```
You should see a bunch of files:

@@ -42,7 +42,7 @@ With real-world data, we recommend having a message bus such as [Apache Kafka](h
#### Setting up Kafka
-[KafkaFirehoseFactory](https://github.com/metamx/druid/blob/druid-0.6.19/realtime/src/main/java/com/metamx/druid/realtime/firehose/KafkaFirehoseFactory.java) is how druid communicates with Kafka. Using this [Firehose](Firehose.html) with the right configuration, we can import data into Druid in real-time without writing any code. To load data to a real-time node via Kafka, we'll first need to initialize Zookeeper and Kafka, and then configure and initialize a [Realtime](Realtime.html) node.
+[KafkaFirehoseFactory](https://github.com/metamx/druid/blob/druid-0.6.20/realtime/src/main/java/com/metamx/druid/realtime/firehose/KafkaFirehoseFactory.java) is how druid communicates with Kafka. Using this [Firehose](Firehose.html) with the right configuration, we can import data into Druid in real-time without writing any code. To load data to a real-time node via Kafka, we'll first need to initialize Zookeeper and Kafka, and then configure and initialize a [Realtime](Realtime.html) node.
Instructions for booting a Zookeeper and then Kafka cluster are available [here](http://kafka.apache.org/07/quickstart.html).

@@ -11,7 +11,7 @@ In this tutorial, we will set up other types of Druid nodes as well as and exter
If you followed the first tutorial, you should already have Druid downloaded. If not, let's go back and do that first.
-You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-services-0.6.19-bin.tar.gz)
+You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-services-0.6.20-bin.tar.gz)
and untar the contents within by issuing:
@@ -147,7 +147,7 @@ druid.port=8081
druid.zk.service.host=localhost
-druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.19"]
+druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.20"]
# Dummy read only AWS account (used to download example data)
druid.s3.secretKey=QyyfVZ7llSiRg6Qcrql1eEUG7buFpAK6T6engr1b
@@ -237,7 +237,7 @@ druid.port=8083
druid.zk.service.host=localhost
-druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.19-SNAPSHOT"]
+druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.20-SNAPSHOT"]
druid.db.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid
druid.db.connector.user=druid

@@ -37,7 +37,7 @@ There are two ways to setup Druid: download a tarball, or [Build From Source](Bu
h3. Download a Tarball
-We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.19-bin.tar.gz)
+We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.20-bin.tar.gz)
Download this file to a directory of your choosing.
You can extract the awesomeness within by issuing:
@@ -48,7 +48,7 @@ tar zxvf druid-services-*-bin.tar.gz
Not too lost so far right? That's great! If you cd into the directory:
```
-cd druid-services-0.6.19
+cd druid-services-0.6.20
```
You should see a bunch of files:

@@ -9,7 +9,7 @@ There are two ways to setup Druid: download a tarball, or build it from source.
h3. Download a Tarball
-We've built a tarball that contains everything you'll need. You'll find it "here":http://static.druid.io/artifacts/releases/druid-services-0.6.19-bin.tar.gz.
+We've built a tarball that contains everything you'll need. You'll find it "here":http://static.druid.io/artifacts/releases/druid-services-0.6.20-bin.tar.gz.
Download this bad boy to a directory of your choosing.
You can extract the awesomeness within by issuing:

@@ -5,9 +5,7 @@
h1. Contents
* "Introduction":./
-* "Download":./Download.html
-* "Support":./Support.html
-* "Contribute":./Contribute.html
+* "Concepts and Terminology":./Concepts-and-Terminology.html
h2. Getting Started
* "Tutorial: A First Look at Druid":./Tutorial:-A-First-Look-at-Druid.html
@@ -23,6 +21,9 @@ h2. Evaluate Druid
h2. Configuration
* "Configuration":Configuration.html
+h2. Extend Druid
+* "Modules":./Modules.html
h2. Data Ingestion
* "Realtime":./Realtime.html
* "Batch":./Batch-ingestion.html
@@ -57,12 +58,10 @@ h2. Architecture
*** "Firehose":./Firehose.html
*** "Plumber":./Plumber.html
** "Indexing Service":./Indexing-Service.html
-* "Modules":./Modules.html
* External Dependencies
** "Deep Storage":./Deep-Storage.html
** "MySQL":./MySQL.html
** "ZooKeeper":./ZooKeeper.html
-* "Concepts and Terminology":./Concepts-and-Terminology.html
h2. Development
* "Versioning":./Versioning.html

@@ -4,7 +4,7 @@ druid.port=8081
druid.zk.service.host=localhost
-druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.19"]
+druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.20"]
# Dummy read only AWS account (used to download example data)
druid.s3.secretKey=QyyfVZ7llSiRg6Qcrql1eEUG7buFpAK6T6engr1b

@@ -4,7 +4,7 @@ druid.port=8083
druid.zk.service.host=localhost
-druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.19","io.druid.extensions:druid-kafka-seven:0.6.19"]
+druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.20","io.druid.extensions:druid-kafka-seven:0.6.20"]
druid.db.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid
druid.db.connector.user=druid

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -23,7 +23,7 @@
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<packaging>pom</packaging>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
<name>druid</name>
<description>druid</description>
<scm>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -42,6 +42,7 @@ import com.metamx.emitter.EmittingLogger;
import io.druid.client.cache.Cache;
import io.druid.client.selector.QueryableDruidServer;
import io.druid.client.selector.ServerSelector;
+import io.druid.guice.annotations.Smile;
import io.druid.query.BySegmentResultValueClass;
import io.druid.query.CacheStrategy;
import io.druid.query.Query;
@@ -86,7 +87,7 @@ public class CachingClusteredClient<T> implements QueryRunner<T>
QueryToolChestWarehouse warehouse,
TimelineServerView serverView,
Cache cache,
-ObjectMapper objectMapper
+@Smile ObjectMapper objectMapper
)
{
this.warehouse = warehouse;
@@ -276,7 +277,7 @@ public class CachingClusteredClient<T> implements QueryRunner<T>
}
return objectMapper.readValues(
-objectMapper.getJsonFactory().createJsonParser(cachedResult),
+objectMapper.getFactory().createParser(cachedResult),
cacheObjectClazz
);
}
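
The last hunk above swaps the deprecated Jackson 1-style accessors for their Jackson 2 equivalents (`getFactory()` / `createParser`). A minimal, self-contained sketch of the same read pattern — the cached byte array and the plain `Map` binding are made up for illustration:

```java
import com.fasterxml.jackson.databind.MappingIterator;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class CachedResultReader
{
  public static void main(String[] args) throws IOException
  {
    ObjectMapper mapper = new ObjectMapper();
    // Pretend this came out of the cache: several JSON values back to back.
    byte[] cachedResult = "{\"a\":1}{\"a\":2}".getBytes(StandardCharsets.UTF_8);

    // Jackson 2 style: getFactory().createParser(...) replaces the deprecated
    // getJsonFactory().createJsonParser(...).
    MappingIterator<Map> values = mapper.readValues(
        mapper.getFactory().createParser(cachedResult),
        Map.class
    );
    while (values.hasNext()) {
      System.out.println(values.next());
    }
  }
}
```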

@@ -161,7 +161,6 @@ public class JettyServerModule extends JerseyServletModule
connector.setPort(node.getPort());
connector.setMaxIdleTime(Ints.checkedCast(config.getMaxIdleTime().toStandardDuration().getMillis()));
connector.setStatsOn(true);
-connector.setAcceptors(config.getNumThreads());
server.setConnectors(new Connector[]{connector});

@@ -31,7 +31,7 @@ public class ServerConfig
{
@JsonProperty
@Min(1)
-private int numThreads = 10;
+private int numThreads = Math.max(10, Runtime.getRuntime().availableProcessors() + 1);
@JsonProperty
@NotNull
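
The new default ties the thread count to the number of available cores rather than a flat 10. The computation itself is plain JDK; a tiny standalone illustration (class name chosen for the example):

```java
public class DefaultThreadCount
{
  public static void main(String[] args)
  {
    int cores = Runtime.getRuntime().availableProcessors();
    // Keep the old floor of 10, but scale past it on machines with many cores.
    int numThreads = Math.max(10, cores + 1);
    System.out.println("cores=" + cores + " -> numThreads=" + numThreads);
  }
}
```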

@@ -47,9 +47,14 @@ public class ServerMonitor extends AbstractMonitor
public boolean doMonitor(ServiceEmitter emitter)
{
emitter.emit(new ServiceMetricEvent.Builder().build("server/segment/max", serverConfig.getMaxSize()));
+long totalUsed = 0;
+long totalCount = 0;
for (Map.Entry<String, Long> entry : serverManager.getDataSourceSizes().entrySet()) {
String dataSource = entry.getKey();
long used = entry.getValue();
+totalUsed += used;
final ServiceMetricEvent.Builder builder = new ServiceMetricEvent.Builder().setUser1(dataSource)
.setUser2(serverConfig.getTier());
@@ -60,12 +65,18 @@ public class ServerMonitor extends AbstractMonitor
for (Map.Entry<String, Long> entry : serverManager.getDataSourceCounts().entrySet()) {
String dataSource = entry.getKey();
long count = entry.getValue();
+totalCount += count;
final ServiceMetricEvent.Builder builder = new ServiceMetricEvent.Builder().setUser1(dataSource)
.setUser2(serverConfig.getTier());
emitter.emit(builder.build("server/segment/count", count));
}
+final ServiceMetricEvent.Builder builder = new ServiceMetricEvent.Builder().setUser2(serverConfig.getTier());
+emitter.emit(builder.build("server/segment/totalUsed", totalUsed));
+emitter.emit(builder.build("server/segment/totalUsedPercent", totalUsed / (double) serverConfig.getMaxSize()));
+emitter.emit(builder.build("server/segment/totalCount", totalCount));
return true;
}
}
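
The monitor now folds the per-datasource sizes and counts into cluster-wide totals and emits them as additional metrics. The aggregation is an ordinary sum over a map; a generic sketch with a stand-in emitter (the real code uses ServiceEmitter/ServiceMetricEvent, and the datasource values here are invented):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SegmentTotals
{
  public static void main(String[] args)
  {
    // Hypothetical per-datasource segment sizes, standing in for serverManager.getDataSourceSizes().
    Map<String, Long> dataSourceSizes = new LinkedHashMap<String, Long>();
    dataSourceSizes.put("wikipedia", 1000L);
    dataSourceSizes.put("twitter", 3000L);
    long maxSize = 10000L; // stand-in for serverConfig.getMaxSize()

    long totalUsed = 0;
    for (Map.Entry<String, Long> entry : dataSourceSizes.entrySet()) {
      long used = entry.getValue();
      totalUsed += used;
      emit("server/segment/used[" + entry.getKey() + "]", used);
    }
    emit("server/segment/totalUsed", totalUsed);
    emit("server/segment/totalUsedPercent", totalUsed / (double) maxSize);
  }

  // Stand-in for emitter.emit(builder.build(metric, value)).
  private static void emit(String metric, Number value)
  {
    System.out.println(metric + "=" + value);
  }
}
```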

@@ -27,7 +27,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.20-SNAPSHOT</version>
+<version>0.6.21-SNAPSHOT</version>
</parent>
<dependencies>

@@ -52,7 +52,7 @@ import java.util.List;
*/
@Command(
name = "broker",
-description = "Runs a broker node, see https://github.com/metamx/druid/wiki/Broker for a description"
+description = "Runs a broker node, see http://druid.io/docs/0.6.20/Broker.html for a description"
)
public class CliBroker extends ServerRunnable
{

@@ -63,7 +63,7 @@ import java.util.List;
*/
@Command(
name = "coordinator",
-description = "Runs the Coordinator, see http://druid.io/docs/0.6.19/Coordinator.html for a description."
+description = "Runs the Coordinator, see http://druid.io/docs/0.6.20/Coordinator.html for a description."
)
public class CliCoordinator extends ServerRunnable
{

@@ -41,7 +41,7 @@ import java.util.List;
*/
@Command(
name = "hadoop",
-description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/0.6.19/Batch-ingestion.html for a description."
+description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/0.6.20/Batch-ingestion.html for a description."
)
public class CliHadoopIndexer implements Runnable
{

@@ -32,8 +32,6 @@ import io.druid.query.QuerySegmentWalker;
import io.druid.server.coordination.ServerManager;
import io.druid.server.coordination.ZkCoordinator;
import io.druid.server.initialization.JettyServerInitializer;
-import io.druid.server.metrics.MetricsModule;
-import io.druid.server.metrics.ServerMonitor;
import org.eclipse.jetty.server.Server;
import java.util.List;
@@ -42,7 +40,7 @@ import java.util.List;
*/
@Command(
name = "historical",
-description = "Runs a Historical node, see http://druid.io/docs/0.6.19/Historical.html for a description"
+description = "Runs a Historical node, see http://druid.io/docs/0.6.20/Historical.html for a description"
)
public class CliHistorical extends ServerRunnable
{
@@ -66,7 +64,6 @@ public class CliHistorical extends ServerRunnable
binder.bind(ZkCoordinator.class).in(ManageLifecycle.class);
binder.bind(QuerySegmentWalker.class).to(ServerManager.class).in(LazySingleton.class);
-MetricsModule.register(binder, ServerMonitor.class);
binder.bind(NodeTypeConfig.class).toInstance(new NodeTypeConfig("historical"));
binder.bind(JettyServerInitializer.class).to(QueryJettyServerInitializer.class).in(LazySingleton.class);

@@ -93,7 +93,7 @@ import java.util.List;
*/
@Command(
name = "overlord",
-description = "Runs an Overlord node, see https://github.com/metamx/druid/wiki/Indexing-Service for a description"
+description = "Runs an Overlord node, see http://druid.io/docs/0.6.20/Indexing-Service.html for a description"
)
public class CliOverlord extends ServerRunnable
{

@@ -30,7 +30,7 @@ import java.util.List;
*/
@Command(
name = "realtime",
-description = "Runs a realtime node, see https://github.com/metamx/druid/wiki/Realtime for a description"
+description = "Runs a realtime node, see http://druid.io/docs/0.6.20/Realtime.html for a description"
)
public class CliRealtime extends ServerRunnable
{

@@ -42,7 +42,7 @@ import java.util.concurrent.Executor;
*/
@Command(
name = "realtime",
-description = "Runs a standalone realtime node for examples, see https://github.com/metamx/druid/wiki/Realtime for a description"
+description = "Runs a standalone realtime node for examples, see http://druid.io/docs/0.6.20/Realtime.html for a description"
)
public class CliRealtimeExample extends ServerRunnable
{

@@ -60,6 +60,7 @@ public class ConvertProperties implements Runnable
new Rename("druid.database.password", "druid.db.connector.password"),
new Rename("druid.database.poll.duration", "druid.manager.segment.pollDuration"),
new Rename("druid.database.password", "druid.db.connector.password"),
+new Rename("druid.database.validation", "druid.db.connector.useValidationQuery"),
new Rename("com.metamx.emitter", "druid.emitter"),
new Rename("com.metamx.emitter.logging", "druid.emitter.logging"),
new Rename("com.metamx.emitter.logging.level", "druid.emitter.logging.logLevel"),
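
ConvertProperties is, at heart, a table of old-name → new-name rules applied to a properties file; the added entry maps the legacy validation flag to `druid.db.connector.useValidationQuery`. A generic sketch of that rename pattern using plain `java.util.Properties` — the class name, sample values, and rule subset here are illustrative only:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class PropertyRenamer
{
  public static void main(String[] args)
  {
    // Old-name -> new-name rules, in the spirit of the Rename entries above.
    Map<String, String> renames = new LinkedHashMap<String, String>();
    renames.put("druid.database.password", "druid.db.connector.password");
    renames.put("druid.database.validation", "druid.db.connector.useValidationQuery");

    Properties old = new Properties();
    old.setProperty("druid.database.password", "diurd");
    old.setProperty("druid.database.validation", "true");

    Properties converted = new Properties();
    for (String name : old.stringPropertyNames()) {
      // Leave the property name untouched if no rule matches.
      String newName = renames.containsKey(name) ? renames.get(name) : name;
      converted.setProperty(newName, old.getProperty(name));
    }
    System.out.println(converted);
  }
}
```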