mirror of https://github.com/apache/druid.git
Merge branch 'guice' of github.com:metamx/druid into guice
commit 02b86a6fc7
@ -1,3 +1,6 @@
---
layout: default
---
Aggregations are specifications of processing over metrics available in Druid.

Available aggregations are:
@ -13,7 +16,7 @@ computes the sum of values as a 64-bit, signed integer
"fieldName" : <metric_name>
}</code>

`name` – output name for the summed value
`fieldName` – name of the metric column to sum over

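Pieced together, the complete longSum aggregator that this snippet closes would be a sketch like the following (angle-bracketed values are placeholders):

<code>{
  "type" : "longSum",
  "name" : <output_name>,
  "fieldName" : <metric_name>
}</code>
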
#### `doubleSum` aggregator

@ -84,4 +87,4 @@ All JavaScript functions must return numerical values.
"fnAggregate" : "function(current, a, b) { return current + (Math.log(a) * b); }"
"fnCombine" : "function(partialA, partialB) { return partialA + partialB; }"
"fnReset" : "function() { return 10; }"
}</code>
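
Putting the pieces together, a complete JavaScript aggregator spec might look like the following sketch (the aggregator name and field list are illustrative assumptions, not taken from the original example):

<code>{
  "type" : "javascript",
  "name" : "sum_log_scaled",
  "fieldNames" : ["a", "b"],
  "fnAggregate" : "function(current, a, b) { return current + (Math.log(a) * b); }",
  "fnCombine" : "function(partialA, partialB) { return partialA + partialB; }",
  "fnReset" : "function() { return 10; }"
}</code>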
@ -1,14 +1,17 @@
---
layout: default
---
Batch Data Ingestion
====================

There are two choices for batch data ingestion to your Druid cluster: you can use the [Indexing service](Indexing-service.html) or you can use the `HadoopDruidIndexerMain`. This page describes how to use the `HadoopDruidIndexerMain`.

Which should I use?
-------------------

The [Indexing service](Indexing-service.html) is a node that can run as part of your Druid cluster and can accomplish a number of different types of indexing tasks. Even if all you care about is batch indexing, it encapsulates things like the database used for segment metadata, so that your indexing tasks do not need to include such information. Long-term, the indexing service is going to be the preferred method of ingesting data.

The `HadoopDruidIndexerMain` runs Hadoop jobs in order to separate and index data segments. It takes advantage of Hadoop as a job scheduling and distributed job execution platform. It is a simple method if you already have Hadoop running and don’t want to spend the time configuring and deploying the [Indexing service](Indexing-service.html) just yet.

HadoopDruidIndexer
------------------
@ -135,4 +138,4 @@ This is a specification of the properties that tell the job how to update metada
|password|password for db|yes|
|segmentTable|table to use in DB|yes|

These properties should match what you have configured for your [Master](Master.html).

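As a sketch, the metadata-update portion of the spec corresponding to this table might look like the following (the `updaterJobSpec` key, the `connectURI`, and all values are illustrative assumptions, not taken verbatim from the job spec reference):

<code>"updaterJobSpec" : {
  "type" : "db",
  "connectURI" : "jdbc:mysql://localhost:3306/druid",
  "user" : "druid_user",
  "password" : "some_password",
  "segmentTable" : "prod_segments"
}</code>
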
@ -1,6 +1,9 @@
---
layout: default
---
# Booting a Single Node Cluster #

[Loading Your Data](Loading-Your-Data.html) and [Querying Your Data](Querying-Your-Data.html) contain recipes to boot a small Druid cluster on localhost. Here we will boot a small cluster on EC2. You can check out the code, or download a tarball from [here](http://static.druid.io/artifacts/druid-services-0.5.51-SNAPSHOT-bin.tar.gz).

The [ec2 run script](https://github.com/metamx/druid/blob/master/examples/bin/run_ec2.sh), run_ec2.sh, is located at `examples/bin` if you have checked out the code, or at the root of the project if you've downloaded a tarball. The script relies on the [Amazon EC2 API Tools](http://aws.amazon.com/developertools/351), and you will need to set three environment variables:

@ -1,3 +1,6 @@
---
layout: default
---
Broker
======

@ -6,9 +9,9 @@ The Broker is the node to route queries to if you want to run a distributed clus
Forwarding Queries
------------------

Most Druid queries contain an interval object that indicates a span of time for which data is requested. Likewise, Druid [Segments](Segments.html) are partitioned to contain data for some interval of time, and segments are distributed across a cluster. Consider a simple datasource with 7 segments where each segment contains data for a given day of the week. Any query issued to the datasource for more than one day of data will hit more than one segment. These segments will likely be distributed across multiple nodes, and hence, the query will likely hit multiple nodes.

To determine which nodes to forward queries to, the Broker node first builds a view of the world from information in Zookeeper. Zookeeper maintains information about [Compute](Compute.html) and [Realtime](Realtime.html) nodes and the segments they are serving. For every datasource in Zookeeper, the Broker node builds a timeline of segments and the nodes that serve them. When queries are received for a specific datasource and interval, the Broker node performs a lookup into the timeline associated with the query datasource for the query interval and retrieves the nodes that contain data for the query. The Broker node then forwards the query down to the selected nodes.

Caching
-------
@ -24,4 +27,4 @@ Broker nodes can be run using the `com.metamx.druid.http.BrokerMain` class.
Configuration
-------------

See [Configuration](Configuration.html).
@ -1,3 +1,6 @@
---
layout: default
---
### Clone and Build from Source

The other way to set up Druid is from source via git. To do so, run these commands:
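
A sketch of such a sequence, assuming the metamx repository referenced elsewhere in these docs and a build script at the repository root (the script name is an assumption):

<code>
git clone https://github.com/metamx/druid.git
cd druid
./build.sh
</code>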
@ -1,4 +1,7 @@
---
layout: default
---
A Druid cluster consists of various node types that need to be set up depending on your use case. See our [Design](Design.html) docs for a description of the different node types.

Setup Scripts
-------------
@ -8,14 +11,14 @@ One of our community members, [housejester](https://github.com/housejester/), co
Minimum Physical Layout: Absolute Minimum
-----------------------------------------

As a special case, the absolute minimum setup is one of the standalone examples for realtime ingestion and querying (see [Examples](Examples.html)), which can easily run on one machine with one core and 1GB RAM. This layout can be set up to try some basic queries with Druid.

Minimum Physical Layout: Experimental Testing with 4GB of RAM
-------------------------------------------------------------

This layout can be used to load some data from deep storage onto a Druid compute node for the first time. A minimal physical layout for a 1 or 2 core machine with 4GB of RAM is:

1. node1: [Master](Master.html) + metadata service + zookeeper + [Compute](Compute.html)
2. transient nodes: indexer

This setup is only reasonable to prove that a configuration works. It would not be worthwhile to use this layout for performance measurement.
@ -27,13 +30,13 @@ Comfortable Physical Layout: Pilot Project with Multiple Machines

A minimal physical layout, not constrained by cores, that demonstrates parallel querying and realtime ingestion, using AWS-EC2 “small”/m1.small (one core, with 1.7GB of RAM) or larger, is:

1. node1: [Master](Master.html) (m1.small)
2. node2: metadata service (m1.small)
3. node3: zookeeper (m1.small)
4. node4: [Broker](Broker.html) (m1.small or m1.medium or m1.large)
5. node5: [Compute](Compute.html) (m1.small or m1.medium or m1.large)
6. node6: [Compute](Compute.html) (m1.small or m1.medium or m1.large)
7. node7: [Realtime](Realtime.html) (m1.small or m1.medium or m1.large)
8. transient nodes: indexer

This layout naturally lends itself to adding more RAM and cores to Compute nodes, and to adding many more Compute nodes. Depending on the actual load, the Master, metadata server, and Zookeeper might need to use larger machines.
@ -45,18 +48,18 @@ High Availability Physical Layout

An HA layout allows full rolling restarts and heavy volume:

1. node1: [Master](Master.html) (m1.small or m1.medium or m1.large)
2. node2: [Master](Master.html) (m1.small or m1.medium or m1.large) (backup)
3. node3: metadata service (c1.medium or m1.large)
4. node4: metadata service (c1.medium or m1.large) (backup)
5. node5: zookeeper (c1.medium)
6. node6: zookeeper (c1.medium)
7. node7: zookeeper (c1.medium)
8. node8: [Broker](Broker.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge)
9. node9: [Broker](Broker.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge) (backup)
10. node10: [Compute](Compute.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge)
11. node11: [Compute](Compute.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge)
12. node12: [Realtime](Realtime.html) (m1.small or m1.medium or m1.large or m2.xlarge or m2.2xlarge or m2.4xlarge)
13. transient nodes: indexer

Sizing for Cores and RAM
@ -76,7 +79,7 @@ Local disk (“ephemeral” on AWS EC2) for caching is recommended over network
Setup
-----

Setting up a cluster is essentially just firing up all of the nodes you want with the proper [Configuration](Configuration.html). One thing to be aware of is that there are a few properties in the configuration that potentially need to be set individually for each process:

<code>
druid.server.type=historical|realtime
@ -104,8 +107,8 @@ The following table shows the possible services and fully qualified class for ma

|service|main class|
|-------|----------|
|[Realtime](Realtime.html)|com.metamx.druid.realtime.RealtimeMain|
|[Master](Master.html)|com.metamx.druid.http.MasterMain|
|[Broker](Broker.html)|com.metamx.druid.http.BrokerMain|
|[Compute](Compute.html)|com.metamx.druid.http.ComputeMain|

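As a sketch, starting any of these services amounts to launching its main class with the Druid jar and your configuration directory on the classpath (the jar and config paths here are placeholders, not prescribed names):

<code>
java -cp druid-services-0.5.51-SNAPSHOT-selfcontained.jar:config/compute com.metamx.druid.http.ComputeMain
</code>
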
@ -1,3 +1,6 @@
---
layout: default
---
Compute
=======

@ -8,9 +11,9 @@ Loading and Serving Segments

Each compute node maintains a constant connection to Zookeeper and watches a configurable set of Zookeeper paths for new segment information. Compute nodes do not communicate directly with each other or with the master nodes but instead rely on Zookeeper for coordination.

The [Master](Master.html) node is responsible for assigning new segments to compute nodes. Assignment is done by creating an ephemeral Zookeeper entry under a load queue path associated with a compute node. For more information on how the master assigns segments to compute nodes, please see [Master](Master.html).

When a compute node notices a new load queue entry in its load queue path, it will first check a local disk directory (cache) for the information about the segment. If no information about the segment exists in the cache, the compute node will download metadata about the new segment to serve from Zookeeper. This metadata includes specifications about where the segment is located in deep storage and about how to decompress and process the segment. For more information about segment metadata and Druid segments in general, please see [Segments](Segments.html). Once a compute node completes processing a segment, the segment is announced in Zookeeper under a served segments path associated with the node. At this point, the segment is available for querying.

Loading and Serving Segments From Cache
---------------------------------------
@ -22,7 +25,7 @@ The segment cache is also leveraged when a compute node is first started. On sta
Querying Segments
-----------------

Please see [Querying](Querying.html) for more information on querying compute nodes.

For every query that a compute node services, it will log the query and report metrics on the time taken to run the query.
@ -34,4 +37,4 @@ Compute nodes can be run using the `com.metamx.druid.http.ComputeMain` class.
Configuration
-------------

See [Configuration](Configuration.html).
@ -1,3 +1,6 @@
---
layout: default
---
Concepts and Terminology
========================

@ -9,4 +12,4 @@ Concepts and Terminology

- **Segment:** A collection of (internal) records that are stored and processed together.
- **Shard:** A unit of partitioning data across machines. TODO: clarify; by time or other dimensions?
- **specFile** is a specification for services in JSON format; see [Realtime](Realtime.html) and [Batch-ingestion](Batch-ingestion.html)
@ -1,4 +1,7 @@
---
layout: default
---
This describes the basic server configuration that is loaded by all the server processes; the same file is loaded by all. See also the JSON “specFile” descriptions in [Realtime](Realtime.html) and [Batch-ingestion](Batch-ingestion.html).

JVM Configuration Best Practices
================================
@ -77,7 +80,7 @@ Configuration groupings

### S3 Access

These properties are for connecting with S3 and using it to pull down segments. In the future, we plan on being able to use other deep storage file systems as well, like HDFS. The file system is actually only accessed by the [Compute](Compute.html), [Realtime](Realtime.html) and [Indexing service](Indexing-service.html) nodes.

|Property|Description|Default|
|--------|-----------|-------|
@ -88,7 +91,7 @@ These properties are for connecting with S3 and using it to pull down segments.

### JDBC connection

These properties specify the JDBC connection and other configuration around the “segments table” database. The only processes that connect to the DB with these properties are the [Master](Master.html) and [Indexing service](Indexing-service.html). This is tested on MySQL.

|Property|Description|Default|
|--------|-----------|-------|
@ -110,7 +113,7 @@ These properties specify the jdbc connection and other configuration around the

### Zk properties

See [ZooKeeper](ZooKeeper.html) for a description of these properties.

### Service properties

@ -143,7 +146,7 @@ These are properties that the compute nodes use

### Emitter Properties

The Druid servers emit various metrics and alerts via something we call an [Emitter](Emitter.html). There are two emitter implementations included with the code: one that just logs to log4j, and one that POSTs JSON events to a server. More information can be found on the [Emitter](Emitter.html) page. The properties for using the logging emitter are described below.

|Property|Description|Default|
|--------|-----------|-------|
@ -155,5 +158,5 @@ The Druid servers emit various metrics and alerts via something we call an [[Emi

|Property|Description|Default|
|--------|-----------|-------|
|`druid.realtime.specFile`|The file with realtime specifications in it. See [Realtime](Realtime.html).|none|
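
Tying two of the properties above together, a realtime node’s properties file might contain lines like the following sketch (the file path is a placeholder):

<code>
druid.server.type=realtime
druid.realtime.specFile=/etc/druid/realtime.spec
</code>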
@ -1,5 +1,8 @@
---
layout: default
---
If you are interested in contributing to the code, we accept [pull requests](https://help.github.com/articles/using-pull-requests). Note: we have only just completed decoupling our Metamarkets-specific code from the code base, and we took some shortcuts in interface design to make it happen. So, there are a number of interfaces that exist right now which are likely to be in flux. If you are embedding Druid in your system, it will be safest for the time being to only extend/implement interfaces that this wiki describes, as those are intended to be stable (unless otherwise mentioned).

For issue tracking, we are using the GitHub issue tracker. Please fill out an issue from the Issues tab on the GitHub screen.

We also have a [Libraries](Libraries.html) page that lists external libraries that people have created for working with Druid.
@ -1,3 +1,6 @@
---
layout: default
---
Deep storage is where segments are stored. It is a storage mechanism that Druid does not provide. This deep storage infrastructure defines the level of durability of your data; as long as Druid nodes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose. If segments disappear from this storage layer, then you will lose whatever data those segments represented.

The currently supported types of deep storage follow.
@ -1,3 +1,6 @@
---
layout: default
---
For a comprehensive look at the architecture of Druid, read the [White Paper](http://static.druid.io/docs/druid.pdf).

What is Druid?
@ -50,7 +53,7 @@ Getting data into the Druid system requires an indexing process. This gives the
- Bitmap compression
- RLE (on the roadmap, but not yet implemented)

The output of the indexing process is stored in a “deep storage” LOB store/file system (see [Deep Storage](Deep-Storage.html) for information about potential options). Data is then loaded by compute nodes by first downloading the data to their local disk and then memory-mapping it before serving queries.

If a compute node dies, it will no longer serve its segments, but given that the segments are still available in “deep storage”, any other node can simply download the segment and start serving it. This means that it is possible to remove all compute nodes from the cluster and then re-provision them without any data loss. It also means that if “deep storage” is not available, the nodes can continue to serve the segments they have already pulled down (i.e., the cluster goes stale, not down).
@ -1,4 +1,7 @@
---
layout: default
---
A version may be declared as a release candidate if it has been deployed to a sizable production cluster. Release candidates are declared stable after we feel fairly confident there are no major bugs in the version. Check out the [Versioning](Versioning.html) section for how we describe software versions.

Release Candidate
-----------------
@ -1,6 +1,9 @@
---
layout: default
---
# Druid Personal Demo Cluster (DPDC)

Note: there are currently some issues with the CloudFormation. We are working through them and will update the documentation here when things work properly. In the meantime, the simplest way to get your feet wet with a cluster setup is to run through the instructions at [housejester/druid-test-harness](https://github.com/housejester/druid-test-harness), though it is based on an older version. If you just want to get a feel for the types of data and queries that you can issue, check out [Realtime Examples](Realtime-Examples.html).

## Introduction
To make it easy for you to get started with Druid, we created an AWS (Amazon Web Services) [CloudFormation](http://aws.amazon.com/cloudformation/) Template that allows you to create a small pre-configured Druid cluster using your own AWS account. The cluster contains a pre-loaded sample workload, the Wikipedia edit stream, and a basic query interface that gets you familiar with Druid capabilities like drill-downs and filters.
@ -11,7 +14,7 @@ This guide walks you through the steps to create the cluster and then how to cre

## What’s in this Druid Demo Cluster?

1. A single "Master" node. This node co-locates the [Master](Master.html) process, the [Broker](Broker.html) process, Zookeeper, and the MySQL instance. You can read more about Druid architecture in [Design](Design.html).

1. Three compute nodes; these compute nodes have been pre-configured to work with the Master node and should automatically load up the Wikipedia edit stream data (no specific setup is required).
@ -1,3 +1,6 @@
---
layout: default
---
We are not experts on Cassandra; if anything is incorrect about our portrayal, please let us know on the mailing list or via some other means. We will fix this page.

Druid is highly optimized for scans and aggregations; it supports arbitrarily deep drill-downs into data sets without the need to pre-compute, and it can ingest event streams in real-time and allow users to query events as they come in. Cassandra is a great key-value store, and it has some features that allow you to do more interesting things than what you can do with a pure key-value store. But it is not built for the same use cases that Druid handles, namely regularly scanning over billions of entries per query.
@ -1,3 +1,6 @@
---
layout: default
---
Druid is a complementary addition to Hadoop. Hadoop is great at storing and making accessible large amounts of individually low-value data. Unfortunately, Hadoop is not great at providing query speed guarantees on top of that data, nor does it have very good operational characteristics for a customer-facing production system. Druid, on the other hand, excels at taking high-value summaries of the low-value data on Hadoop, making it available in a fast and always-on fashion, such that it could be exposed directly to a customer.

Druid also requires some infrastructure to exist for “deep storage”. HDFS is one of the implemented options for this “deep storage”.
@ -1,3 +1,6 @@
---
layout: default
---
The question of Druid versus Impala or Shark basically comes down to your product requirements and what the systems were designed to do.

Druid was designed to
@ -17,11 +20,11 @@ What does this mean? We can talk about it in terms of four general areas

## Fault Tolerance

Druid pulls segments down from [Deep Storage](Deep-Storage.html) before serving queries on top of them. This means that for the data to exist in the Druid cluster, it must exist as a local copy on a historical node. If deep storage becomes unavailable for any reason, new segments will not be loaded into the system, but the cluster will continue to operate exactly as it was when the backing store disappeared.

Impala and Shark, on the other hand, pull their data in from HDFS (or some other Hadoop FileSystem) in response to a query. This has implications for the operation of queries if you need to take HDFS down for a bit (say a software upgrade). It's possible that data that has been cached in the nodes is still available when the backing file system goes down, but I'm not sure.

This is just one example, but Druid was built to continue operating in the face of failures of any one of its various pieces. The [Design](Design.html) doc describes these design decisions from the Druid side in more detail.

## Query Speed
@ -1,3 +1,6 @@
---
layout: default
---
### How does Druid compare to Redshift?

In terms of drawing a differentiation, Redshift is essentially ParAccel (Actian), which Amazon is licensing.
@ -1,6 +1,9 @@
---
layout: default
---
How does Druid compare to Vertica?

Vertica is similar to ParAccel/Redshift (see [Druid-vs-Redshift](Druid-vs-Redshift.html)) in that it wasn’t built for real-time streaming data ingestion and it supports full SQL.

The other big difference is that instead of employing indexing, Vertica tries to optimize processing by leveraging run-length encoding (RLE) and other compression techniques along with a “projection” system that creates materialized copies of the data in a different sort order (to maximize the effectiveness of RLE).
@ -1,3 +1,6 @@
---
layout: default
---
Examples
========

@ -31,7 +34,7 @@ Clone Druid and build it:
Twitter Example
---------------

For a full tutorial based on the twitter example, check out this [Twitter Tutorial](Twitter-Tutorial.html).

This example uses a feature of Twitter that allows for sampling of its stream. We sample the Twitter stream via our [TwitterSpritzerFirehoseFactory](https://github.com/metamx/druid/blob/master/examples/src/main/java/druid/examples/twitter/TwitterSpritzerFirehoseFactory.java) class and use it to simulate the kinds of data you might ingest into Druid. Then, with the client part, the sample shows what kinds of analytics explorations you can do during and after the data is loaded.
@ -45,7 +48,7 @@ This Example uses a feature of Twitter that allows for sampling of it’s stream

### What you’ll do

See [Tutorial](Tutorial.html)

Rand Example
------------
@ -1,3 +1,6 @@
---
layout: default
---
A filter is a JSON object indicating which rows of data should be included in the computation for a query. It’s essentially the equivalent of the WHERE clause in SQL. Druid supports the following types of filters.

### Selector filter
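
As a sketch of the shape such filters take, a selector filter that matches rows where a given dimension equals a given value looks like this (angle-bracketed values are placeholders):

<code>{
  "type" : "selector",
  "dimension" : <dimension>,
  "value" : <dimension_value>
}</code>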
@ -1,3 +1,6 @@
---
layout: default
---
Firehoses describe the data stream source. They are pluggable, and thus the configuration schema can and will vary based on the `type` of the firehose.

|Field|Type|Description|Required|
@ -25,11 +28,11 @@ This firehose ingests events from a predefined list of S3 objects.

#### TwitterSpritzerFirehose

See [Examples](Examples.html). This firehose connects directly to the twitter spritzer data stream.

#### RandomFirehose

See [Examples](Examples.html). This firehose creates a stream of random numbers.

#### RabbitMqFirehose
@ -1,3 +1,6 @@
---
layout: default
---
The granularity field determines how data gets bucketed across the time dimension, i.e., how it gets aggregated by hour, day, minute, etc.

It can be specified either as a string for simple granularities or as an object for arbitrary granularities.
@ -8,7 +11,7 @@ Simple granularities are specified as a string and bucket timestamps by their UT

Supported granularity strings are: `all`, `none`, `minute`, `fifteen_minute`, `thirty_minute`, `hour` and `day`
\* **`all`** buckets everything into a single bucket
\* **`none`** does not bucket data (it actually uses the granularity of the index - minimum here is `none` which means millisecond granularity). Using `none` in a [timeseries query](TimeSeriesQuery.html) is currently not recommended (the system will try to generate 0 values for all milliseconds that didn’t exist, which is often a lot).
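
For example, a query that wants day-sized buckets would carry a line like the following sketch in its query object:

<code>"granularity" : "day"</code>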

### Duration Granularities
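
As a sketch, a duration granularity is an object giving the bucket size in milliseconds (two hours in this hypothetical):

<code>{ "type" : "duration", "duration" : 7200000 }</code>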
@ -1,3 +1,6 @@
---
layout: default
---
These types of queries take a groupBy query object and return an array of JSON objects where each object represents a grouping asked for by the query.

An example groupBy query object is shown below:
@ -90,12 +93,12 @@ There are 9 main parts to a groupBy query:
|queryType|This String should always be “groupBy”; this is the first thing Druid looks at to figure out how to interpret the query|yes|
|dataSource|A String defining the data source to query, very similar to a table in a relational database|yes|
|dimensions|A JSON list of dimensions to do the groupBy over|yes|
|orderBy|See [OrderBy](OrderBy.html).|no|
|having|See [Having](Having.html).|no|
|granularity|Defines the granularity of the query. See [Granularities](Granularities.html)|yes|
|filter|See [Filters](Filters.html)|no|
|aggregations|See [Aggregations](Aggregations.html)|yes|
|postAggregations|See [Post Aggregations](Post-Aggregations.html)|no|
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|yes|
|context|An additional JSON Object which can be used to specify certain flags.|no|
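
Tying the required parts together, a minimal groupBy query might look like the following sketch (the datasource, dimension, and metric names are placeholders):

<code>{
  "queryType" : "groupBy",
  "dataSource" : "sample_datasource",
  "dimensions" : ["dim1"],
  "granularity" : "day",
  "aggregations" : [
    { "type" : "longSum", "name" : "total", "fieldName" : "some_metric" }
  ],
  "intervals" : ["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000"]
}</code>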
@ -1,3 +1,6 @@
---
layout: default
---
A having clause is a JSON object identifying which rows from a groupBy query should be returned, by specifying conditions on aggregated values.

It is essentially the equivalent of the HAVING clause in SQL.
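
As a sketch, a having clause that keeps only groups whose aggregated value exceeds a threshold looks like this (the aggregator name and threshold are placeholders):

<code>{
  "type" : "greaterThan",
  "aggregation" : "total",
  "value" : 100
}</code>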
docs/Home.md
@ -1,6 +1,9 @@
---
layout: default
---
Druid is an open-source analytics datastore designed for realtime, exploratory queries on large-scale data sets (100s of billions of entries, 100s of TB of data). Druid provides for cost-effective, always-on, realtime data ingestion and arbitrary data exploration.

- Check out some [Examples](Examples.html)
- Try out Druid with our Getting Started [Tutorial](https://github.com/metamx/druid/wiki/Tutorial%3A-A-First-Look-at-Druid)
- Learn more by reading the [White Paper](http://static.druid.io/docs/druid.pdf)

@ -16,7 +19,7 @@ The first one is the joy that everyone feels the first time they get Hadoop runn

Druid is especially useful if you are summarizing your data sets and then querying the summarizations. If you put your summarizations into Druid, you will get quick queryability out of a system that you can be confident will scale up as your data volumes increase. Deployments have scaled up to 2TB of data per hour at peak, ingested and aggregated in real-time.

We have more details about the general design of the system and why you might want to use it in our [White Paper](http://static.druid.io/docs/druid.pdf) or in our [Design](Design.html) doc.

The data store world is vast, confusing and constantly in flux. This page is meant to help potential evaluators decide whether Druid is a good fit for the problem one needs to solve. If anything about it is incorrect, please provide that feedback on the mailing list or via some other means, and we will fix this page.
@ -35,11 +38,11 @@ The data store world is vast, confusing and constantly in flux. This page is mea
\* Downtime is no big deal

#### Druid vs…
\* [Druid-vs-Impala-or-Shark](Druid-vs-Impala-or-Shark.html)
\* [Druid-vs-Redshift](Druid-vs-Redshift.html)
\* [Druid-vs-Vertica](Druid-vs-Vertica.html)
\* [Druid-vs-Cassandra](Druid-vs-Cassandra.html)
\* [Druid-vs-Hadoop](Druid-vs-Hadoop.html)

Key Features
------------
@ -1,6 +1,9 @@
---
layout: default
---
Disclaimer: We are still in the process of finalizing the indexing service, and these configs are prone to change at any time. We will announce when we feel the indexing service and the configurations described are stable.

The indexing service is a distributed task/job queue. It accepts requests in the form of [Tasks](Tasks.html) and executes those tasks across a set of worker nodes. Worker capacity can be automatically adjusted based on the number of tasks pending in the system. The indexing service is highly available, has built-in retry logic, and can back up per-task logs in deep storage.

The indexing service is composed of two main components, a coordinator node that manages task distribution and worker capacity, and worker nodes that execute tasks in separate JVMs.
|
The indexing service is composed of two main components, a coordinator node that manages task distribution and worker capacity, and worker nodes that execute tasks in separate JVMs.
|
||||||
|
|
||||||
|
@ -42,7 +45,7 @@ The coordinator also exposes a simple UI to show what tasks are currently runnin
#### Task Execution

The coordinator retrieves worker setup metadata from the Druid [MySQL](MySQL.html) config table. This metadata contains information about the version of workers to create, the maximum and minimum number of workers in the cluster at one time, and additional information required to automatically create workers.

Tasks are assigned to workers by creating entries under specific /tasks paths associated with a worker, similar to how the Druid master node assigns segments to compute nodes. See [Worker Configuration](Indexing-Service#configuration-1). Once a worker picks up a task, it deletes the task entry and announces a task status under a /status path associated with the worker. Tasks are submitted to a worker until the worker hits capacity. If all workers in a cluster are at capacity, the indexer coordinator node automatically creates new worker resources.
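As a purely illustrative sketch of the assignment handshake above (the base paths, worker address and task id here are all made up; the actual ZooKeeper layout is deployment-specific), the entries involved might look something like:

```javascript
// Hypothetical ZooKeeper layout; every path and id below is an assumption.
{
  "/druid/indexer/tasks/worker1.example.com:8080": ["task_abc123"],  // entry created by the coordinator
  "/druid/indexer/status/worker1.example.com:8080": ["task_abc123"]  // status announced by the worker
}
```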
@ -1,3 +1,6 @@
---
layout: default
---
### R

- [RDruid](https://github.com/metamx/RDruid) - Druid connector for R
@ -10,6 +13,9 @@ Some great folks have written their own libraries to interact with Druid
#### Ruby

* [madvertise/ruby-druid](https://github.com/madvertise/ruby-druid) - A Ruby client for Druid

#### Python

* [metamx/pydruid](https://github.com/metamx/pydruid) - A Python client for Druid

#### Helper Libraries

- [madvertise/druid-dumbo](https://github.com/madvertise/druid-dumbo) - Scripts to help generate batch configs for the ingestion of data into Druid
@ -1,6 +1,9 @@
---
layout: default
---
Once you have a realtime node working, it is time to load your own data to see how Druid performs.

Druid can ingest data in three ways: via Kafka and a realtime node, via the indexing service, and via the Hadoop batch loader. Data is ingested in realtime using a [Firehose](Firehose.html).

## Create Config Directories ##

Each type of node needs its own config file and directory, so create them as subdirectories under the druid directory.
@ -14,7 +17,7 @@ mkdir config/broker
## Loading Data with Kafka ##

[KafkaFirehoseFactory](https://github.com/metamx/druid/blob/master/realtime/src/main/java/com/metamx/druid/realtime/firehose/KafkaFirehoseFactory.java) is how Druid communicates with Kafka. Using this [Firehose](Firehose.html) with the right configuration, we can import data into Druid in realtime without writing any code. To load data to a realtime node via Kafka, we'll first need to initialize Zookeeper and Kafka, and then configure and initialize a [Realtime](Realtime.html) node.

### Booting Kafka ###
@ -162,7 +165,7 @@ curl -X POST "http://localhost:8080/druid/v2/?pretty" \
}
} ]
```

Now you're ready for [Querying Your Data](Querying-Your-Data.html)!

## Loading Data with the HadoopDruidIndexer ##
@ -181,7 +184,7 @@ mysql -u root
GRANT ALL ON druid.* TO 'druid'@'localhost' IDENTIFIED BY 'diurd';
CREATE database druid;
```

The [Master](Master.html) node will create the tables it needs based on its configuration.

### Make sure you have ZooKeeper Running ###
@ -203,7 +206,7 @@ cd ..
```

### Launch a Master Node ###

If you've already set up a realtime node, be aware that although you can run multiple node types on one physical computer, you must assign them unique ports. Having used 8080 for the [Realtime](Realtime.html) node, we use 8081 for the [Master](Master.html).

1. Set up a configuration file called config/master/runtime.properties similar to:
```bash
@ -248,7 +251,7 @@ druid.paths.indexCache=/tmp/druid/indexCache
# Path on local FS for storage of segment metadata; dir will be created if needed
druid.paths.segmentInfoCache=/tmp/druid/segmentInfoCache
```
2. Launch the [Master](Master.html) node
```bash
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
-classpath lib/*:config/master \
@ -321,7 +324,7 @@ We can use the same records we have been, in a file called records.json:
### Run the Hadoop Job ###

Now it's time to run the Hadoop [Batch-ingestion](Batch-ingestion.html) job, HadoopDruidIndexer, which will fill a historical [Compute](Compute.html) node with data. First we'll need to configure the job.

1. Create a config called batchConfig.json similar to:
```json
@ -364,4 +367,4 @@ Now its time to run the Hadoop [[Batch-ingestion]] job, HadoopDruidIndexer, whic
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Ddruid.realtime.specFile=realtime.spec -classpath lib/* com.metamx.druid.indexer.HadoopDruidIndexerMain batchConfig.json
```

You can now move on to [Querying Your Data](Querying-Your-Data.html)!
@ -1,3 +1,6 @@
---
layout: default
---
Master
======
@ -12,7 +15,7 @@ Rules
Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different compute node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The master loads a set of rules from the database. Rules may be specific to a certain datasource, and/or a default set of rules can be configured. Rules are read in order, so their ordering matters. The master cycles through all available segments and matches each segment with the first rule that applies; each segment may only match a single rule.

For more information on rules, see [Rule Configuration](Rule-Configuration.html).

Cleaning Up Segments
--------------------
@ -100,4 +103,4 @@ Master nodes can be run using the `com.metamx.druid.http.MasterMain` class.
Configuration
-------------

See [Configuration](Configuration.html).
@ -1,3 +1,6 @@
---
layout: default
---
MySQL is an external dependency of Druid. We use it to store various metadata about the system, but not to store the actual data. There are a number of tables used for various purposes described below.

Segments Table
|
This is dictated by the `druid.database.segmentTable` property (note that these properties are going to change in the next stable version after 0.4.12).

This table stores metadata about the segments that are available in the system. The table is polled by the [Master](Master.html) to determine the set of segments that should be available for querying in the system. The table has two main functional columns; the other columns are for indexing purposes.

The `used` column is a boolean “tombstone”. A 1 means that the segment should be “used” by the cluster (i.e. it should be loaded and available for requests). A 0 means that the segment should not be actively loaded into the cluster. We do this as a means of removing segments from the cluster without actually removing their metadata (which allows for simpler rolling back if that is ever an issue).
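The other functional column holds the segment payload, a JSON blob describing the segment itself. As a sketch only (the field names are assumptions, and as noted below the format of this blob changes over time), a payload might look like:

```javascript
// Hypothetical payload blob; all field names and values are illustrative.
{
  "dataSource": "wikipedia",
  "interval": "2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z",
  "version": "2013-01-03T00:00:00.000Z",
  "loadSpec": { "type": "s3_zip", "bucket": "my-bucket", "key": "path/to/index.zip" },
  "dimensions": "page,language,user",
  "metrics": "count,added,deleted",
  "size": 300000
}
```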
@ -31,7 +34,7 @@ Note that the format of this blob can and will change from time-to-time.
Rule Table
----------

The rule table is used to store the various rules about where segments should land. These rules are used by the [Master](Master.html) when making segment (re-)allocation decisions about the cluster.

Config Table
------------
@ -41,4 +44,4 @@ The config table is used to store runtime configuration objects. We do not have
Task-related Tables
-------------------

There are also a number of tables created and used by the [Indexing Service](Indexing-Service.html) in the course of its work.
@ -1,3 +1,6 @@
---
layout: default
---
The orderBy field provides the functionality to sort and limit the set of results from a groupBy query. Available options are:

### DefaultLimitSpec
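For orientation before the formal description, here is a sketch of what a default limitSpec might look like; the field names (`limit`, `columns`, `direction`) are best-effort assumptions rather than an authoritative grammar:

```javascript
// Hypothetical example: take the top 10 rows, ordered by the "count" column.
{
  "type": "default",
  "limit": 10,
  "columns": [
    { "dimension": "count", "direction": "descending" }
  ]
}
```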
@ -1,3 +1,6 @@
---
layout: default
---
The Plumber is the thing that handles generated segments, both while they are being generated and when they are “done”. This is also technically a pluggable interface, and there are multiple implementations; but so many details are handled by the plumber that only a few implementations are expected to exist, and only more advanced third parties will implement their own. See [here](https://github.com/metamx/druid/wiki/Plumber#available-plumbers) for a description of the plumbers included with Druid.

|Field|Type|Description|Required|
@ -1,3 +1,6 @@
---
layout: default
---
Post-aggregations are specifications of processing that should happen on aggregated values as they come out of Druid. If you include a post aggregation as part of a query, make sure to include all aggregators the post-aggregator requires.

There are several post-aggregators available.
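As a quick sketch before the grammar below, an arithmetic post-aggregator dividing one aggregated value by another (borrowing the `avg_random` example from the [Querying](Querying.html) page) might look like this; treat the exact shape as illustrative:

```javascript
// Illustrative example: average = randomNumberSum / rows.
{
  "type": "arithmetic",
  "name": "avg_random",
  "fn": "/",
  "fields": [
    { "type": "fieldAccess", "fieldName": "randomNumberSum" },
    { "type": "fieldAccess", "fieldName": "rows" }
  ]
}
```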
@ -19,9 +22,9 @@ The grammar for an arithmetic post aggregation is:
### Field accessor post-aggregator

This returns the value produced by the specified [aggregator](Aggregations.html).

`fieldName` refers to the output name of the aggregator given in the [aggregations](Aggregations.html) portion of the query.

<code>field_accessor : {
"type" : "fieldAccess",
@ -1,6 +1,9 @@
---
layout: default
---
# Setup #

Before we start querying Druid, we're going to finish setting up a complete cluster on localhost. In [Loading Your Data](Loading-Your-Data.html) we set up a [Realtime](Realtime.html), [Compute](Compute.html) and [Master](Master.html) node. If you've already completed that tutorial, you need only follow the directions for 'Booting a Broker Node'.

## Booting a Broker Node ##
@ -95,11 +98,11 @@ com.metamx.druid.http.ComputeMain
# Querying Your Data #

Now that we have a complete cluster set up on localhost, we need to load data. To do so, refer to [Loading Your Data](Loading-Your-Data.html). Having done that, it's time to query our data! For a complete specification of queries, see [Querying](Querying.html).

## Querying Different Nodes ##

As a shared-nothing system, there are three ways to query Druid: against the [Realtime](Realtime.html), [Compute](Compute.html) or [Broker](Broker.html) node. Querying a Realtime node returns only realtime data; querying a compute node returns only historical segments. Querying the broker will query both realtime and compute segments and compose an overall result for the query. This is the normal mode of operation for queries in Druid.

### Construct a Query ###
@ -180,7 +183,7 @@ Now that we know what nodes can be queried (although you should usually use the
## Querying Against the realtime.spec ##

How are we to know what queries we can run? Although [Querying](Querying.html) is a helpful index, to get a handle on querying our data we need to look at our [Realtime](Realtime.html) node's realtime.spec file:

```json
[{
@ -222,7 +225,7 @@ Our dataSource tells us the name of the relation/table, or 'source of data', to
### aggregations ###

Note the [Aggregations](Aggregations.html) in our query:

```json
"aggregations": [
@ -241,7 +244,7 @@ this matches up to the aggregators in the schema of our realtime.spec!
### dimensions ###

Let's look back at our actual records (from [Loading Your Data](Loading-Your-Data.html)):

```json
{"utcdt": "2010-01-01T01:01:01", "wp": 1000, "gender": "male", "age": 100}
@ -356,8 +359,8 @@ Which gets us just people aged 40:
} ]
```

Check out [Filters](Filters.html) for more.
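For reference, a filter like the one used above can combine simple selector filters. Here is a sketch built from the dimensions in our records; the exact query we ran is not reproduced here, so treat this as illustrative:

```javascript
// Illustrative "and" filter: male users aged 40.
"filter": {
  "type": "and",
  "fields": [
    { "type": "selector", "dimension": "gender", "value": "male" },
    { "type": "selector", "dimension": "age", "value": "40" }
  ]
}
```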

## Learn More ##

You can learn more about querying at [Querying](Querying.html)! Now check out [Booting a production cluster](Booting-a-production-cluster.html)!
@ -1,7 +1,10 @@
---
layout: default
---
Querying
========

Queries are made using an HTTP REST style request to a [Broker](Broker.html), [Compute](Compute.html), or [Realtime](Realtime.html) node. The query is expressed in JSON, and each of these node types exposes the same REST query interface.

We start by describing an example query with additional comments that mention possible variations. Query operators are also summarized in a table below.
@ -52,7 +55,7 @@ The dataSource JSON field shown next identifies where to apply the query. In thi
```javascript
"dataSource": "randSeq",
```
The granularity JSON field specifies the bucket size for values. It could be a built-in time interval like “second”, “minute”, “fifteen_minute”, “thirty_minute”, “hour” or “day”. It can also be an expression like `{"type": "period", "period": "PT6m"}` meaning “6 minute buckets”. See [Granularities](Granularities.html) for more information on the different options for this field. In this example, it is set to the special value “all”, which means bucket all data points together into the same time bucket:
```javascript
"granularity": "all",
```
@ -60,7 +63,7 @@ The dimensions JSON field value is an array of zero or more fields as defined in
```javascript
"dimensions": [],
```
A groupBy also requires the JSON field “aggregations” (see [Aggregations](Aggregations.html)), which are applied to the column specified by fieldName; the output of the aggregation will be named according to the value in the “name” field:
```javascript
"aggregations": [
    { "type": "count", "name": "rows" },
@ -68,7 +71,7 @@ A groupBy also requires the JSON field “aggregations” (See [[Aggregations]])
    { "type": "doubleSum", "fieldName": "outColumn", "name": "randomNumberSum" }
],
```
You can also specify postAggregations, which are applied after data has been aggregated for the current granularity and dimensions bucket. See [Post Aggregations](Post-Aggregations.html) for a detailed description. In the rand example, an arithmetic type operation (division, as specified by “fn”) is performed with the result “name” of “avg_random”. The “fields” field specifies the inputs from the aggregation stage to this expression. Note that identifiers corresponding to the “name” JSON field inside the type “fieldAccess” are required but not used outside this expression, so they are prefixed with “dummy” for clarity:
```javascript
"postAggregations": [{
    "type": "arithmetic",
@ -96,11 +99,11 @@ The following table summarizes query properties.
|timeseries, groupBy, search, timeBoundary|dataSource|query is applied to this data source|yes|
|timeseries, groupBy, search|intervals|range of time series to include in query|yes|
|timeseries, groupBy, search, timeBoundary|context|This is a key-value map that can allow the query to alter some of the behavior of a query. It is primarily used for debugging; for example, if you include `"bySegment":true` in the map, you will get results associated with the data segment they came from.|no|
|timeseries, groupBy, search|filter|Specifies the filter (the “WHERE” clause in SQL) for the query. See [Filters](Filters.html)|no|
|timeseries, groupBy, search|granularity|the timestamp granularity to bucket results into (i.e. “hour”). See [Granularities](Granularities.html) for more information.|no|
|groupBy|dimensions|constrains the groupings; if empty, then one value per time granularity bucket|yes|
|timeseries, groupBy|aggregations|aggregations that combine values in a bucket. See [Aggregations](Aggregations.html).|yes|
|timeseries, groupBy|postAggregations|aggregations of aggregations. See [Post Aggregations](Post-Aggregations.html).|yes|
|search|limit|maximum number of results (default is 1000); a system-level maximum can also be set via `com.metamx.query.search.maxSearchLimit`|no|
|search|searchDimensions|Dimensions to apply the search query to. If not specified, it will search through all dimensions.|no|
|search|query|The query portion of the search query. This is essentially a predicate that specifies if something matches.|yes|
@ -108,4 +111,4 @@ The following table summarizes query properties.
Additional Information about Query Types
----------------------------------------

[TimeseriesQuery](TimeseriesQuery.html)
@ -1,7 +1,10 @@
---
layout: default
---
Realtime
========

Realtime nodes provide a realtime index. Data indexed via these nodes is immediately available for querying. Realtime nodes will periodically build segments representing the data they’ve collected over some span of time and hand these segments off to [Compute](Compute.html) nodes.

Running
-------
@ -18,7 +21,7 @@ The segment propagation diagram for real-time data ingestion can be seen below:
Configuration
-------------

Realtime nodes take a mix of base server configuration and spec files that describe how to connect, process and expose the realtime feed. See [Configuration](Configuration.html) for information about general server configuration.

### Realtime “specFile”
@ -59,7 +62,7 @@ There are four parts to a realtime stream specification, `schema`, `config`, `fi
#### Schema

This describes the data schema for the output Druid segment. More information about concepts in Druid and querying can be found at [Concepts-and-Terminology](Concepts-and-Terminology.html) and [Querying](Querying.html).

|Field|Type|Description|Required|
|-----|----|-----------|--------|
@ -80,11 +83,11 @@ This provides configuration for the data processing portion of the realtime stre
### Firehose

See [Firehose](Firehose.html).

### Plumber

See [Plumber](Plumber.html).

Constraints
-----------
@ -1,3 +1,6 @@
---
layout: default
---
Note: It is recommended that the master console be used to configure rules. However, the master node does have HTTP endpoints to programmatically configure rules.

Load Rules
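To make the shape of a rule concrete before the details below, here is a sketch of a period-based load rule; the field names (`period`, `replicants`, `tier`) are assumptions based on the rule behavior described on the [Master](Master.html) page, not an authoritative grammar:

```javascript
// Hypothetical rule: keep one month of segments on the "hot" tier, twice replicated.
{
  "type": "loadByPeriod",
  "period": "P1M",
  "replicants": 2,
  "tier": "hot"
}
```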
@ -1,3 +1,6 @@
---
layout: default
---
A search query returns dimension values that match the search specification.

<code>{
@ -27,11 +30,11 @@ There are several main parts to a search query:
|--------|-----------|---------|
|queryType|This String should always be “search”; this is the first thing Druid looks at to figure out how to interpret the query|yes|
|dataSource|A String defining the data source to query, very similar to a table in a relational database|yes|
|granularity|Defines the granularity of the query. See [Granularities](Granularities.html)|yes|
|filter|See [Filters](Filters.html)|no|
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|yes|
|searchDimensions|The dimensions to run the search over. Excluding this means the search is run over all dimensions.|no|
|query|See [SearchQuerySpec](SearchQuerySpec.html).|yes|
|sort|How the results of the search should be sorted. Two possible types here are “lexicographic” and “strlen”.|yes|
|context|An additional JSON Object which can be used to specify certain flags.|no|
@ -1,3 +1,6 @@
---
layout: default
---
Search query specs define how a “match” is defined between a search value and a dimension value. The available search query specs are:

InsensitiveContainsSearchQuerySpec
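As a sketch of the general shape (the type string shown is an assumption), a spec of this kind is little more than a type plus the value to match:

```javascript
// Illustrative spec: case-insensitively match dimension values containing "foo".
{
  "type": "insensitive_contains",
  "value": "foo"
}
```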
@ -1,3 +1,6 @@
---
layout: default
---
Segment metadata queries return per-segment information about:

* Cardinality of all columns in the segment
* Estimated byte size for the segment columns in TSV format
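For a feel of the request shape, a minimal segment metadata query might look like the sketch below; the dataSource and interval are placeholders:

```javascript
// Illustrative only: dataSource and intervals are placeholders.
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "intervals": ["2013-01-01/2013-01-02"]
}
```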
@ -1,7 +1,10 @@
---
layout: default
---
Segments
========

Segments are the fundamental structure for storing data in Druid. [Compute](Compute.html) and [Realtime](Realtime.html) nodes load and serve segments for querying. To construct segments, Druid always shards data by a time partition. Data may be further sharded based on dimension cardinality and row count.

The latest Druid segment version is `v9`.
@ -1,3 +1,6 @@
---
layout: default
---
Note: This feature is highly experimental and only works with spatially indexed dimensions.

The grammar for a spatial filter is as follows:
@ -1,3 +1,6 @@
---
layout: default
---
Note: This feature is highly experimental.

In any of the data specs, there is now the option of providing spatial dimensions. For example, for a JSON data spec, spatial dimensions can be specified as follows:
@ -1,3 +1,6 @@
---
layout: default
---
This page describes how to use Riak-CS for deep storage instead of S3. We are still setting up some of the peripheral stuff (file downloads, etc.).

This guide was provided by Pablo Nebrera, thanks!
@ -19,12 +22,12 @@ We started with a minimal CentOS installation but you can use any other compatib
1. A Kafka Broker
1. A single-node Zookeeper ensemble
1. A single-node Riak-CS cluster
1. A Druid [Master](Master.html)
1. A Druid [Broker](Broker.html)
1. A Druid [Compute](Compute.html)
1. A Druid [Realtime](Realtime.html)

This just walks through getting the relevant software installed and running. You will then need to configure the [Realtime](Realtime.html) node to take in your data.

### Configure System
@ -1,3 +1,6 @@
---
layout: default
---
Numerous backend engineers at [Metamarkets](http://www.metamarkets.com) work on Druid full-time. If you have any questions about usage or code, feel free to contact any of us.

Google Groups Mailing List
@ -1,3 +1,6 @@
---
layout: default
---
Tasks are run on workers and always operate on a single datasource. Once an indexer coordinator node accepts a task, a lock is created for the datasource and interval specified in the task. Tasks do not need to explicitly release locks; they are released upon task completion. Tasks may potentially release locks early if they desire. Task ids are made unique by naming them using UUIDs or the timestamp at which the task was created. Tasks are also part of a “task group”, which is a set of tasks that can share interval locks.

There are several different types of tasks.
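To anchor the vocabulary above, here is a sketch of what a task submission might contain; every field name here is a best-effort assumption for illustration, not the actual task grammar:

```javascript
// Hypothetical task payload: an id, the datasource it operates on,
// and the interval that will be locked while it runs.
{
  "type": "index",
  "id": "index_wikipedia_2013-01-01T00:00:00.000Z",
  "dataSource": "wikipedia",
  "interval": "2013-01-01/2013-01-02"
}
```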
@ -1,3 +1,6 @@
---
layout: default
---
YourKit supports the Druid open source projects with its
full-featured Java Profiler.
YourKit, LLC is the creator of innovative and intelligent tools for profiling
@ -1,3 +1,6 @@
---
layout: default
---
Time boundary queries return the earliest and latest data points of a data set. The grammar is:

<code>{
@ -1,3 +1,6 @@
---
layout: default
---
Timeseries queries
==================
@ -81,10 +84,10 @@ There are 7 main parts to a timeseries query:
|--------|-----------|---------|
|queryType|This String should always be “timeseries”; this is the first thing Druid looks at to figure out how to interpret the query|yes|
|dataSource|A String defining the data source to query, very similar to a table in a relational database|yes|
|granularity|Defines the granularity of the query. See [Granularities](Granularities.html)|yes|
|filter|See [Filters](Filters.html)|no|
|aggregations|See [Aggregations](Aggregations.html)|yes|
|postAggregations|See [Post Aggregations](Post-Aggregations.html)|no|
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|yes|
|context|An additional JSON Object which can be used to specify certain flags.|no|
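Putting the required parts of the table together, a minimal timeseries query might look like the following sketch; the dataSource and aggregator names are borrowed from the rand example on the [Querying](Querying.html) page, while the interval value is a placeholder:

```javascript
// Illustrative query: one bucket ("all") over a placeholder interval.
{
  "queryType": "timeseries",
  "dataSource": "randSeq",
  "granularity": "all",
  "aggregations": [
    { "type": "count", "name": "rows" },
    { "type": "doubleSum", "fieldName": "outColumn", "name": "randomNumberSum" }
  ],
  "intervals": ["2012-10-01T00:00/2020-01-01T00"]
}
```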
@ -1,3 +1,6 @@
---
layout: default
---
Greetings! This tutorial will help clarify some core Druid concepts. We will use a realtime dataset and issue some basic Druid queries. If you are ready to explore Druid, and learn a thing or two, read on!

About the data
@ -38,7 +41,7 @@ These metrics track the number of characters added, deleted, and changed.
Setting Up
----------

There are two ways to set up Druid: download a tarball, or [Build From Source](Build-From-Source.html). You only need to do one of these.

### Download a Tarball
@ -61,7 +64,7 @@ You should see a bunch of files:
Running Example Scripts
-----------------------

Let’s start doing stuff. You can start a Druid [Realtime](Realtime.html) node by issuing:

./run_example_server.sh
@ -173,7 +176,7 @@ As you can probably tell, the result is indicating the maximum and minimum times
Return to your favorite editor and create the file:
<pre>timeseries_query.body</pre>

We are going to make a slightly more complicated query, the [TimeseriesQuery](TimeseriesQuery.html). Copy and paste the following into the file:
<pre><code>
{
"queryType": "timeseries",
@ -197,7 +200,7 @@ We are going to make a slightly more complicated query, the [[TimeseriesQuery]].
}
</code></pre>

You are probably wondering, what are these [Granularities](Granularities.html) and [Aggregations](Aggregations.html) things? What the query is doing is aggregating some metrics over some span of time.
To issue the query and get some results, run the following in your command line:
<pre><code>curl -X POST 'http://localhost:8083/druid/v2/?pretty' -H 'content-type: application/json' -d @timeseries_query.body</code></pre>
@ -272,7 +275,7 @@ This gives us something like the following:
Solving a Problem
-----------------

One of Druid’s main powers is to provide answers to problems, so let’s pose a problem. What if we wanted to know what the top pages in the US are, ordered by the number of edits over the last few minutes you’ve been going through this tutorial? To solve this problem, we have to return to the query we introduced at the very beginning of this tutorial, the [GroupByQuery](GroupByQuery.html). It would be nice if we could group results by dimension value and somehow sort those results… and it turns out we can!

Let’s create the file:
@ -314,7 +317,7 @@ Let’s create the file:
}
</code>

Woah! Our query just got way more complicated. Now we have these [Filters](Filters.html) things and this [OrderBy](OrderBy.html) thing. Fear not, it turns out the new objects we’ve introduced to our query can help define the format of our results and provide an answer to our question.

If you issue the query:
@ -354,9 +357,9 @@ Feel free to tweak other query parameters to answer other questions you may have
Next Steps
----------

Want to know even more about the Druid Cluster? Check out [Tutorial: The Druid Cluster](Tutorial:-The-Druid-Cluster.html).

Druid is even more fun if you load your own data into it! To learn how to load your data, see [Loading Your Data](Loading-Your-Data.html).

Additional Information
----------------------
@ -1,3 +1,6 @@
---
layout: default
---
Welcome back! In our first [tutorial](https://github.com/metamx/druid/wiki/Tutorial%3A-A-First-Look-at-Druid), we introduced you to the most basic Druid setup: a single realtime node. We streamed in some data and queried it. Realtime nodes collect very recent data and periodically hand that data off to the rest of the Druid cluster. Some questions about the architecture must naturally come to mind. What does the rest of the Druid cluster look like? How does Druid load available static data?

This tutorial will hopefully answer these questions!
@ -16,7 +19,7 @@ tar -zxvf druid-services-*-bin.tar.gz
cd druid-services-*
```

You can also [Build From Source](Build-From-Source.html).

## External Dependencies ##
@ -1,3 +1,6 @@
---
layout: default
---
Greetings! This tutorial will help clarify some core Druid concepts. We will use a realtime dataset and issue some basic Druid queries. If you are ready to explore Druid, and learn a thing or two, read on!

About the data
@ -142,7 +145,7 @@ As you can probably tell, the result is indicating the maximum and minimum times
Return to your favorite editor and create the file:
<pre>timeseries_query.body</pre>

We are going to make a slightly more complicated query, the [TimeseriesQuery](TimeseriesQuery.html). Copy and paste the following into the file:
<pre><code>
{
"queryType": "timeseries",
|
}
</code></pre>

You are probably wondering, what are these [Granularities](Granularities.html) and [Aggregations](Aggregations.html) things? What the query is doing is aggregating some metrics over some span of time.
To issue the query and get some results, run the following in your command line:
<pre><code>curl -X POST 'http://localhost:8083/druid/v2/?pretty' -H 'content-type: application/json' -d @timeseries_query.body</code></pre>
@ -243,7 +246,7 @@ This gives us something like the following:
|
||||||
Solving a Problem
|
Solving a Problem
|
||||||
-----------------
|
-----------------
|
||||||
|
|
||||||
One of Druid’s main powers is to provide answers to problems, so let’s pose a problem. What if we wanted to know what the top states in the US are, ordered by the number of visits by known users over the last few minutes? To solve this problem, we have to return to the query we introduced at the very beginning of this tutorial, the [[GroupByQuery]]. It would be nice if we could group by results by dimension value and somehow sort those results… and it turns out we can!
|
One of Druid’s main powers is to provide answers to problems, so let’s pose a problem. What if we wanted to know what the top states in the US are, ordered by the number of visits by known users over the last few minutes? To solve this problem, we have to return to the query we introduced at the very beginning of this tutorial, the [GroupByQuery](GroupByQuery.html). It would be nice if we could group by results by dimension value and somehow sort those results… and it turns out we can!
|
||||||
|
|
||||||
Let’s create the file:
|
Let’s create the file:
|
||||||
|
|
||||||
|
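As a rough sketch of what such a file could contain (again illustrative: the `state` dimension, `country` filter, and `known_users` metric are assumptions about the tutorial's schema, and the actual file contents are elided from this diff):

<pre><code>{
  "queryType": "groupBy",
  "dataSource": "webstream",
  "granularity": "all",
  "dimensions": ["state"],
  "filter": { "type": "selector", "dimension": "country", "value": "US" },
  "aggregations": [
    { "type": "doubleSum", "fieldName": "known_users", "name": "known_users" }
  ],
  "orderBy": {
    "type": "default",
    "columns": [{ "dimension": "known_users", "direction": "DESCENDING" }],
    "limit": 10
  },
  "intervals": ["2013-06-01/2020-01-01"]
}</code></pre>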
@ -289,7 +292,7 @@ Let’s create the file:
}
</code>

Woah! Our query just got way more complicated. Now we have these [Filters](Filters.html) things and this [OrderBy](OrderBy.html) thing. Fear not, it turns out the new objects we’ve introduced to our query can help define the format of our results and provide an answer to our question.

If you issue the query:

@ -343,8 +346,8 @@ Feel free to tweak other query parameters to answer other questions you may have
Next Steps
----------

Want to know even more information about the Druid Cluster? Check out [Tutorial: The Druid Cluster](Tutorial:-The-Druid-Cluster.html).
Druid is even more fun if you load your own data into it! To learn how to load your data, see [Loading Your Data](Loading-Your-Data.html).

Additional Information
----------------------

@ -1,4 +1,7 @@
---
layout: default
---
Greetings! We see you’ve taken an interest in Druid. That’s awesome! Hopefully this tutorial will help clarify some core Druid concepts. We will go through one of the Real-time [Examples](Examples.html), and issue some basic Druid queries. The data source we’ll be working with is the [Twitter spritzer stream](https://dev.twitter.com/docs/streaming-apis/streams/public). If you are ready to explore Druid, brave its challenges, and maybe learn a thing or two, read on!

Setting Up
----------

@ -49,7 +52,7 @@ You can find the example executables in the examples/bin directory:
Running Example Scripts
-----------------------

Let’s start doing stuff. You can start a Druid [Realtime](Realtime.html) node by issuing:

    ./run_example_server.sh

@ -172,7 +175,7 @@ If you said the result is indicating the maximum and minimum timestamps we've se
Return to your favorite editor and create the file:

<pre>timeseries_query.body</pre>

We are going to make a slightly more complicated query, the [TimeseriesQuery](TimeseriesQuery.html). Copy and paste the following into the file:

<pre><code>{
  "queryType":"timeseries",
  "dataSource":"twitterstream",
@ -185,7 +188,7 @@ We are going to make a slightly more complicated query, the [[TimeseriesQuery]].
}
</code></pre>

You are probably wondering, what are these [Granularities](Granularities.html) and [Aggregations](Aggregations.html) things? What the query is doing is aggregating some metrics over some span of time.
To issue the query and get some results, run the following in your command line:

<pre><code>curl -X POST 'http://localhost:8080/druid/v2/?pretty' -H 'content-type: application/json' -d @timeseries_query.body</code></pre>

@ -249,7 +252,7 @@ This gives us something like the following:
Solving a Problem
-----------------

One of Druid’s main powers (see what we did there?) is to provide answers to problems, so let’s pose a problem. What if we wanted to know what the top hash tags are, ordered by the number of tweets, where the language is English, over the last few minutes you’ve been reading this tutorial? To solve this problem, we have to return to the query we introduced at the very beginning of this tutorial, the [GroupByQuery](GroupByQuery.html). It would be nice if we could group our results by dimension value and somehow sort those results… and it turns out we can!

Let’s create the file:

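Without repeating the whole query body, the two new pieces this problem calls for might look roughly like this inside the groupBy query (the `lang` and `tweets` names are assumptions about the spritzer schema, not values from this diff):

<pre><code>"filter": { "type": "selector", "dimension": "lang", "value": "en" },
"orderBy": {
  "type": "default",
  "columns": [{ "dimension": "tweets", "direction": "DESCENDING" }],
  "limit": 5
}</code></pre>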
@ -269,7 +272,7 @@ Let’s create the file:
}
</code>

Woah! Our query just got way more complicated. Now we have these [Filters](Filters.html) things and this [OrderBy](OrderBy.html) thing. Fear not, it turns out the new objects we’ve introduced to our query can help define the format of our results and provide an answer to our question.

If you issue the query:

@ -321,6 +324,6 @@ Feel free to tweak other query parameters to answer other questions you may have
Additional Information
----------------------

This tutorial is merely showcasing a small fraction of what Druid can do. Next, continue on to [Loading Your Data](Loading-Your-Data.html).

And thus concludes our journey! Hopefully you learned a thing or two about Druid real-time ingestion, querying Druid, and how Druid can be used to solve problems. If you have additional questions, feel free to post in our [Google Groups page](http://www.groups.google.com/forum/#!forum/druid-development).

@ -1,3 +1,6 @@
---
layout: default
---
This page discusses how we do versioning and provides information on our stable releases.

Versioning Strategy

@ -18,4 +21,4 @@ For external deployments, we recommend running the stable release tag. Releases
Tagging strategy
----------------

Tags of the codebase are equivalent to release candidates. We tag the code every time we want to take it through our release process, which includes some QA cycles and deployments. So, it is not safe to assume that a tag is a stable release; it is a solidification of the code as it goes through our production QA cycle and deployment. Tags will never change, but we often go through a number of iterations of tags before actually getting a stable release onto production. So, if you are not sure what is on a tag, it is recommended to stick to the stable releases listed on the [Download](Download.html) page.

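In practice that means treating a tag as a pin-point, not a release. A minimal sketch of inspecting and checking out a known-stable tag (the tag name below is hypothetical):

<pre><code>git clone https://github.com/metamx/druid.git
cd druid
git tag -l                 # list tags; remember, not every tag is a stable release
git checkout druid-0.5.49  # check out a tag you know is stable (hypothetical name)
</code></pre>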
@ -1,8 +1,11 @@
---
layout: default
---
Druid uses ZooKeeper (ZK) for management of current cluster state. The operations that happen over ZK are

1. [Master](Master.html) leader election
2. Segment “publishing” protocol from [Compute](Compute.html) and [Realtime](Realtime.html)
3. Segment load/drop protocol between [Master](Master.html) and [Compute](Compute.html)

### Property Configuration

@ -38,7 +41,7 @@ We use the Curator LeadershipLatch recipe to do leader election at path

The `announcementsPath` and `servedSegmentsPath` are used for this.

All [Compute](Compute.html) and [Realtime](Realtime.html) nodes publish themselves on the `announcementsPath`; specifically, they will create an ephemeral znode at

    ${druid.zk.paths.announcementsPath}/${druid.host}

@ -50,13 +53,13 @@ And as they load up segments, they will attach ephemeral znodes that look like

    ${druid.zk.paths.servedSegmentsPath}/${druid.host}/_segment_identifier_

Nodes like the [Master](Master.html) and [Broker](Broker.html) can then watch these paths to see which nodes are currently serving which segments.

### Segment load/drop protocol between Master and Compute

The `loadQueuePath` is used for this.

When the [Master](Master.html) decides that a [Compute](Compute.html) node should load or drop a segment, it writes an ephemeral znode to

    ${druid.zk.paths.loadQueuePath}/_host_of_compute_node/_segment_identifier

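You can watch this protocol in action with the stock ZooKeeper CLI. A minimal sketch, assuming a `/druid` path prefix and a compute node at `10.1.2.3:8080` (both illustrative; your configured `druid.zk.paths.*` properties determine the real paths):

<pre><code># which compute/realtime nodes have announced themselves
./zkCli.sh -server localhost:2181 ls /druid/announcements

# which segments a given node is currently serving
./zkCli.sh -server localhost:2181 ls /druid/servedSegments/10.1.2.3:8080

# load/drop instructions the Master has queued for that node
./zkCli.sh -server localhost:2181 ls /druid/loadQueue/10.1.2.3:8080
</code></pre>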
@ -0,0 +1,5 @@
name: Your New Jekyll Site
pygments: true
markdown: redcarpet
redcarpet:
  extensions: ["no_intra_emphasis", "fenced_code_blocks", "autolink", "tables", "with_toc_data"]

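This `_config.yml` enables Pygments syntax highlighting and the Redcarpet Markdown renderer for the site. Assuming a local Ruby environment with the matching gems installed (an assumption; the commit does not pin versions), the site could be previewed with something like:

<pre><code>gem install jekyll redcarpet   # assumed dependencies
jekyll serve -w                # build locally and rebuild on changes
</code></pre>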
@ -0,0 +1,147 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>Druid | {{page.title}}</title>
  <link rel="stylesheet" type="text/css" href="/css/bootstrap.css" media="all" />
  <link rel="stylesheet" type="text/css" href="/css/bootstrap-responsive.css" media="all" />
  <link rel="stylesheet" type="text/css" href="/css/syntax.css" media="all" />
  <link href='http://fonts.googleapis.com/css?family=Open+Sans:400,600,300,700,800' rel='stylesheet' type='text/css'>
  <link rel="stylesheet" type="text/css" href="/css/custom.css" media="all" />
  <link rel="alternate" type="application/atom+xml" href="http://druid.io/feed">
  <script src="http://code.jquery.com/jquery.js"></script>
  <script src="/js/bootstrap.min.js"></script>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<div class="wrapper">
  <header{% if page.id == 'home' %} class="index-head"{% endif %}>
    <div class="container custom">
      <div class="row-fluid">
        <div class="span12">
          <div class="navbar navbar-inverse custom">
            <div class="navbar-inner">
              <button type="button" class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse">
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
              </button>
              <a class="brand {% if page.id == 'home' %}active{% endif %}" href="/">Home</a>
              <div class="nav-collapse collapse">
                <ul class="nav">
                  <li {% if page.sectionid == 'druid' %} class="active"{% endif %}>
                    <a href="/druid.html">What is Druid?</a>
                  </li>
                  <li {% if page.sectionid == 'downloads' %} class="active"{% endif %}>
                    <a href="/downloads.html">Downloads</a>
                  </li>
                  <li {% if page.sectionid == 'docs' %} class="active"{% endif %}>
                    <a class="doc-link" target="_blank" href="https://github.com/metamx/druid/wiki">Documentation <span></span></a>
                  </li>
                  <li {% if page.sectionid == 'community' %} class="active"{% endif %}>
                    <a href="/community.html">Community</a>
                  </li>
                  <li {% if page.sectionid == 'faq' %} class="active"{% endif %}>
                    <a href="/faq.html">FAQ</a>
                  </li>
                  <li {% if page.sectionid == 'blog' %} class="active"{% endif %}>
                    <a href="/blog">Blog</a>
                  </li>
                  <li class="pull-right">
                    <span>BETA</span>
                  </li>
                </ul>
              </div>
            </div>
          </div>
        </div>
      </div>
      {% if page.id == 'home' %}
      <h3>Druid is open-source infrastructure for real²time exploratory analytics on large datasets.</h3>
      <button class="btn" type="button"><a href="downloads.html">Download</a></button>
      {% endif %}
    </div>
  </header>
  <div class="container custom main-cont">

{{ content }}

  </div>
</div>
<footer>
  <div class="container custom">
    <div class="row-fluid">
      <div class="span3">
        <div class="contact-item">
          <span>CONTACT US</span>
          <a href="mailto:info@druid.io">info@druid.io</a>
        </div>
        <div class="contact-item">
          <span>Metamarkets</span>
          625 2nd Street, Suite #230<br/>
          San Francisco, CA 94017
          <div class="soc">
            <a href="https://twitter.com/druidio"></a>
            <a href="https://github.com/metamx/druid" class="github"></a>
            <a href="http://www.meetup.com/Open-Druid/" class="meet"></a>
            <a href="http://druid.io/feed/" class="rss" target="_blank"></a>
          </div>
        </div>
      </div>
      <div class="span9">
        <ul class="unstyled">
          <li>
            <a href="/">DRUID</a>
          </li>
          <li>
            <a href="/druid.html">What is Druid?</a>
          </li>
          <li>
            <a href="/downloads.html">Downloads</a>
          </li>
          <li>
            <a target="_blank" href="https://github.com/metamx/druid/wiki">Documentation </a>
          </li>
        </ul>
        <ul class="unstyled">
          <li>
            <a href="/community.html">SUPPORT</a>
          </li>
          <li>
            <a href="/community.html">Community</a>
          </li>
          <li>
            <a href="/faq.html">FAQ</a>
          </li>
          <li>
            <a href="/licensing.html">Licensing</a>
          </li>
        </ul>
        <ul class="unstyled">
          <li>
            <a href="/blog">BLOG</a>
          </li>
        </ul>
        <div class="logo-block">
          <span class="logo custom">
            <a href="/"></a>
          </span>
          <p>is an open source project sponsored by<br/> Metamarkets.</p>
        </div>
      </div>
    </div>
  </div>
</footer>
<script type="text/javascript">
  var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
  document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
  try {
    var pageTracker = _gat._getTracker("UA-40280432-1");
    pageTracker._trackPageview();
  } catch(err) {}
</script>
</body>
</html>

@ -0,0 +1,8 @@
---
layout: default
---
<div class="row-fluid">

{{ content }}

</div>

@ -0,0 +1,11 @@
---
layout: default
---

<div class="row-fluid">
  <div class="span10 offset1{% if page.id != 'home' %} simple-page{% endif %}{% if page.sectionid == 'faq' %} faq-page{% endif %}">

{{ content }}

  </div>
</div>

@ -0,0 +1,44 @@
---
layout: default
sectionid: blog
---

<div class="row-fluid">
  <div class="span4 recent">
    <h3>Recent posts</h3>
    <ul class="unstyled">
      {% for post in site.posts limit: 5 %}
      <li{% if page.title == post.title %} class="active"{% endif %}><a href="{{ post.url }}">{{ post.title }}</a></li>
      {% endfor %}
    </ul>
  </div>

  <div class="span8 simple-page">
    <div class="text-item blog inner">
      <h2 class="date">
        <span>{{ page.title }}</span>
        <span>{{ page.date | date: "%B %e, %Y" }} · {{ page.author | upcase }}</span>
      </h2>

      {% if page.image %}<img src="{{ page.image }}" alt="{{ page.title }}" class="text-img" />{% endif %}

      {{ content }}

      <div id="disqus_thread"></div>
      <script type="text/javascript">
        /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
        var disqus_shortname = 'druidio'; // required: replace example with your forum shortname

        /* * * DON'T EDIT BELOW THIS LINE * * */
        (function() {
          var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
          dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
          (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();
      </script>
      <noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
      <a href="http://disqus.com" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a>

    </div>
  </div>
</div>

@ -0,0 +1,24 @@
---
layout: post
title: "Welcome to Jekyll!"
date: 2013-09-16 13:06:49
categories: jekyll update
---

You'll find this post in your `_posts` directory - edit this post and re-build (or run with the `-w` switch) to see your changes!
To add new posts, simply add a file in the `_posts` directory that follows the convention: YYYY-MM-DD-name-of-post.ext.

Jekyll also offers powerful support for code snippets:

{% highlight ruby %}
def print_hi(name)
  puts "Hi, #{name}"
end
print_hi('Tom')
#=> prints 'Hi, Tom' to STDOUT.
{% endhighlight %}

Check out the [Jekyll docs][jekyll] for more info on how to get the most out of Jekyll. File all bugs/feature requests at [Jekyll's GitHub repo][jekyll-gh].

[jekyll-gh]: https://github.com/mojombo/jekyll
[jekyll]: http://jekyllrb.com

@ -1,68 +1,71 @@
---
layout: default
---
Contents
\* [Introduction](Home.html)
\* [Download](Download.html)
\* [Support](Support.html)
\* [Contribute](Contribute.html)
========================

Getting Started
\* [Tutorial: A First Look at Druid](Tutorial:-A-First-Look-at-Druid.html)
\* [Tutorial: The Druid Cluster](Tutorial:-The-Druid-Cluster.html)
\* [Loading Your Data](Loading-Your-Data.html)
\* [Querying Your Data](Querying-Your-Data.html)
\* [Booting a Production Cluster](Booting-a-Production-Cluster.html)
\* [Examples](Examples.html)
\* [Cluster Setup](Cluster-Setup.html)
\* [Configuration](Configuration.html)
--------------------------------------

Data Ingestion
\* [Realtime](Realtime.html)
\* [Batch](Batch-Ingestion.html)
\* [Indexing Service](Indexing-Service.html)
----------------------------

Querying
\* [Querying](Querying.html)
**\* ]
**\* [Aggregations](Aggregations.html)
**\* ]
**\* [Granularities](Granularities.html)
\* Query Types
**\* ]
****\* ]
****\* ]
**\* [SearchQuery](SearchQuery.html)
**\* ]
** [SegmentMetadataQuery](SegmentMetadataQuery.html)
**\* ]
**\* [TimeseriesQuery](TimeseriesQuery.html)
---------------------------

Architecture
\* [Design](Design.html)
\* [Segments](Segments.html)
\* Node Types
**\* ]
**\* [Broker](Broker.html)
**\* ]
****\* ]
**\* [Realtime](Realtime.html)
**\* ]
**\* [Plumber](Plumber.html)
\* External Dependencies
**\* ]
**\* [MySQL](MySQL.html)
**\* ]
** [Concepts and Terminology](Concepts-and-Terminology.html)
-------------------------------

Development
\* [Versioning](Versioning.html)
\* [Build From Source](Build-From-Source.html)
\* [Libraries](Libraries.html)
------------------------

Misc
\* [Thanks](Thanks.html)
-------------

File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
@ -0,0 +1,96 @@
|
||||||
|
<!--
|
||||||
|
PIE: CSS3 rendering for IE
|
||||||
|
Version 1.0.0
|
||||||
|
http://css3pie.com
|
||||||
|
Dual-licensed for use under the Apache License Version 2.0 or the General Public License (GPL) Version 2.
|
||||||
|
-->
|
||||||
|
<PUBLIC:COMPONENT lightWeight="true">
|
||||||
|
<!-- saved from url=(0014)about:internet -->
|
||||||
|
<PUBLIC:ATTACH EVENT="oncontentready" FOR="element" ONEVENT="init()" />
|
||||||
|
<PUBLIC:ATTACH EVENT="ondocumentready" FOR="element" ONEVENT="init()" />
|
||||||
|
<PUBLIC:ATTACH EVENT="ondetach" FOR="element" ONEVENT="cleanup()" />
|
||||||
|
|
||||||
|
<script type="text/javascript">
|
||||||
|
var doc = element.document;var f=window.PIE;
|
||||||
|
if(!f){f=window.PIE={F:"-pie-",nb:"Pie",La:"pie_",Ac:{TD:1,TH:1},cc:{TABLE:1,THEAD:1,TBODY:1,TFOOT:1,TR:1,INPUT:1,TEXTAREA:1,SELECT:1,OPTION:1,IMG:1,HR:1},fc:{A:1,INPUT:1,TEXTAREA:1,SELECT:1,BUTTON:1},Gd:{submit:1,button:1,reset:1},aa:function(){}};try{doc.execCommand("BackgroundImageCache",false,true)}catch(aa){}for(var ba=4,Z=doc.createElement("div"),ca=Z.getElementsByTagName("i"),ga;Z.innerHTML="<!--[if gt IE "+ ++ba+"]><i></i><![endif]--\>",ca[0];);f.O=ba;if(ba===6)f.F=f.F.replace(/^-/,"");f.ja=
|
||||||
|
doc.documentMode||f.O;Z.innerHTML='<v:shape adj="1"/>';ga=Z.firstChild;ga.style.behavior="url(#default#VML)";f.zc=typeof ga.adj==="object";(function(){var a,b=0,c={};f.p={Za:function(d){if(!a){a=doc.createDocumentFragment();a.namespaces.add("css3vml","urn:schemas-microsoft-com:vml")}return a.createElement("css3vml:"+d)},Ba:function(d){return d&&d._pieId||(d._pieId="_"+ ++b)},Eb:function(d){var e,g,j,i,h=arguments;e=1;for(g=h.length;e<g;e++){i=h[e];for(j in i)if(i.hasOwnProperty(j))d[j]=i[j]}return d},
|
||||||
|
Rb:function(d,e,g){var j=c[d],i,h;if(j)Object.prototype.toString.call(j)==="[object Array]"?j.push([e,g]):e.call(g,j);else{h=c[d]=[[e,g]];i=new Image;i.onload=function(){j=c[d]={h:i.width,f:i.height};for(var k=0,n=h.length;k<n;k++)h[k][0].call(h[k][1],j);i.onload=null};i.src=d}}}})();f.Na={gc:function(a,b,c,d){function e(){k=j>=90&&j<270?b:0;n=j<180?c:0;m=b-k;p=c-n}function g(){for(;j<0;)j+=360;j%=360}var j=d.sa;d=d.zb;var i,h,k,n,m,p,r,t;if(d){d=d.coords(a,b,c);i=d.x;h=d.y}if(j){j=j.jd();g();e();
|
||||||
|
if(!d){i=k;h=n}d=f.Na.tc(i,h,j,m,p);a=d[0];d=d[1]}else if(d){a=b-i;d=c-h}else{i=h=a=0;d=c}r=a-i;t=d-h;if(j===void 0){j=!r?t<0?90:270:!t?r<0?180:0:-Math.atan2(t,r)/Math.PI*180;g();e()}return{sa:j,xc:i,yc:h,td:a,ud:d,Wd:k,Xd:n,rd:m,sd:p,kd:r,ld:t,rc:f.Na.dc(i,h,a,d)}},tc:function(a,b,c,d,e){if(c===0||c===180)return[d,b];else if(c===90||c===270)return[a,e];else{c=Math.tan(-c*Math.PI/180);a=c*a-b;b=-1/c;d=b*d-e;e=b-c;return[(d-a)/e,(c*d-b*a)/e]}},dc:function(a,b,c,d){a=c-a;b=d-b;return Math.abs(a===0?
|
||||||
|
b:b===0?a:Math.sqrt(a*a+b*b))}};f.ea=function(){this.Gb=[];this.oc={}};f.ea.prototype={ba:function(a){var b=f.p.Ba(a),c=this.oc,d=this.Gb;if(!(b in c)){c[b]=d.length;d.push(a)}},Ha:function(a){a=f.p.Ba(a);var b=this.oc;if(a&&a in b){delete this.Gb[b[a]];delete b[a]}},xa:function(){for(var a=this.Gb,b=a.length;b--;)a[b]&&a[b]()}};f.Oa=new f.ea;f.Oa.Rd=function(){var a=this,b;if(!a.Sd){b=doc.documentElement.currentStyle.getAttribute(f.F+"poll-interval")||250;(function c(){a.xa();setTimeout(c,b)})();
a.Sd=1}};(function(){function a(){f.L.xa();window.detachEvent("onunload",a);window.PIE=null}f.L=new f.ea;window.attachEvent("onunload",a);f.L.ta=function(b,c,d){b.attachEvent(c,d);this.ba(function(){b.detachEvent(c,d)})}})();f.Qa=new f.ea;f.L.ta(window,"onresize",function(){f.Qa.xa()});(function(){function a(){f.mb.xa()}f.mb=new f.ea;f.L.ta(window,"onscroll",a);f.Qa.ba(a)})();(function(){function a(){c=f.kb.md()}function b(){if(c){for(var d=0,e=c.length;d<e;d++)f.attach(c[d]);c=0}}var c;if(f.ja<9){f.L.ta(window,
"onbeforeprint",a);f.L.ta(window,"onafterprint",b)}})();f.lb=new f.ea;f.L.ta(doc,"onmouseup",function(){f.lb.xa()});f.he=function(){function a(h){this.Y=h}var b=doc.createElement("length-calc"),c=doc.body||doc.documentElement,d=b.style,e={},g=["mm","cm","in","pt","pc"],j=g.length,i={};d.position="absolute";d.top=d.left="-9999px";for(c.appendChild(b);j--;){d.width="100"+g[j];e[g[j]]=b.offsetWidth/100}c.removeChild(b);d.width="1em";a.prototype={Kb:/(px|em|ex|mm|cm|in|pt|pc|%)$/,ic:function(){var h=
this.Jd;if(h===void 0)h=this.Jd=parseFloat(this.Y);return h},yb:function(){var h=this.ae;if(!h)h=this.ae=(h=this.Y.match(this.Kb))&&h[0]||"px";return h},a:function(h,k){var n=this.ic(),m=this.yb();switch(m){case "px":return n;case "%":return n*(typeof k==="function"?k():k)/100;case "em":return n*this.xb(h);case "ex":return n*this.xb(h)/2;default:return n*e[m]}},xb:function(h){var k=h.currentStyle.fontSize,n,m;if(k.indexOf("px")>0)return parseFloat(k);else if(h.tagName in f.cc){m=this;n=h.parentNode;
return f.n(k).a(n,function(){return m.xb(n)})}else{h.appendChild(b);k=b.offsetWidth;b.parentNode===h&&h.removeChild(b);return k}}};f.n=function(h){return i[h]||(i[h]=new a(h))};return a}();f.Ja=function(){function a(e){this.X=e}var b=f.n("50%"),c={top:1,center:1,bottom:1},d={left:1,center:1,right:1};a.prototype={zd:function(){if(!this.ac){var e=this.X,g=e.length,j=f.v,i=j.qa,h=f.n("0");i=i.na;h=["left",h,"top",h];if(g===1){e.push(new j.ob(i,"center"));g++}if(g===2){i&(e[0].k|e[1].k)&&e[0].d in c&&
e[1].d in d&&e.push(e.shift());if(e[0].k&i)if(e[0].d==="center")h[1]=b;else h[0]=e[0].d;else if(e[0].W())h[1]=f.n(e[0].d);if(e[1].k&i)if(e[1].d==="center")h[3]=b;else h[2]=e[1].d;else if(e[1].W())h[3]=f.n(e[1].d)}this.ac=h}return this.ac},coords:function(e,g,j){var i=this.zd(),h=i[1].a(e,g);e=i[3].a(e,j);return{x:i[0]==="right"?g-h:h,y:i[2]==="bottom"?j-e:e}}};return a}();f.Ka=function(){function a(b,c){this.h=b;this.f=c}a.prototype={a:function(b,c,d,e,g){var j=this.h,i=this.f,h=c/d;e=e/g;if(j===
"contain"){j=e>h?c:d*e;i=e>h?c/e:d}else if(j==="cover"){j=e<h?c:d*e;i=e<h?c/e:d}else if(j==="auto"){i=i==="auto"?g:i.a(b,d);j=i*e}else{j=j.a(b,c);i=i==="auto"?j/e:i.a(b,d)}return{h:j,f:i}}};a.Kc=new a("auto","auto");return a}();f.Ec=function(){function a(b){this.Y=b}a.prototype={Kb:/[a-z]+$/i,yb:function(){return this.ad||(this.ad=this.Y.match(this.Kb)[0].toLowerCase())},jd:function(){var b=this.Vc,c;if(b===undefined){b=this.yb();c=parseFloat(this.Y,10);b=this.Vc=b==="deg"?c:b==="rad"?c/Math.PI*180:
b==="grad"?c/400*360:b==="turn"?c*360:0}return b}};return a}();f.Jc=function(){function a(c){this.Y=c}var b={};a.Qd=/\s*rgba\(\s*(\d{1,3})\s*,\s*(\d{1,3})\s*,\s*(\d{1,3})\s*,\s*(\d+|\d*\.\d+)\s*\)\s*/;a.Fb={aliceblue:"F0F8FF",antiquewhite:"FAEBD7",aqua:"0FF",aquamarine:"7FFFD4",azure:"F0FFFF",beige:"F5F5DC",bisque:"FFE4C4",black:"000",blanchedalmond:"FFEBCD",blue:"00F",blueviolet:"8A2BE2",brown:"A52A2A",burlywood:"DEB887",cadetblue:"5F9EA0",chartreuse:"7FFF00",chocolate:"D2691E",coral:"FF7F50",cornflowerblue:"6495ED",
cornsilk:"FFF8DC",crimson:"DC143C",cyan:"0FF",darkblue:"00008B",darkcyan:"008B8B",darkgoldenrod:"B8860B",darkgray:"A9A9A9",darkgreen:"006400",darkkhaki:"BDB76B",darkmagenta:"8B008B",darkolivegreen:"556B2F",darkorange:"FF8C00",darkorchid:"9932CC",darkred:"8B0000",darksalmon:"E9967A",darkseagreen:"8FBC8F",darkslateblue:"483D8B",darkslategray:"2F4F4F",darkturquoise:"00CED1",darkviolet:"9400D3",deeppink:"FF1493",deepskyblue:"00BFFF",dimgray:"696969",dodgerblue:"1E90FF",firebrick:"B22222",floralwhite:"FFFAF0",
forestgreen:"228B22",fuchsia:"F0F",gainsboro:"DCDCDC",ghostwhite:"F8F8FF",gold:"FFD700",goldenrod:"DAA520",gray:"808080",green:"008000",greenyellow:"ADFF2F",honeydew:"F0FFF0",hotpink:"FF69B4",indianred:"CD5C5C",indigo:"4B0082",ivory:"FFFFF0",khaki:"F0E68C",lavender:"E6E6FA",lavenderblush:"FFF0F5",lawngreen:"7CFC00",lemonchiffon:"FFFACD",lightblue:"ADD8E6",lightcoral:"F08080",lightcyan:"E0FFFF",lightgoldenrodyellow:"FAFAD2",lightgreen:"90EE90",lightgrey:"D3D3D3",lightpink:"FFB6C1",lightsalmon:"FFA07A",
lightseagreen:"20B2AA",lightskyblue:"87CEFA",lightslategray:"789",lightsteelblue:"B0C4DE",lightyellow:"FFFFE0",lime:"0F0",limegreen:"32CD32",linen:"FAF0E6",magenta:"F0F",maroon:"800000",mediumauqamarine:"66CDAA",mediumblue:"0000CD",mediumorchid:"BA55D3",mediumpurple:"9370D8",mediumseagreen:"3CB371",mediumslateblue:"7B68EE",mediumspringgreen:"00FA9A",mediumturquoise:"48D1CC",mediumvioletred:"C71585",midnightblue:"191970",mintcream:"F5FFFA",mistyrose:"FFE4E1",moccasin:"FFE4B5",navajowhite:"FFDEAD",
navy:"000080",oldlace:"FDF5E6",olive:"808000",olivedrab:"688E23",orange:"FFA500",orangered:"FF4500",orchid:"DA70D6",palegoldenrod:"EEE8AA",palegreen:"98FB98",paleturquoise:"AFEEEE",palevioletred:"D87093",papayawhip:"FFEFD5",peachpuff:"FFDAB9",peru:"CD853F",pink:"FFC0CB",plum:"DDA0DD",powderblue:"B0E0E6",purple:"800080",red:"F00",rosybrown:"BC8F8F",royalblue:"4169E1",saddlebrown:"8B4513",salmon:"FA8072",sandybrown:"F4A460",seagreen:"2E8B57",seashell:"FFF5EE",sienna:"A0522D",silver:"C0C0C0",skyblue:"87CEEB",
slateblue:"6A5ACD",slategray:"708090",snow:"FFFAFA",springgreen:"00FF7F",steelblue:"4682B4",tan:"D2B48C",teal:"008080",thistle:"D8BFD8",tomato:"FF6347",turquoise:"40E0D0",violet:"EE82EE",wheat:"F5DEB3",white:"FFF",whitesmoke:"F5F5F5",yellow:"FF0",yellowgreen:"9ACD32"};a.prototype={parse:function(){if(!this.Ua){var c=this.Y,d;if(d=c.match(a.Qd)){this.Ua="rgb("+d[1]+","+d[2]+","+d[3]+")";this.Yb=parseFloat(d[4])}else{if((d=c.toLowerCase())in a.Fb)c="#"+a.Fb[d];this.Ua=c;this.Yb=c==="transparent"?0:
1}}},U:function(c){this.parse();return this.Ua==="currentColor"?c.currentStyle.color:this.Ua},fa:function(){this.parse();return this.Yb}};f.ha=function(c){return b[c]||(b[c]=new a(c))};return a}();f.v=function(){function a(c){this.$a=c;this.ch=0;this.X=[];this.Ga=0}var b=a.qa={Ia:1,Wb:2,z:4,Lc:8,Xb:16,na:32,K:64,oa:128,pa:256,Ra:512,Tc:1024,URL:2048};a.ob=function(c,d){this.k=c;this.d=d};a.ob.prototype={Ca:function(){return this.k&b.K||this.k&b.oa&&this.d==="0"},W:function(){return this.Ca()||this.k&
b.Ra}};a.prototype={de:/\s/,Kd:/^[\+\-]?(\d*\.)?\d+/,url:/^url\(\s*("([^"]*)"|'([^']*)'|([!#$%&*-~]*))\s*\)/i,nc:/^\-?[_a-z][\w-]*/i,Yd:/^("([^"]*)"|'([^']*)')/,Bd:/^#([\da-f]{6}|[\da-f]{3})/i,be:{px:b.K,em:b.K,ex:b.K,mm:b.K,cm:b.K,"in":b.K,pt:b.K,pc:b.K,deg:b.Ia,rad:b.Ia,grad:b.Ia},fd:{rgb:1,rgba:1,hsl:1,hsla:1},next:function(c){function d(p,r){p=new a.ob(p,r);if(!c){k.X.push(p);k.Ga++}return p}function e(){k.Ga++;return null}var g,j,i,h,k=this;if(this.Ga<this.X.length)return this.X[this.Ga++];for(;this.de.test(this.$a.charAt(this.ch));)this.ch++;
if(this.ch>=this.$a.length)return e();j=this.ch;g=this.$a.substring(this.ch);i=g.charAt(0);switch(i){case "#":if(h=g.match(this.Bd)){this.ch+=h[0].length;return d(b.z,h[0])}break;case '"':case "'":if(h=g.match(this.Yd)){this.ch+=h[0].length;return d(b.Tc,h[2]||h[3]||"")}break;case "/":case ",":this.ch++;return d(b.pa,i);case "u":if(h=g.match(this.url)){this.ch+=h[0].length;return d(b.URL,h[2]||h[3]||h[4]||"")}}if(h=g.match(this.Kd)){i=h[0];this.ch+=i.length;if(g.charAt(i.length)==="%"){this.ch++;
return d(b.Ra,i+"%")}if(h=g.substring(i.length).match(this.nc)){i+=h[0];this.ch+=h[0].length;return d(this.be[h[0].toLowerCase()]||b.Lc,i)}return d(b.oa,i)}if(h=g.match(this.nc)){i=h[0];this.ch+=i.length;if(i.toLowerCase()in f.Jc.Fb||i==="currentColor"||i==="transparent")return d(b.z,i);if(g.charAt(i.length)==="("){this.ch++;if(i.toLowerCase()in this.fd){g=function(p){return p&&p.k&b.oa};h=function(p){return p&&p.k&(b.oa|b.Ra)};var n=function(p,r){return p&&p.d===r},m=function(){return k.next(1)};
if((i.charAt(0)==="r"?h(m()):g(m()))&&n(m(),",")&&h(m())&&n(m(),",")&&h(m())&&(i==="rgb"||i==="hsa"||n(m(),",")&&g(m()))&&n(m(),")"))return d(b.z,this.$a.substring(j,this.ch));return e()}return d(b.Xb,i)}return d(b.na,i)}this.ch++;return d(b.Wb,i)},D:function(){return this.X[this.Ga-- -2]},all:function(){for(;this.next(););return this.X},ma:function(c,d){for(var e=[],g,j;g=this.next();){if(c(g)){j=true;this.D();break}e.push(g)}return d&&!j?null:e}};return a}();var ha=function(a){this.e=a};ha.prototype=
{Z:0,Od:function(){var a=this.qb,b;return!a||(b=this.o())&&(a.x!==b.x||a.y!==b.y)},Td:function(){var a=this.qb,b;return!a||(b=this.o())&&(a.h!==b.h||a.f!==b.f)},hc:function(){var a=this.e,b=a.getBoundingClientRect(),c=f.ja===9,d=f.O===7,e=b.right-b.left;return{x:b.left,y:b.top,h:c||d?a.offsetWidth:e,f:c||d?a.offsetHeight:b.bottom-b.top,Hd:d&&e?a.offsetWidth/e:1}},o:function(){return this.Z?this.Va||(this.Va=this.hc()):this.hc()},Ad:function(){return!!this.qb},cb:function(){++this.Z},hb:function(){if(!--this.Z){if(this.Va)this.qb=
this.Va;this.Va=null}}};(function(){function a(b){var c=f.p.Ba(b);return function(){if(this.Z){var d=this.$b||(this.$b={});return c in d?d[c]:(d[c]=b.call(this))}else return b.call(this)}}f.B={Z:0,ka:function(b){function c(d){this.e=d;this.Zb=this.ia()}f.p.Eb(c.prototype,f.B,b);c.$c={};return c},j:function(){var b=this.ia(),c=this.constructor.$c;return b?b in c?c[b]:(c[b]=this.la(b)):null},ia:a(function(){var b=this.e,c=this.constructor,d=b.style;b=b.currentStyle;var e=this.wa,g=this.Fa,j=c.Yc||(c.Yc=
f.F+e);c=c.Zc||(c.Zc=f.nb+g.charAt(0).toUpperCase()+g.substring(1));return d[c]||b.getAttribute(j)||d[g]||b.getAttribute(e)}),i:a(function(){return!!this.j()}),H:a(function(){var b=this.ia(),c=b!==this.Zb;this.Zb=b;return c}),va:a,cb:function(){++this.Z},hb:function(){--this.Z||delete this.$b}}})();f.Sb=f.B.ka({wa:f.F+"background",Fa:f.nb+"Background",cd:{scroll:1,fixed:1,local:1},fb:{"repeat-x":1,"repeat-y":1,repeat:1,"no-repeat":1},sc:{"padding-box":1,"border-box":1,"content-box":1},Pd:{top:1,right:1,
bottom:1,left:1,center:1},Ud:{contain:1,cover:1},eb:{Ma:"backgroundClip",z:"backgroundColor",da:"backgroundImage",Pa:"backgroundOrigin",S:"backgroundPosition",T:"backgroundRepeat",Sa:"backgroundSize"},la:function(a){function b(s){return s&&s.W()||s.k&k&&s.d in t}function c(s){return s&&(s.W()&&f.n(s.d)||s.d==="auto"&&"auto")}var d=this.e.currentStyle,e,g,j,i=f.v.qa,h=i.pa,k=i.na,n=i.z,m,p,r=0,t=this.Pd,v,l,q={M:[]};if(this.wb()){e=new f.v(a);for(j={};g=e.next();){m=g.k;p=g.d;if(!j.P&&m&i.Xb&&p===
"linear-gradient"){v={ca:[],P:p};for(l={};g=e.next();){m=g.k;p=g.d;if(m&i.Wb&&p===")"){l.color&&v.ca.push(l);v.ca.length>1&&f.p.Eb(j,v);break}if(m&n){if(v.sa||v.zb){g=e.D();if(g.k!==h)break;e.next()}l={color:f.ha(p)};g=e.next();if(g.W())l.db=f.n(g.d);else e.D()}else if(m&i.Ia&&!v.sa&&!l.color&&!v.ca.length)v.sa=new f.Ec(g.d);else if(b(g)&&!v.zb&&!l.color&&!v.ca.length){e.D();v.zb=new f.Ja(e.ma(function(s){return!b(s)},false))}else if(m&h&&p===","){if(l.color){v.ca.push(l);l={}}}else break}}else if(!j.P&&
m&i.URL){j.Ab=p;j.P="image"}else if(b(g)&&!j.$){e.D();j.$=new f.Ja(e.ma(function(s){return!b(s)},false))}else if(m&k)if(p in this.fb&&!j.bb)j.bb=p;else if(p in this.sc&&!j.Wa){j.Wa=p;if((g=e.next())&&g.k&k&&g.d in this.sc)j.ub=g.d;else{j.ub=p;e.D()}}else if(p in this.cd&&!j.bc)j.bc=p;else return null;else if(m&n&&!q.color)q.color=f.ha(p);else if(m&h&&p==="/"&&!j.Xa&&j.$){g=e.next();if(g.k&k&&g.d in this.Ud)j.Xa=new f.Ka(g.d);else if(g=c(g)){m=c(e.next());if(!m){m=g;e.D()}j.Xa=new f.Ka(g,m)}else return null}else if(m&
h&&p===","&&j.P){j.Hb=a.substring(r,e.ch-1);r=e.ch;q.M.push(j);j={}}else return null}if(j.P){j.Hb=a.substring(r);q.M.push(j)}}else this.Bc(f.ja<9?function(){var s=this.eb,o=d[s.S+"X"],u=d[s.S+"Y"],x=d[s.da],y=d[s.z];if(y!=="transparent")q.color=f.ha(y);if(x!=="none")q.M=[{P:"image",Ab:(new f.v(x)).next().d,bb:d[s.T],$:new f.Ja((new f.v(o+" "+u)).all())}]}:function(){var s=this.eb,o=/\s*,\s*/,u=d[s.da].split(o),x=d[s.z],y,z,B,E,D,C;if(x!=="transparent")q.color=f.ha(x);if((E=u.length)&&u[0]!=="none"){x=
d[s.T].split(o);y=d[s.S].split(o);z=d[s.Pa].split(o);B=d[s.Ma].split(o);s=d[s.Sa].split(o);q.M=[];for(o=0;o<E;o++)if((D=u[o])&&D!=="none"){C=s[o].split(" ");q.M.push({Hb:D+" "+x[o]+" "+y[o]+" / "+s[o]+" "+z[o]+" "+B[o],P:"image",Ab:(new f.v(D)).next().d,bb:x[o],$:new f.Ja((new f.v(y[o])).all()),Wa:z[o],ub:B[o],Xa:new f.Ka(C[0],C[1])})}}});return q.color||q.M[0]?q:null},Bc:function(a){var b=f.ja>8,c=this.eb,d=this.e.runtimeStyle,e=d[c.da],g=d[c.z],j=d[c.T],i,h,k,n;if(e)d[c.da]="";if(g)d[c.z]="";if(j)d[c.T]=
"";if(b){i=d[c.Ma];h=d[c.Pa];n=d[c.S];k=d[c.Sa];if(i)d[c.Ma]="";if(h)d[c.Pa]="";if(n)d[c.S]="";if(k)d[c.Sa]=""}a=a.call(this);if(e)d[c.da]=e;if(g)d[c.z]=g;if(j)d[c.T]=j;if(b){if(i)d[c.Ma]=i;if(h)d[c.Pa]=h;if(n)d[c.S]=n;if(k)d[c.Sa]=k}return a},ia:f.B.va(function(){return this.wb()||this.Bc(function(){var a=this.e.currentStyle,b=this.eb;return a[b.z]+" "+a[b.da]+" "+a[b.T]+" "+a[b.S+"X"]+" "+a[b.S+"Y"]})}),wb:f.B.va(function(){var a=this.e;return a.style[this.Fa]||a.currentStyle.getAttribute(this.wa)}),
qc:function(){var a=0;if(f.O<7){a=this.e;a=""+(a.style[f.nb+"PngFix"]||a.currentStyle.getAttribute(f.F+"png-fix"))==="true"}return a},i:f.B.va(function(){return(this.wb()||this.qc())&&!!this.j()})});f.Vb=f.B.ka({wc:["Top","Right","Bottom","Left"],Id:{thin:"1px",medium:"3px",thick:"5px"},la:function(){var a={},b={},c={},d=false,e=true,g=true,j=true;this.Cc(function(){for(var i=this.e.currentStyle,h=0,k,n,m,p,r,t,v;h<4;h++){m=this.wc[h];v=m.charAt(0).toLowerCase();k=b[v]=i["border"+m+"Style"];n=i["border"+
m+"Color"];m=i["border"+m+"Width"];if(h>0){if(k!==p)g=false;if(n!==r)e=false;if(m!==t)j=false}p=k;r=n;t=m;c[v]=f.ha(n);m=a[v]=f.n(b[v]==="none"?"0":this.Id[m]||m);if(m.a(this.e)>0)d=true}});return d?{J:a,Zd:b,gd:c,ee:j,hd:e,$d:g}:null},ia:f.B.va(function(){var a=this.e,b=a.currentStyle,c;a.tagName in f.Ac&&a.offsetParent.currentStyle.borderCollapse==="collapse"||this.Cc(function(){c=b.borderWidth+"|"+b.borderStyle+"|"+b.borderColor});return c}),Cc:function(a){var b=this.e.runtimeStyle,c=b.borderWidth,
d=b.borderColor;if(c)b.borderWidth="";if(d)b.borderColor="";a=a.call(this);if(c)b.borderWidth=c;if(d)b.borderColor=d;return a}});(function(){f.jb=f.B.ka({wa:"border-radius",Fa:"borderRadius",la:function(b){var c=null,d,e,g,j,i=false;if(b){e=new f.v(b);var h=function(){for(var k=[],n;(g=e.next())&&g.W();){j=f.n(g.d);n=j.ic();if(n<0)return null;if(n>0)i=true;k.push(j)}return k.length>0&&k.length<5?{tl:k[0],tr:k[1]||k[0],br:k[2]||k[0],bl:k[3]||k[1]||k[0]}:null};if(b=h()){if(g){if(g.k&f.v.qa.pa&&g.d===
"/")d=h()}else d=b;if(i&&b&&d)c={x:b,y:d}}}return c}});var a=f.n("0");a={tl:a,tr:a,br:a,bl:a};f.jb.Dc={x:a,y:a}})();f.Ub=f.B.ka({wa:"border-image",Fa:"borderImage",fb:{stretch:1,round:1,repeat:1,space:1},la:function(a){var b=null,c,d,e,g,j,i,h=0,k=f.v.qa,n=k.na,m=k.oa,p=k.Ra;if(a){c=new f.v(a);b={};for(var r=function(l){return l&&l.k&k.pa&&l.d==="/"},t=function(l){return l&&l.k&n&&l.d==="fill"},v=function(){g=c.ma(function(l){return!(l.k&(m|p))});if(t(c.next())&&!b.fill)b.fill=true;else c.D();if(r(c.next())){h++;
j=c.ma(function(l){return!l.W()&&!(l.k&n&&l.d==="auto")});if(r(c.next())){h++;i=c.ma(function(l){return!l.Ca()})}}else c.D()};a=c.next();){d=a.k;e=a.d;if(d&(m|p)&&!g){c.D();v()}else if(t(a)&&!b.fill){b.fill=true;v()}else if(d&n&&this.fb[e]&&!b.repeat){b.repeat={f:e};if(a=c.next())if(a.k&n&&this.fb[a.d])b.repeat.Ob=a.d;else c.D()}else if(d&k.URL&&!b.src)b.src=e;else return null}if(!b.src||!g||g.length<1||g.length>4||j&&j.length>4||h===1&&j.length<1||i&&i.length>4||h===2&&i.length<1)return null;if(!b.repeat)b.repeat=
{f:"stretch"};if(!b.repeat.Ob)b.repeat.Ob=b.repeat.f;a=function(l,q){return{t:q(l[0]),r:q(l[1]||l[0]),b:q(l[2]||l[0]),l:q(l[3]||l[1]||l[0])}};b.slice=a(g,function(l){return f.n(l.k&m?l.d+"px":l.d)});if(j&&j[0])b.J=a(j,function(l){return l.W()?f.n(l.d):l.d});if(i&&i[0])b.Da=a(i,function(l){return l.Ca()?f.n(l.d):l.d})}return b}});f.Ic=f.B.ka({wa:"box-shadow",Fa:"boxShadow",la:function(a){var b,c=f.n,d=f.v.qa,e;if(a){e=new f.v(a);b={Da:[],Bb:[]};for(a=function(){for(var g,j,i,h,k,n;g=e.next();){i=g.d;
j=g.k;if(j&d.pa&&i===",")break;else if(g.Ca()&&!k){e.D();k=e.ma(function(m){return!m.Ca()})}else if(j&d.z&&!h)h=i;else if(j&d.na&&i==="inset"&&!n)n=true;else return false}g=k&&k.length;if(g>1&&g<5){(n?b.Bb:b.Da).push({fe:c(k[0].d),ge:c(k[1].d),blur:c(k[2]?k[2].d:"0"),Vd:c(k[3]?k[3].d:"0"),color:f.ha(h||"currentColor")});return true}return false};a(););}return b&&(b.Bb.length||b.Da.length)?b:null}});f.Uc=f.B.ka({ia:f.B.va(function(){var a=this.e.currentStyle;return a.visibility+"|"+a.display}),la:function(){var a=
this.e,b=a.runtimeStyle;a=a.currentStyle;var c=b.visibility,d;b.visibility="";d=a.visibility;b.visibility=c;return{ce:d!=="hidden",nd:a.display!=="none"}},i:function(){return false}});f.u={R:function(a){function b(c,d,e,g){this.e=c;this.s=d;this.g=e;this.parent=g}f.p.Eb(b.prototype,f.u,a);return b},Cb:false,Q:function(){return false},Ea:f.aa,Lb:function(){this.m();this.i()&&this.V()},ib:function(){this.Cb=true},Mb:function(){this.i()?this.V():this.m()},sb:function(a,b){this.vc(a);for(var c=this.ra||
(this.ra=[]),d=a+1,e=c.length,g;d<e;d++)if(g=c[d])break;c[a]=b;this.I().insertBefore(b,g||null)},za:function(a){var b=this.ra;return b&&b[a]||null},vc:function(a){var b=this.za(a),c=this.Ta;if(b&&c){c.removeChild(b);this.ra[a]=null}},Aa:function(a,b,c,d){var e=this.rb||(this.rb={}),g=e[a];if(!g){g=e[a]=f.p.Za("shape");if(b)g.appendChild(g[b]=f.p.Za(b));if(d){c=this.za(d);if(!c){this.sb(d,doc.createElement("group"+d));c=this.za(d)}}c.appendChild(g);a=g.style;a.position="absolute";a.left=a.top=0;a.behavior=
"url(#default#VML)"}return g},vb:function(a){var b=this.rb,c=b&&b[a];if(c){c.parentNode.removeChild(c);delete b[a]}return!!c},kc:function(a){var b=this.e,c=this.s.o(),d=c.h,e=c.f,g,j,i,h,k,n;c=a.x.tl.a(b,d);g=a.y.tl.a(b,e);j=a.x.tr.a(b,d);i=a.y.tr.a(b,e);h=a.x.br.a(b,d);k=a.y.br.a(b,e);n=a.x.bl.a(b,d);a=a.y.bl.a(b,e);d=Math.min(d/(c+j),e/(i+k),d/(n+h),e/(g+a));if(d<1){c*=d;g*=d;j*=d;i*=d;h*=d;k*=d;n*=d;a*=d}return{x:{tl:c,tr:j,br:h,bl:n},y:{tl:g,tr:i,br:k,bl:a}}},ya:function(a,b,c){b=b||1;var d,e,
g=this.s.o();e=g.h*b;g=g.f*b;var j=this.g.G,i=Math.floor,h=Math.ceil,k=a?a.Jb*b:0,n=a?a.Ib*b:0,m=a?a.tb*b:0;a=a?a.Db*b:0;var p,r,t,v,l;if(c||j.i()){d=this.kc(c||j.j());c=d.x.tl*b;j=d.y.tl*b;p=d.x.tr*b;r=d.y.tr*b;t=d.x.br*b;v=d.y.br*b;l=d.x.bl*b;b=d.y.bl*b;e="m"+i(a)+","+i(j)+"qy"+i(c)+","+i(k)+"l"+h(e-p)+","+i(k)+"qx"+h(e-n)+","+i(r)+"l"+h(e-n)+","+h(g-v)+"qy"+h(e-t)+","+h(g-m)+"l"+i(l)+","+h(g-m)+"qx"+i(a)+","+h(g-b)+" x e"}else e="m"+i(a)+","+i(k)+"l"+h(e-n)+","+i(k)+"l"+h(e-n)+","+h(g-m)+"l"+i(a)+
","+h(g-m)+"xe";return e},I:function(){var a=this.parent.za(this.N),b;if(!a){a=doc.createElement(this.Ya);b=a.style;b.position="absolute";b.top=b.left=0;this.parent.sb(this.N,a)}return a},mc:function(){var a=this.e,b=a.currentStyle,c=a.runtimeStyle,d=a.tagName,e=f.O===6,g;if(e&&(d in f.cc||d==="FIELDSET")||d==="BUTTON"||d==="INPUT"&&a.type in f.Gd){c.borderWidth="";d=this.g.w.wc;for(g=d.length;g--;){e=d[g];c["padding"+e]="";c["padding"+e]=f.n(b["padding"+e]).a(a)+f.n(b["border"+e+"Width"]).a(a)+(f.O!==
8&&g%2?1:0)}c.borderWidth=0}else if(e){if(a.childNodes.length!==1||a.firstChild.tagName!=="ie6-mask"){b=doc.createElement("ie6-mask");d=b.style;d.visibility="visible";for(d.zoom=1;d=a.firstChild;)b.appendChild(d);a.appendChild(b);c.visibility="hidden"}}else c.borderColor="transparent"},ie:function(){},m:function(){this.parent.vc(this.N);delete this.rb;delete this.ra}};f.Rc=f.u.R({i:function(){var a=this.ed;for(var b in a)if(a.hasOwnProperty(b)&&a[b].i())return true;return false},Q:function(){return this.g.Pb.H()},
ib:function(){if(this.i()){var a=this.jc(),b=a,c;a=a.currentStyle;var d=a.position,e=this.I().style,g=0,j=0;j=this.s.o();var i=j.Hd;if(d==="fixed"&&f.O>6){g=j.x*i;j=j.y*i;b=d}else{do b=b.offsetParent;while(b&&b.currentStyle.position==="static");if(b){c=b.getBoundingClientRect();b=b.currentStyle;g=(j.x-c.left)*i-(parseFloat(b.borderLeftWidth)||0);j=(j.y-c.top)*i-(parseFloat(b.borderTopWidth)||0)}else{b=doc.documentElement;g=(j.x+b.scrollLeft-b.clientLeft)*i;j=(j.y+b.scrollTop-b.clientTop)*i}b="absolute"}e.position=
b;e.left=g;e.top=j;e.zIndex=d==="static"?-1:a.zIndex;this.Cb=true}},Mb:f.aa,Nb:function(){var a=this.g.Pb.j();this.I().style.display=a.ce&&a.nd?"":"none"},Lb:function(){this.i()?this.Nb():this.m()},jc:function(){var a=this.e;return a.tagName in f.Ac?a.offsetParent:a},I:function(){var a=this.Ta,b;if(!a){b=this.jc();a=this.Ta=doc.createElement("css3-container");a.style.direction="ltr";this.Nb();b.parentNode.insertBefore(a,b)}return a},ab:f.aa,m:function(){var a=this.Ta,b;if(a&&(b=a.parentNode))b.removeChild(a);
delete this.Ta;delete this.ra}});f.Fc=f.u.R({N:2,Ya:"background",Q:function(){var a=this.g;return a.C.H()||a.G.H()},i:function(){var a=this.g;return a.q.i()||a.G.i()||a.C.i()||a.ga.i()&&a.ga.j().Bb},V:function(){var a=this.s.o();if(a.h&&a.f){this.od();this.pd()}},od:function(){var a=this.g.C.j(),b=this.s.o(),c=this.e,d=a&&a.color,e,g;if(d&&d.fa()>0){this.lc();a=this.Aa("bgColor","fill",this.I(),1);e=b.h;b=b.f;a.stroked=false;a.coordsize=e*2+","+b*2;a.coordorigin="1,1";a.path=this.ya(null,2);g=a.style;
g.width=e;g.height=b;a.fill.color=d.U(c);c=d.fa();if(c<1)a.fill.opacity=c}else this.vb("bgColor")},pd:function(){var a=this.g.C.j(),b=this.s.o();a=a&&a.M;var c,d,e,g,j;if(a){this.lc();d=b.h;e=b.f;for(j=a.length;j--;){b=a[j];c=this.Aa("bgImage"+j,"fill",this.I(),2);c.stroked=false;c.fill.type="tile";c.fillcolor="none";c.coordsize=d*2+","+e*2;c.coordorigin="1,1";c.path=this.ya(0,2);g=c.style;g.width=d;g.height=e;if(b.P==="linear-gradient")this.bd(c,b);else{c.fill.src=b.Ab;this.Nd(c,j)}}}for(j=a?a.length:
0;this.vb("bgImage"+j++););},Nd:function(a,b){var c=this;f.p.Rb(a.fill.src,function(d){var e=c.e,g=c.s.o(),j=g.h;g=g.f;if(j&&g){var i=a.fill,h=c.g,k=h.w.j(),n=k&&k.J;k=n?n.t.a(e):0;var m=n?n.r.a(e):0,p=n?n.b.a(e):0;n=n?n.l.a(e):0;h=h.C.j().M[b];e=h.$?h.$.coords(e,j-d.h-n-m,g-d.f-k-p):{x:0,y:0};h=h.bb;p=m=0;var r=j+1,t=g+1,v=f.O===8?0:1;n=Math.round(e.x)+n+0.5;k=Math.round(e.y)+k+0.5;i.position=n/j+","+k/g;i.size.x=1;i.size=d.h+"px,"+d.f+"px";if(h&&h!=="repeat"){if(h==="repeat-x"||h==="no-repeat"){m=
k+1;t=k+d.f+v}if(h==="repeat-y"||h==="no-repeat"){p=n+1;r=n+d.h+v}a.style.clip="rect("+m+"px,"+r+"px,"+t+"px,"+p+"px)"}}})},bd:function(a,b){var c=this.e,d=this.s.o(),e=d.h,g=d.f;a=a.fill;d=b.ca;var j=d.length,i=Math.PI,h=f.Na,k=h.tc,n=h.dc;b=h.gc(c,e,g,b);h=b.sa;var m=b.xc,p=b.yc,r=b.Wd,t=b.Xd,v=b.rd,l=b.sd,q=b.kd,s=b.ld;b=b.rc;e=h%90?Math.atan2(q*e/g,s)/i*180:h+90;e+=180;e%=360;v=k(r,t,h,v,l);g=n(r,t,v[0],v[1]);i=[];v=k(m,p,h,r,t);n=n(m,p,v[0],v[1])/g*100;k=[];for(h=0;h<j;h++)k.push(d[h].db?d[h].db.a(c,
b):h===0?0:h===j-1?b:null);for(h=1;h<j;h++){if(k[h]===null){m=k[h-1];b=h;do p=k[++b];while(p===null);k[h]=m+(p-m)/(b-h+1)}k[h]=Math.max(k[h],k[h-1])}for(h=0;h<j;h++)i.push(n+k[h]/g*100+"% "+d[h].color.U(c));a.angle=e;a.type="gradient";a.method="sigma";a.color=d[0].color.U(c);a.color2=d[j-1].color.U(c);if(a.colors)a.colors.value=i.join(",");else a.colors=i.join(",")},lc:function(){var a=this.e.runtimeStyle;a.backgroundImage="url(about:blank)";a.backgroundColor="transparent"},m:function(){f.u.m.call(this);
var a=this.e.runtimeStyle;a.backgroundImage=a.backgroundColor=""}});f.Gc=f.u.R({N:4,Ya:"border",Q:function(){var a=this.g;return a.w.H()||a.G.H()},i:function(){var a=this.g;return a.G.i()&&!a.q.i()&&a.w.i()},V:function(){var a=this.e,b=this.g.w.j(),c=this.s.o(),d=c.h;c=c.f;var e,g,j,i,h;if(b){this.mc();b=this.wd(2);i=0;for(h=b.length;i<h;i++){j=b[i];e=this.Aa("borderPiece"+i,j.stroke?"stroke":"fill",this.I());e.coordsize=d*2+","+c*2;e.coordorigin="1,1";e.path=j.path;g=e.style;g.width=d;g.height=c;
e.filled=!!j.fill;e.stroked=!!j.stroke;if(j.stroke){e=e.stroke;e.weight=j.Qb+"px";e.color=j.color.U(a);e.dashstyle=j.stroke==="dashed"?"2 2":j.stroke==="dotted"?"1 1":"solid";e.linestyle=j.stroke==="double"&&j.Qb>2?"ThinThin":"Single"}else e.fill.color=j.fill.U(a)}for(;this.vb("borderPiece"+i++););}},wd:function(a){var b=this.e,c,d,e,g=this.g.w,j=[],i,h,k,n,m=Math.round,p,r,t;if(g.i()){c=g.j();g=c.J;r=c.Zd;t=c.gd;if(c.ee&&c.$d&&c.hd){if(t.t.fa()>0){c=g.t.a(b);k=c/2;j.push({path:this.ya({Jb:k,Ib:k,
tb:k,Db:k},a),stroke:r.t,color:t.t,Qb:c})}}else{a=a||1;c=this.s.o();d=c.h;e=c.f;c=m(g.t.a(b));k=m(g.r.a(b));n=m(g.b.a(b));b=m(g.l.a(b));var v={t:c,r:k,b:n,l:b};b=this.g.G;if(b.i())p=this.kc(b.j());i=Math.floor;h=Math.ceil;var l=function(o,u){return p?p[o][u]:0},q=function(o,u,x,y,z,B){var E=l("x",o),D=l("y",o),C=o.charAt(1)==="r";o=o.charAt(0)==="b";return E>0&&D>0?(B?"al":"ae")+(C?h(d-E):i(E))*a+","+(o?h(e-D):i(D))*a+","+(i(E)-u)*a+","+(i(D)-x)*a+","+y*65535+","+2949075*(z?1:-1):(B?"m":"l")+(C?d-
u:u)*a+","+(o?e-x:x)*a},s=function(o,u,x,y){var z=o==="t"?i(l("x","tl"))*a+","+h(u)*a:o==="r"?h(d-u)*a+","+i(l("y","tr"))*a:o==="b"?h(d-l("x","br"))*a+","+i(e-u)*a:i(u)*a+","+h(e-l("y","bl"))*a;o=o==="t"?h(d-l("x","tr"))*a+","+h(u)*a:o==="r"?h(d-u)*a+","+h(e-l("y","br"))*a:o==="b"?i(l("x","bl"))*a+","+i(e-u)*a:i(u)*a+","+i(l("y","tl"))*a;return x?(y?"m"+o:"")+"l"+z:(y?"m"+z:"")+"l"+o};b=function(o,u,x,y,z,B){var E=o==="l"||o==="r",D=v[o],C,F;if(D>0&&r[o]!=="none"&&t[o].fa()>0){C=v[E?o:u];u=v[E?u:
o];F=v[E?o:x];x=v[E?x:o];if(r[o]==="dashed"||r[o]==="dotted"){j.push({path:q(y,C,u,B+45,0,1)+q(y,0,0,B,1,0),fill:t[o]});j.push({path:s(o,D/2,0,1),stroke:r[o],Qb:D,color:t[o]});j.push({path:q(z,F,x,B,0,1)+q(z,0,0,B-45,1,0),fill:t[o]})}else j.push({path:q(y,C,u,B+45,0,1)+s(o,D,0,0)+q(z,F,x,B,0,0)+(r[o]==="double"&&D>2?q(z,F-i(F/3),x-i(x/3),B-45,1,0)+s(o,h(D/3*2),1,0)+q(y,C-i(C/3),u-i(u/3),B,1,0)+"x "+q(y,i(C/3),i(u/3),B+45,0,1)+s(o,i(D/3),1,0)+q(z,i(F/3),i(x/3),B,0,0):"")+q(z,0,0,B-45,1,0)+s(o,0,1,
0)+q(y,0,0,B,1,0),fill:t[o]})}};b("t","l","r","tl","tr",90);b("r","t","b","tr","br",0);b("b","r","l","br","bl",-90);b("l","b","t","bl","tl",-180)}}return j},m:function(){if(this.ec||!this.g.q.i())this.e.runtimeStyle.borderColor="";f.u.m.call(this)}});f.Tb=f.u.R({N:5,Md:["t","tr","r","br","b","bl","l","tl","c"],Q:function(){return this.g.q.H()},i:function(){return this.g.q.i()},V:function(){this.I();var a=this.g.q.j(),b=this.g.w.j(),c=this.s.o(),d=this.e,e=this.uc;f.p.Rb(a.src,function(g){function j(s,
o,u,x,y){s=e[s].style;var z=Math.max;s.width=z(o,0);s.height=z(u,0);s.left=x;s.top=y}function i(s,o,u){for(var x=0,y=s.length;x<y;x++)e[s[x]].imagedata[o]=u}var h=c.h,k=c.f,n=f.n("0"),m=a.J||(b?b.J:{t:n,r:n,b:n,l:n});n=m.t.a(d);var p=m.r.a(d),r=m.b.a(d);m=m.l.a(d);var t=a.slice,v=t.t.a(d),l=t.r.a(d),q=t.b.a(d);t=t.l.a(d);j("tl",m,n,0,0);j("t",h-m-p,n,m,0);j("tr",p,n,h-p,0);j("r",p,k-n-r,h-p,n);j("br",p,r,h-p,k-r);j("b",h-m-p,r,m,k-r);j("bl",m,r,0,k-r);j("l",m,k-n-r,0,n);j("c",h-m-p,k-n-r,m,n);i(["tl",
"t","tr"],"cropBottom",(g.f-v)/g.f);i(["tl","l","bl"],"cropRight",(g.h-t)/g.h);i(["bl","b","br"],"cropTop",(g.f-q)/g.f);i(["tr","r","br"],"cropLeft",(g.h-l)/g.h);i(["l","r","c"],"cropTop",v/g.f);i(["l","r","c"],"cropBottom",q/g.f);i(["t","b","c"],"cropLeft",t/g.h);i(["t","b","c"],"cropRight",l/g.h);e.c.style.display=a.fill?"":"none"},this)},I:function(){var a=this.parent.za(this.N),b,c,d,e=this.Md,g=e.length;if(!a){a=doc.createElement("border-image");b=a.style;b.position="absolute";this.uc={};for(d=
0;d<g;d++){c=this.uc[e[d]]=f.p.Za("rect");c.appendChild(f.p.Za("imagedata"));b=c.style;b.behavior="url(#default#VML)";b.position="absolute";b.top=b.left=0;c.imagedata.src=this.g.q.j().src;c.stroked=false;c.filled=false;a.appendChild(c)}this.parent.sb(this.N,a)}return a},Ea:function(){if(this.i()){var a=this.e,b=a.runtimeStyle,c=this.g.q.j().J;b.borderStyle="solid";if(c){b.borderTopWidth=c.t.a(a)+"px";b.borderRightWidth=c.r.a(a)+"px";b.borderBottomWidth=c.b.a(a)+"px";b.borderLeftWidth=c.l.a(a)+"px"}this.mc()}},
m:function(){var a=this.e.runtimeStyle;a.borderStyle="";if(this.ec||!this.g.w.i())a.borderColor=a.borderWidth="";f.u.m.call(this)}});f.Hc=f.u.R({N:1,Ya:"outset-box-shadow",Q:function(){var a=this.g;return a.ga.H()||a.G.H()},i:function(){var a=this.g.ga;return a.i()&&a.j().Da[0]},V:function(){function a(C,F,O,H,M,P,I){C=b.Aa("shadow"+C+F,"fill",d,j-C);F=C.fill;C.coordsize=n*2+","+m*2;C.coordorigin="1,1";C.stroked=false;C.filled=true;F.color=M.U(c);if(P){F.type="gradienttitle";F.color2=F.color;F.opacity=
0}C.path=I;l=C.style;l.left=O;l.top=H;l.width=n;l.height=m;return C}var b=this,c=this.e,d=this.I(),e=this.g,g=e.ga.j().Da;e=e.G.j();var j=g.length,i=j,h,k=this.s.o(),n=k.h,m=k.f;k=f.O===8?1:0;for(var p=["tl","tr","br","bl"],r,t,v,l,q,s,o,u,x,y,z,B,E,D;i--;){t=g[i];q=t.fe.a(c);s=t.ge.a(c);h=t.Vd.a(c);o=t.blur.a(c);t=t.color;u=-h-o;if(!e&&o)e=f.jb.Dc;u=this.ya({Jb:u,Ib:u,tb:u,Db:u},2,e);if(o){x=(h+o)*2+n;y=(h+o)*2+m;z=x?o*2/x:0;B=y?o*2/y:0;if(o-h>n/2||o-h>m/2)for(h=4;h--;){r=p[h];E=r.charAt(0)==="b";
D=r.charAt(1)==="r";r=a(i,r,q,s,t,o,u);v=r.fill;v.focusposition=(D?1-z:z)+","+(E?1-B:B);v.focussize="0,0";r.style.clip="rect("+((E?y/2:0)+k)+"px,"+(D?x:x/2)+"px,"+(E?y:y/2)+"px,"+((D?x/2:0)+k)+"px)"}else{r=a(i,"",q,s,t,o,u);v=r.fill;v.focusposition=z+","+B;v.focussize=1-z*2+","+(1-B*2)}}else{r=a(i,"",q,s,t,o,u);q=t.fa();if(q<1)r.fill.opacity=q}}}});f.Pc=f.u.R({N:6,Ya:"imgEl",Q:function(){var a=this.g;return this.e.src!==this.Xc||a.G.H()},i:function(){var a=this.g;return a.G.i()||a.C.qc()},V:function(){this.Xc=
j;this.Cd();var a=this.Aa("img","fill",this.I()),b=a.fill,c=this.s.o(),d=c.h;c=c.f;var e=this.g.w.j(),g=e&&e.J;e=this.e;var j=e.src,i=Math.round,h=e.currentStyle,k=f.n;if(!g||f.O<7){g=f.n("0");g={t:g,r:g,b:g,l:g}}a.stroked=false;b.type="frame";b.src=j;b.position=(d?0.5/d:0)+","+(c?0.5/c:0);a.coordsize=d*2+","+c*2;a.coordorigin="1,1";a.path=this.ya({Jb:i(g.t.a(e)+k(h.paddingTop).a(e)),Ib:i(g.r.a(e)+k(h.paddingRight).a(e)),tb:i(g.b.a(e)+k(h.paddingBottom).a(e)),Db:i(g.l.a(e)+k(h.paddingLeft).a(e))},
2);a=a.style;a.width=d;a.height=c},Cd:function(){this.e.runtimeStyle.filter="alpha(opacity=0)"},m:function(){f.u.m.call(this);this.e.runtimeStyle.filter=""}});f.Oc=f.u.R({ib:f.aa,Mb:f.aa,Nb:f.aa,Lb:f.aa,Ld:/^,+|,+$/g,Fd:/,+/g,gb:function(a,b){(this.pb||(this.pb=[]))[a]=b||void 0},ab:function(){var a=this.pb,b;if(a&&(b=a.join(",").replace(this.Ld,"").replace(this.Fd,","))!==this.Wc)this.Wc=this.e.runtimeStyle.background=b},m:function(){this.e.runtimeStyle.background="";delete this.pb}});f.Mc=f.u.R({ua:1,
Q:function(){return this.g.C.H()},i:function(){var a=this.g;return a.C.i()||a.q.i()},V:function(){var a=this.g.C.j(),b,c,d=0,e,g;if(a){b=[];if(c=a.M)for(;e=c[d++];)if(e.P==="linear-gradient"){g=this.vd(e.Wa);g=(e.Xa||f.Ka.Kc).a(this.e,g.h,g.f,g.h,g.f);b.push("url(data:image/svg+xml,"+escape(this.xd(e,g.h,g.f))+") "+this.dd(e.$)+" / "+g.h+"px "+g.f+"px "+(e.bc||"")+" "+(e.Wa||"")+" "+(e.ub||""))}else b.push(e.Hb);a.color&&b.push(a.color.Y);this.parent.gb(this.ua,b.join(","))}},dd:function(a){return a?
a.X.map(function(b){return b.d}).join(" "):"0 0"},vd:function(a){var b=this.e,c=this.s.o(),d=c.h;c=c.f;var e;if(a!=="border-box")if((e=this.g.w.j())&&(e=e.J)){d-=e.l.a(b)+e.l.a(b);c-=e.t.a(b)+e.b.a(b)}if(a==="content-box"){a=f.n;e=b.currentStyle;d-=a(e.paddingLeft).a(b)+a(e.paddingRight).a(b);c-=a(e.paddingTop).a(b)+a(e.paddingBottom).a(b)}return{h:d,f:c}},xd:function(a,b,c){var d=this.e,e=a.ca,g=e.length,j=f.Na.gc(d,b,c,a);a=j.xc;var i=j.yc,h=j.td,k=j.ud;j=j.rc;var n,m,p,r,t;n=[];for(m=0;m<g;m++)n.push(e[m].db?
e[m].db.a(d,j):m===0?0:m===g-1?j:null);for(m=1;m<g;m++)if(n[m]===null){r=n[m-1];p=m;do t=n[++p];while(t===null);n[m]=r+(t-r)/(p-m+1)}b=['<svg width="'+b+'" height="'+c+'" xmlns="http://www.w3.org/2000/svg"><defs><linearGradient id="g" gradientUnits="userSpaceOnUse" x1="'+a/b*100+'%" y1="'+i/c*100+'%" x2="'+h/b*100+'%" y2="'+k/c*100+'%">'];for(m=0;m<g;m++)b.push('<stop offset="'+n[m]/j+'" stop-color="'+e[m].color.U(d)+'" stop-opacity="'+e[m].color.fa()+'"/>');b.push('</linearGradient></defs><rect width="100%" height="100%" fill="url(#g)"/></svg>');
return b.join("")},m:function(){this.parent.gb(this.ua)}});f.Nc=f.u.R({T:"repeat",Sc:"stretch",Qc:"round",ua:0,Q:function(){return this.g.q.H()},i:function(){return this.g.q.i()},V:function(){var a=this,b=a.g.q.j(),c=a.g.w.j(),d=a.s.o(),e=b.repeat,g=e.f,j=e.Ob,i=a.e,h=0;f.p.Rb(b.src,function(k){function n(Q,R,U,V,W,Y,X,S,w,A){K.push('<pattern patternUnits="userSpaceOnUse" id="pattern'+G+'" x="'+(g===l?Q+U/2-w/2:Q)+'" y="'+(j===l?R+V/2-A/2:R)+'" width="'+w+'" height="'+A+'"><svg width="'+w+'" height="'+
A+'" viewBox="'+W+" "+Y+" "+X+" "+S+'" preserveAspectRatio="none"><image xlink:href="'+v+'" x="0" y="0" width="'+r+'" height="'+t+'" /></svg></pattern>');J.push('<rect x="'+Q+'" y="'+R+'" width="'+U+'" height="'+V+'" fill="url(#pattern'+G+')" />');G++}var m=d.h,p=d.f,r=k.h,t=k.f,v=a.Dd(b.src,r,t),l=a.T,q=a.Sc;k=a.Qc;var s=Math.ceil,o=f.n("0"),u=b.J||(c?c.J:{t:o,r:o,b:o,l:o});o=u.t.a(i);var x=u.r.a(i),y=u.b.a(i);u=u.l.a(i);var z=b.slice,B=z.t.a(i),E=z.r.a(i),D=z.b.a(i);z=z.l.a(i);var C=m-u-x,F=p-o-
y,O=r-z-E,H=t-B-D,M=g===q?C:O*o/B,P=j===q?F:H*x/E,I=g===q?C:O*y/D;q=j===q?F:H*u/z;var K=[],J=[],G=0;if(g===k){M-=(M-(C%M||M))/s(C/M);I-=(I-(C%I||I))/s(C/I)}if(j===k){P-=(P-(F%P||P))/s(F/P);q-=(q-(F%q||q))/s(F/q)}k=['<svg width="'+m+'" height="'+p+'" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">'];n(0,0,u,o,0,0,z,B,u,o);n(u,0,C,o,z,0,O,B,M,o);n(m-x,0,x,o,r-E,0,E,B,x,o);n(0,o,u,F,0,B,z,H,u,q);if(b.fill)n(u,o,C,F,z,B,O,H,M||I||O,q||P||H);n(m-x,o,x,F,r-E,B,E,H,x,P);n(0,
p-y,u,y,0,t-D,z,D,u,y);n(u,p-y,C,y,z,t-D,O,D,I,y);n(m-x,p-y,x,y,r-E,t-D,E,D,x,y);k.push("<defs>"+K.join("\n")+"</defs>"+J.join("\n")+"</svg>");a.parent.gb(a.ua,"url(data:image/svg+xml,"+escape(k.join(""))+") no-repeat border-box border-box");h&&a.parent.ab()},a);h=1},Dd:function(){var a={};return function(b,c,d){var e=a[b],g;if(!e){e=new Image;g=doc.createElement("canvas");e.src=b;g.width=c;g.height=d;g.getContext("2d").drawImage(e,0,0);e=a[b]=g.toDataURL()}return e}}(),Ea:f.Tb.prototype.Ea,m:function(){var a=
this.e.runtimeStyle;this.parent.gb(this.ua);a.borderColor=a.borderStyle=a.borderWidth=""}});f.kb=function(){function a(l,q){l.className+=" "+q}function b(l){var q=v.slice.call(arguments,1),s=q.length;setTimeout(function(){if(l)for(;s--;)a(l,q[s])},0)}function c(l){var q=v.slice.call(arguments,1),s=q.length;setTimeout(function(){if(l)for(;s--;){var o=q[s];o=t[o]||(t[o]=new RegExp("\\b"+o+"\\b","g"));l.className=l.className.replace(o,"")}},0)}function d(l){function q(){if(!U){var w,A,L=f.ja,T=l.currentStyle,
N=T.getAttribute(g)==="true",da=T.getAttribute(i)!=="false",ea=T.getAttribute(h)!=="false";S=T.getAttribute(j);S=L>7?S!=="false":S==="true";if(!R){R=1;l.runtimeStyle.zoom=1;T=l;for(var fa=1;T=T.previousSibling;)if(T.nodeType===1){fa=0;break}fa&&a(l,p)}J.cb();if(N&&(A=J.o())&&(w=doc.documentElement||doc.body)&&(A.y>w.clientHeight||A.x>w.clientWidth||A.y+A.f<0||A.x+A.h<0)){if(!Y){Y=1;f.mb.ba(q)}}else{U=1;Y=R=0;f.mb.Ha(q);if(L===9){G={C:new f.Sb(l),q:new f.Ub(l),w:new f.Vb(l)};Q=[G.C,G.q];K=new f.Oc(l,
J,G);w=[new f.Mc(l,J,G,K),new f.Nc(l,J,G,K)]}else{G={C:new f.Sb(l),w:new f.Vb(l),q:new f.Ub(l),G:new f.jb(l),ga:new f.Ic(l),Pb:new f.Uc(l)};Q=[G.C,G.w,G.q,G.G,G.ga,G.Pb];K=new f.Rc(l,J,G);w=[new f.Hc(l,J,G,K),new f.Fc(l,J,G,K),new f.Gc(l,J,G,K),new f.Tb(l,J,G,K)];l.tagName==="IMG"&&w.push(new f.Pc(l,J,G,K));K.ed=w}I=[K].concat(w);if(w=l.currentStyle.getAttribute(f.F+"watch-ancestors")){w=parseInt(w,10);A=0;for(N=l.parentNode;N&&(w==="NaN"||A++<w);){H(N,"onpropertychange",C);H(N,"onmouseenter",x);
H(N,"onmouseleave",y);H(N,"onmousedown",z);if(N.tagName in f.fc){H(N,"onfocus",E);H(N,"onblur",D)}N=N.parentNode}}if(S){f.Oa.ba(o);f.Oa.Rd()}o(1)}if(!V){V=1;L<9&&H(l,"onmove",s);H(l,"onresize",s);H(l,"onpropertychange",u);ea&&H(l,"onmouseenter",x);if(ea||da)H(l,"onmouseleave",y);da&&H(l,"onmousedown",z);if(l.tagName in f.fc){H(l,"onfocus",E);H(l,"onblur",D)}f.Qa.ba(s);f.L.ba(M)}J.hb()}}function s(){J&&J.Ad()&&o()}function o(w){if(!X)if(U){var A,L=I.length;F();for(A=0;A<L;A++)I[A].Ea();if(w||J.Od())for(A=
0;A<L;A++)I[A].ib();if(w||J.Td())for(A=0;A<L;A++)I[A].Mb();K.ab();O()}else R||q()}function u(){var w,A=I.length,L;w=event;if(!X&&!(w&&w.propertyName in r))if(U){F();for(w=0;w<A;w++)I[w].Ea();for(w=0;w<A;w++){L=I[w];L.Cb||L.ib();L.Q()&&L.Lb()}K.ab();O()}else R||q()}function x(){b(l,k)}function y(){c(l,k,n)}function z(){b(l,n);f.lb.ba(B)}function B(){c(l,n);f.lb.Ha(B)}function E(){b(l,m)}function D(){c(l,m)}function C(){var w=event.propertyName;if(w==="className"||w==="id")u()}function F(){J.cb();for(var w=
Q.length;w--;)Q[w].cb()}function O(){for(var w=Q.length;w--;)Q[w].hb();J.hb()}function H(w,A,L){w.attachEvent(A,L);W.push([w,A,L])}function M(){if(V){for(var w=W.length,A;w--;){A=W[w];A[0].detachEvent(A[1],A[2])}f.L.Ha(M);V=0;W=[]}}function P(){if(!X){var w,A;M();X=1;if(I){w=0;for(A=I.length;w<A;w++){I[w].ec=1;I[w].m()}}S&&f.Oa.Ha(o);f.Qa.Ha(o);I=J=G=Q=l=null}}var I,K,J=new ha(l),G,Q,R,U,V,W=[],Y,X,S;this.Ed=q;this.update=o;this.m=P;this.qd=l}var e={},g=f.F+"lazy-init",j=f.F+"poll",i=f.F+"track-active",
h=f.F+"track-hover",k=f.La+"hover",n=f.La+"active",m=f.La+"focus",p=f.La+"first-child",r={background:1,bgColor:1,display:1},t={},v=[];d.yd=function(l){var q=f.p.Ba(l);return e[q]||(e[q]=new d(l))};d.m=function(l){l=f.p.Ba(l);var q=e[l];if(q){q.m();delete e[l]}};d.md=function(){var l=[],q;if(e){for(var s in e)if(e.hasOwnProperty(s)){q=e[s];l.push(q.qd);q.m()}e={}}return l};return d}();f.supportsVML=f.zc;f.attach=function(a){f.ja<10&&f.zc&&f.kb.yd(a).Ed()};f.detach=function(a){f.kb.m(a)}};
var $=element;function init(){if(doc.media!=="print"){var a=window.PIE;a&&a.attach($)}}function cleanup(){if(doc.media!=="print"){var a=window.PIE;if(a){a.detach($);$=0}}}$.readyState==="complete"&&init();
</script>
</PUBLIC:COMPONENT>
@@ -0,0 +1,44 @@
---
layout: default
sectionid: blog
---

<div class="row-fluid">
<div class="span4 recent">
<h3>Recent posts</h3>
<ul class="unstyled">
{% for post in site.posts limit: 5 %}
<li{% if page.title == post.title %} class="active"{% endif %}><a href="{{ post.url }}">{{ post.title }}</a></li>
{% endfor %}
</ul>
</div>

<div class="span8 simple-page">
<div class="text-item blog inner">
<h2 class="date">
<span>{{ page.title }}</span>
<span>{{ page.date | date: "%B %e, %Y" }} · {{ page.author | upcase }}</span>
</h2>

{% if page.image %}<img src="{{ page.image }}" alt="{{ page.title }}" class="text-img" />{% endif %}

{{ content }}

<div id="disqus_thread"></div>
<script type="text/javascript">
/* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
var disqus_shortname = 'druidio'; // required: replace example with your forum shortname

/* * * DON'T EDIT BELOW THIS LINE * * */
(function() {
var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</script>
<noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
<a href="http://disqus.com" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a>

</div>
</div>
</div>
@@ -0,0 +1,60 @@
.highlight { background: #ffffff; }
.highlight .c { color: #999988; font-style: italic } /* Comment */
.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
.highlight .k { font-weight: bold } /* Keyword */
.highlight .o { font-weight: bold } /* Operator */
.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */
.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */
.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */
.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
.highlight .ge { font-style: italic } /* Generic.Emph */
.highlight .gr { color: #aa0000 } /* Generic.Error */
.highlight .gh { color: #999999 } /* Generic.Heading */
.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
.highlight .go { color: #888888 } /* Generic.Output */
.highlight .gp { color: #555555 } /* Generic.Prompt */
.highlight .gs { font-weight: bold } /* Generic.Strong */
.highlight .gu { color: #aaaaaa } /* Generic.Subheading */
.highlight .gt { color: #aa0000 } /* Generic.Traceback */
.highlight .kc { font-weight: bold } /* Keyword.Constant */
.highlight .kd { font-weight: bold } /* Keyword.Declaration */
.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
.highlight .kr { font-weight: bold } /* Keyword.Reserved */
.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
.highlight .m { color: #009999 } /* Literal.Number */
.highlight .s { color: #d14 } /* Literal.String */
.highlight .na { color: #008080 } /* Name.Attribute */
.highlight .nb { color: #0086B3 } /* Name.Builtin */
.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
.highlight .no { color: #008080 } /* Name.Constant */
.highlight .ni { color: #800080 } /* Name.Entity */
.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */
.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */
.highlight .nn { color: #555555 } /* Name.Namespace */
.highlight .nt { color: #000080 } /* Name.Tag */
.highlight .nv { color: #008080 } /* Name.Variable */
.highlight .ow { font-weight: bold } /* Operator.Word */
.highlight .w { color: #bbbbbb } /* Text.Whitespace */
.highlight .mf { color: #009999 } /* Literal.Number.Float */
.highlight .mh { color: #009999 } /* Literal.Number.Hex */
.highlight .mi { color: #009999 } /* Literal.Number.Integer */
.highlight .mo { color: #009999 } /* Literal.Number.Oct */
.highlight .sb { color: #d14 } /* Literal.String.Backtick */
.highlight .sc { color: #d14 } /* Literal.String.Char */
.highlight .sd { color: #d14 } /* Literal.String.Doc */
.highlight .s2 { color: #d14 } /* Literal.String.Double */
.highlight .se { color: #d14 } /* Literal.String.Escape */
.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
.highlight .si { color: #d14 } /* Literal.String.Interpol */
.highlight .sx { color: #d14 } /* Literal.String.Other */
.highlight .sr { color: #009926 } /* Literal.String.Regex */
.highlight .s1 { color: #d14 } /* Literal.String.Single */
.highlight .ss { color: #990073 } /* Literal.String.Symbol */
.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
.highlight .vc { color: #008080 } /* Name.Variable.Class */
.highlight .vg { color: #008080 } /* Name.Variable.Global */
.highlight .vi { color: #008080 } /* Name.Variable.Instance */
.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */
@@ -0,0 +1,13 @@
---
layout: default
title: Your New Jekyll Site
---

<div id="home">
<h1>Blog Posts</h1>
<ul class="posts">
{% for post in site.posts %}
<li><span>{{ post.date | date_to_string }}</span> » <a href="{{ post.url }}">{{ post.title }}</a></li>
{% endfor %}
</ul>
</div>