diff --git a/docs/Aggregations.md b/docs/Aggregations.md
index ffdbd18a4f2..886dac7a572 100644
--- a/docs/Aggregations.md
+++ b/docs/Aggregations.md
@@ -1,87 +1,3 @@
-Aggregations are specifications of processing over metrics available in Druid.
-Available aggregations are:
-
-### Sum aggregators
-
-#### `longSum` aggregator
-
-Computes the sum of values as a 64-bit signed integer.
-
- {
- "type" : "longSum",
- "name" : ,
- "fieldName" :
- }
-
-`name` – output name for the summed value
-`fieldName` – name of the metric column to sum over
-
-#### `doubleSum` aggregator
-
-Computes the sum of values as a 64-bit floating-point value. Similar to `longSum`.
-
- {
- "type" : "doubleSum",
- "name" : ,
- "fieldName" :
- }
-
-### Count aggregator
-
-`count` computes the number of rows that match the filters
-
- {
- "type" : "count",
- "name" : ,
- }
-
-### Min / Max aggregators
-
-#### `min` aggregator
-
-`min` computes the minimum metric value
-
- {
- "type" : "min",
- "name" : ,
- "fieldName" :
- }
-
-#### `max` aggregator
-
-`max` computes the maximum metric value
-
- {
- "type" : "max",
- "name" : ,
- "fieldName" :
- }
-
-### JavaScript aggregator
-
-Computes an arbitrary JavaScript function over a set of columns (both metrics and dimensions).
-
-All JavaScript functions must return numerical values.
-
- {
- "type": "javascript",
- "name": "",
- "fieldNames" : [ , , ... ],
- "fnAggregate" : "function(current, column1, column2, ...) {
-
- return
- }"
- "fnCombine" : "function(partialA, partialB) { return ; }"
- "fnReset" : "function() { return ; }"
- }
-
-**Example**
-
- {
- "type": "javascript",
- "name": "sum(log(x)/y) + 10",
- "fieldNames": ["x", "y"],
- "fnAggregate" : "function(current, a, b) { return current + (Math.log(a) * b); }"
- "fnCombine" : "function(partialA, partialB) { return partialA + partialB; }"
- "fnReset" : "function() { return 10; }"
- }
+---
+layout: default
+---
diff --git a/docs/Batch-ingestion.md b/docs/Batch-ingestion.md
index 97212777bc4..f91f0dbb081 100644
--- a/docs/Batch-ingestion.md
+++ b/docs/Batch-ingestion.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Batch Data Ingestion
====================
diff --git a/docs/Booting-a-production-cluster.md b/docs/Booting-a-production-cluster.md
index 32181fce2c1..c25ef25c607 100644
--- a/docs/Booting-a-production-cluster.md
+++ b/docs/Booting-a-production-cluster.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
# Booting a Single Node Cluster #
[[Loading Your Data]] and [[Querying Your Data]] contain recipes to boot a small Druid cluster on localhost. Here we will boot a small cluster on EC2. You can check out the code, or download a tarball from [here](http://static.druid.io/artifacts/druid-services-0.5.51-SNAPSHOT-bin.tar.gz).
diff --git a/docs/Broker.md b/docs/Broker.md
index 6d8f3db2ca0..e71100e9915 100644
--- a/docs/Broker.md
+++ b/docs/Broker.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Broker
======
diff --git a/docs/Build-from-source.md b/docs/Build-from-source.md
index aaa5411368d..3f323259b80 100644
--- a/docs/Build-from-source.md
+++ b/docs/Build-from-source.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
### Clone and Build from Source
The other way to set up Druid is from source via git. To do so, run these commands:
diff --git a/docs/Cluster-setup.md b/docs/Cluster-setup.md
index 23cb806d4fc..29837e94296 100644
--- a/docs/Cluster-setup.md
+++ b/docs/Cluster-setup.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
A Druid cluster consists of various node types that need to be set up depending on your use case. See our [[Design]] docs for a description of the different node types.
Setup Scripts
diff --git a/docs/Compute.md b/docs/Compute.md
index 755f2475707..8df11f8ca2f 100644
--- a/docs/Compute.md
+++ b/docs/Compute.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Compute
=======
diff --git a/docs/Concepts-and-Terminology.md b/docs/Concepts-and-Terminology.md
index a9accabf88a..1e7f535388d 100644
--- a/docs/Concepts-and-Terminology.md
+++ b/docs/Concepts-and-Terminology.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Concepts and Terminology
========================
diff --git a/docs/Configuration.md b/docs/Configuration.md
index c3150d44805..353b8be77b3 100644
--- a/docs/Configuration.md
+++ b/docs/Configuration.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
This describes the basic server configuration that is loaded by all the server processes; the same file is loaded by each. See also the json “specFile” descriptions in [[Realtime]] and [[Batch-ingestion]].
JVM Configuration Best Practices
diff --git a/docs/Contribute.md b/docs/Contribute.md
index 8a5bcc75f99..58d53a6d224 100644
--- a/docs/Contribute.md
+++ b/docs/Contribute.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
If you are interested in contributing to the code, we accept [pull requests](https://help.github.com/articles/using-pull-requests). Note: we have only just completed decoupling our Metamarkets-specific code from the code base and we took some short-cuts in interface design to make it happen. So, there are a number of interfaces that exist right now which are likely to be in flux. If you are embedding Druid in your system, it will be safest for the time being to only extend/implement interfaces that this wiki describes, as those are intended as stable (unless otherwise mentioned).
For issue tracking, we are using the github issue tracker. Please fill out an issue from the Issues tab on the github screen.
diff --git a/docs/Deep-Storage.md b/docs/Deep-Storage.md
index f30aa50333e..bd9a0ec8a66 100644
--- a/docs/Deep-Storage.md
+++ b/docs/Deep-Storage.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Deep storage is where segments are stored. It is a storage mechanism that Druid does not provide. This deep storage infrastructure defines the level of durability of your data; as long as Druid nodes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose. If segments disappear from this storage layer, then you will lose whatever data those segments represented.
The currently supported types of deep storage follow.
diff --git a/docs/Design.md b/docs/Design.md
index 2d67f1e3139..888d0b871b3 100644
--- a/docs/Design.md
+++ b/docs/Design.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
For a comprehensive look at the architecture of Druid, read the [White Paper](http://static.druid.io/docs/druid.pdf).
What is Druid?
diff --git a/docs/Download.md b/docs/Download.md
index 1bdbe799c50..00de8597f11 100644
--- a/docs/Download.md
+++ b/docs/Download.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
A version may be declared as a release candidate if it has been deployed to a sizable production cluster. Release candidates are declared as stable after we feel fairly confident there are no major bugs in the version. Check out the [[Versioning]] section for how we describe software versions.
Release Candidate
diff --git a/docs/Druid-Personal-Demo-Cluster.md b/docs/Druid-Personal-Demo-Cluster.md
index 81a088226f5..ab49d828dbc 100644
--- a/docs/Druid-Personal-Demo-Cluster.md
+++ b/docs/Druid-Personal-Demo-Cluster.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
# Druid Personal Demo Cluster (DPDC)
Note: there are currently some issues with the CloudFormation. We are working through them and will update the documentation here when things work properly. In the meantime, the simplest way to get your feet wet with a cluster setup is to run through the instructions at [housejester/druid-test-harness](https://github.com/housejester/druid-test-harness), though it is based on an older version. If you just want to get a feel for the types of data and queries that you can issue, check out [[Realtime Examples]].
diff --git a/docs/Druid-vs-Cassandra.md b/docs/Druid-vs-Cassandra.md
index 4cac3922324..e191dde2af7 100644
--- a/docs/Druid-vs-Cassandra.md
+++ b/docs/Druid-vs-Cassandra.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
We are not experts on Cassandra; if anything is incorrect about our portrayal, please let us know on the mailing list or via some other means. We will fix this page.
Druid is highly optimized for scans and aggregations; it supports arbitrarily deep drill-downs into data sets without the need to pre-compute, and it can ingest event streams in real-time and allow users to query events as they come in. Cassandra is a great key-value store and it has some features that allow you to use it to do more interesting things than what you can do with a pure key-value store. But, it is not built for the same use cases that Druid handles, namely regularly scanning over billions of entries per query.
diff --git a/docs/Druid-vs-Hadoop.md b/docs/Druid-vs-Hadoop.md
index 68744179b1e..37559b1da8f 100644
--- a/docs/Druid-vs-Hadoop.md
+++ b/docs/Druid-vs-Hadoop.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Druid is a complementary addition to Hadoop. Hadoop is great at storing and making accessible large amounts of individually low-value data. Unfortunately, Hadoop is not great at providing query speed guarantees on top of that data, nor does it have very good operational characteristics for a customer-facing production system. Druid, on the other hand, excels at taking high-value summaries of the low-value data on Hadoop, making it available in a fast and always-on fashion, such that it could be exposed directly to a customer.
Druid also requires some infrastructure to exist for “deep storage”. HDFS is one of the implemented options for this “deep storage”.
diff --git a/docs/Druid-vs-Impala-or-Shark.md b/docs/Druid-vs-Impala-or-Shark.md
index e9a0c673b87..3174fbbea5f 100644
--- a/docs/Druid-vs-Impala-or-Shark.md
+++ b/docs/Druid-vs-Impala-or-Shark.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
The question of Druid versus Impala or Shark basically comes down to your product requirements and what the systems were designed to do.
Druid was designed to
diff --git a/docs/Druid-vs-redshift.md b/docs/Druid-vs-redshift.md
index 2b360a4668b..8469209b10b 100644
--- a/docs/Druid-vs-redshift.md
+++ b/docs/Druid-vs-redshift.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
### How does Druid compare to Redshift?
In terms of drawing a differentiation, Redshift is essentially ParAccel (Actian), which Amazon is licensing.
diff --git a/docs/Druid-vs-vertica.md b/docs/Druid-vs-vertica.md
index b35f62e9f03..b20976b74a6 100644
--- a/docs/Druid-vs-vertica.md
+++ b/docs/Druid-vs-vertica.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
How does Druid compare to Vertica?
Vertica is similar to ParAccel/Redshift ([[Druid-vs-Redshift]]) described above in that it wasn’t built for real-time streaming data ingestion and it supports full SQL.
diff --git a/docs/Examples.md b/docs/Examples.md
index 88ca41fb4fa..9ab10466e56 100644
--- a/docs/Examples.md
+++ b/docs/Examples.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Examples
========
diff --git a/docs/Filters.md b/docs/Filters.md
index f655861d5fb..41ae91f93e2 100644
--- a/docs/Filters.md
+++ b/docs/Filters.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
A filter is a JSON object indicating which rows of data should be included in the computation for a query. It’s essentially the equivalent of the WHERE clause in SQL. Druid supports the following types of filters.
### Selector filter
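This filter matches a given dimension against a single value, roughly the equivalent of `WHERE dimension = 'value'` in SQL. A minimal sketch (the dimension and value shown are placeholders, not from a real data set):

    {
        "type" : "selector",
        "dimension" : "page",
        "value" : "some_page"
    }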
diff --git a/docs/Firehose.md b/docs/Firehose.md
index ab9b2ac53d2..c571f035a10 100644
--- a/docs/Firehose.md
+++ b/docs/Firehose.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Firehoses describe the data stream source. They are pluggable and thus the configuration schema can and will vary based on the `type` of the firehose.
|Field|Type|Description|Required|
diff --git a/docs/Granularities.md b/docs/Granularities.md
index ea568dd7d62..cf5283841c0 100644
--- a/docs/Granularities.md
+++ b/docs/Granularities.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
The granularity field determines how data gets bucketed across the time dimension, i.e. how it gets aggregated by hour, day, minute, etc.
It can be specified either as a string for simple granularities or as an object for arbitrary granularities.
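For example, the string form might be `"hour"`, while the object form spells out an exact duration in milliseconds. A rough sketch of both (the millisecond value below is one hour, chosen for illustration):

    "granularity" : "hour"

    "granularity" : { "type" : "duration", "duration" : 3600000 }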
diff --git a/docs/GroupByQuery.md b/docs/GroupByQuery.md
index 735dd5c393a..656ff1a41a1 100644
--- a/docs/GroupByQuery.md
+++ b/docs/GroupByQuery.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
These types of queries take a groupBy query object and return an array of JSON objects where each object represents a grouping asked for by the query.
An example groupBy query object is shown below:
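In outline, a groupBy query names the data source, a granularity, the dimensions to group on, the aggregations to compute, and the intervals to cover. A minimal sketch (the data source, dimension, and metric names are placeholders):

    {
        "queryType" : "groupBy",
        "dataSource" : "sample_datasource",
        "granularity" : "day",
        "dimensions" : ["dim1"],
        "aggregations" : [
            { "type" : "longSum", "name" : "total", "fieldName" : "some_metric" }
        ],
        "intervals" : ["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000"]
    }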
diff --git a/docs/Having.md b/docs/Having.md
index 47226f1b88e..62ab4644451 100644
--- a/docs/Having.md
+++ b/docs/Having.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
A having clause is a JSON object identifying which rows from a groupBy query should be returned, by specifying conditions on aggregated values.
It is essentially the equivalent of the HAVING clause in SQL.
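For example, a numeric having spec keeps only rows whose aggregated value exceeds a threshold. A minimal sketch (the aggregation name `total` is assumed to be defined in the query's aggregations list):

    {
        "type" : "greaterThan",
        "aggregation" : "total",
        "value" : 100
    }

Having specs can also be composed with `and`, `or`, and `not` types.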
diff --git a/docs/Home.md b/docs/Home.md
index 88e1c86b8aa..934f11b8c92 100644
--- a/docs/Home.md
+++ b/docs/Home.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Druid is an open-source analytics data store designed for realtime, exploratory queries on large-scale data sets (hundreds of billions of entries, hundreds of terabytes of data). Druid provides cost-effective, always-on, realtime data ingestion and arbitrary data exploration.
- Check out some [[Examples]]
diff --git a/docs/Indexing-Service.md b/docs/Indexing-Service.md
index 0e4ff939f4a..60abbd73b9f 100644
--- a/docs/Indexing-Service.md
+++ b/docs/Indexing-Service.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Disclaimer: We are still in the process of finalizing the indexing service and these configs are prone to change at any time. We will announce when we feel the indexing service and the configurations described are stable.
The indexing service is a distributed task/job queue. It accepts requests in the form of [[Tasks]] and executes those tasks across a set of worker nodes. Worker capacity can be automatically adjusted based on the number of tasks pending in the system. The indexing service is highly available, has built-in retry logic, and can back up per-task logs in deep storage.
diff --git a/docs/Libraries.md b/docs/Libraries.md
index 41374e310c1..75bc17c633c 100644
--- a/docs/Libraries.md
+++ b/docs/Libraries.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
### R
- [RDruid](https://github.com/metamx/RDruid) - Druid connector for R
diff --git a/docs/Loading-Your-Data.md b/docs/Loading-Your-Data.md
index 568a20767ac..dd4b0f8a7fb 100644
--- a/docs/Loading-Your-Data.md
+++ b/docs/Loading-Your-Data.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Once you have a realtime node working, it is time to load your own data to see how Druid performs.
Druid can ingest data in three ways: via Kafka and a realtime node, via the indexing service, and via the Hadoop batch loader. Data is ingested in realtime using a [[Firehose]].
diff --git a/docs/Master.md b/docs/Master.md
index 891f6b854ef..f7345524980 100644
--- a/docs/Master.md
+++ b/docs/Master.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Master
======
@@ -12,7 +15,7 @@ Rules
Segments are loaded and dropped from the cluster based on a set of rules. Rules indicate how segments should be assigned to different compute node tiers and how many replicants of a segment should exist in each tier. Rules may also indicate when segments should be dropped entirely from the cluster. The master loads a set of rules from the database. Rules may be specific to a certain datasource, and/or a default set of rules can be configured. Rules are read in order and hence the ordering of rules is important. The master will cycle through all available segments and match each segment with the first rule that applies. Each segment may only match a single rule.
-For more information on rules, see [[Rule Configuration]].
+For more information on rules, see [[Rule Configuration.md]].
Cleaning Up Segments
--------------------
diff --git a/docs/MySQL.md b/docs/MySQL.md
index 79cf6ed6d8b..f7ee2ec4db1 100644
--- a/docs/MySQL.md
+++ b/docs/MySQL.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
MySQL is an external dependency of Druid. We use it to store various metadata about the system, but not to store the actual data. There are a number of tables used for various purposes described below.
Segments Table
diff --git a/docs/OrderBy.md b/docs/OrderBy.md
index 993df6f4674..9dcffff7886 100644
--- a/docs/OrderBy.md
+++ b/docs/OrderBy.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
The orderBy field provides the functionality to sort and limit the set of results from a groupBy query. Available options are:
### DefaultLimitSpec
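For example, to keep only the top rows ordered by a particular column, a default limit spec combines a row limit with the columns to order on. A minimal sketch (the column name is a placeholder):

    {
        "type" : "default",
        "limit" : 5,
        "columns" : ["value"]
    }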
diff --git a/docs/Plumber.md b/docs/Plumber.md
index cf650fb6cdd..b2123e94393 100644
--- a/docs/Plumber.md
+++ b/docs/Plumber.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
The Plumber handles generated segments both while they are being generated and when they are “done”. This is technically a pluggable interface with multiple implementations, but the plumber handles enough details that only a few implementations are expected, and only more advanced third parties are likely to implement their own. See [here](https://github.com/metamx/druid/wiki/Plumber#available-plumbers) for a description of the plumbers included with Druid.
|Field|Type|Description|Required|
diff --git a/docs/Post-aggregations.md b/docs/Post-aggregations.md
index 8ff7a91ecb5..4aa6c7f8db7 100644
--- a/docs/Post-aggregations.md
+++ b/docs/Post-aggregations.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Post-aggregations are specifications of processing that should happen on aggregated values as they come out of Druid. If you include a post aggregation as part of a query, make sure to include all aggregators the post-aggregator requires.
There are several post-aggregators available.
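One common shape is an arithmetic post-aggregator that combines two aggregated values, e.g. dividing a sum by a row count to get an average. A minimal sketch (the aggregator names `total` and `rows` are assumed to be defined in the query's aggregations list):

    {
        "type" : "arithmetic",
        "name" : "average",
        "fn" : "/",
        "fields" : [
            { "type" : "fieldAccess", "fieldName" : "total" },
            { "type" : "fieldAccess", "fieldName" : "rows" }
        ]
    }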
diff --git a/docs/Querying-your-data.md b/docs/Querying-your-data.md
index 520edcaf613..39d22ab3a32 100644
--- a/docs/Querying-your-data.md
+++ b/docs/Querying-your-data.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
# Setup #
Before we start querying Druid, we're going to finish setting up a complete cluster on localhost. In [[Loading Your Data]] we set up a [[Realtime]], [[Compute]], and [[Master]] node. If you've already completed that tutorial, you need only follow the directions for 'Booting a Broker Node'.
diff --git a/docs/Querying.md b/docs/Querying.md
index 21ed93c7bb5..db845bc694f 100644
--- a/docs/Querying.md
+++ b/docs/Querying.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Querying
========
diff --git a/docs/Realtime.md b/docs/Realtime.md
index 1908a469f80..c92cc7f7175 100644
--- a/docs/Realtime.md
+++ b/docs/Realtime.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Realtime
========
diff --git a/docs/Rule-Configuration.md b/docs/Rule-Configuration.md
index 1d2b4c03461..2695da646ab 100644
--- a/docs/Rule-Configuration.md
+++ b/docs/Rule-Configuration.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Note: It is recommended that the master console be used to configure rules. However, the master node does have HTTP endpoints to programmatically configure rules.
Load Rules
diff --git a/docs/SearchQuery.md b/docs/SearchQuery.md
index af125889c32..7acf04419fa 100644
--- a/docs/SearchQuery.md
+++ b/docs/SearchQuery.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
A search query returns dimension values that match the search specification.
{
diff --git a/docs/SearchQuerySpec.md b/docs/SearchQuerySpec.md
index 48036c65d56..9b9db04b8e6 100644
--- a/docs/SearchQuerySpec.md
+++ b/docs/SearchQuerySpec.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Search query specs define what constitutes a “match” between a search value and a dimension value. The available search query specs are:
InsensitiveContainsSearchQuerySpec
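This spec matches when the dimension value contains the given search value, ignoring case. A minimal sketch (the value shown is a placeholder):

    {
        "type" : "insensitive_contains",
        "value" : "some_value"
    }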
diff --git a/docs/SegmentMetadataQuery.md b/docs/SegmentMetadataQuery.md
index 606d0800447..0e6eefb78e1 100644
--- a/docs/SegmentMetadataQuery.md
+++ b/docs/SegmentMetadataQuery.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Segment metadata queries return per-segment information about:
* Cardinality of all columns in the segment
* Estimated byte size for the segment columns in TSV format
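A minimal query object might look like the following sketch (the data source and interval are placeholders):

    {
        "queryType" : "segmentMetadata",
        "dataSource" : "sample_datasource",
        "intervals" : ["2013-01-01/2014-01-01"]
    }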
diff --git a/docs/Segments.md b/docs/Segments.md
index 5bffdd30b10..7da12950d15 100644
--- a/docs/Segments.md
+++ b/docs/Segments.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Segments
========
diff --git a/docs/Spatial-Filters.md b/docs/Spatial-Filters.md
index c9ce15d5cc9..2ca83b9a3f9 100644
--- a/docs/Spatial-Filters.md
+++ b/docs/Spatial-Filters.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Note: This feature is highly experimental and only works with spatially indexed dimensions.
The grammar for a spatial filter is as follows:
diff --git a/docs/Spatial-Indexing.md b/docs/Spatial-Indexing.md
index 5f7dc2b174c..1df36593433 100644
--- a/docs/Spatial-Indexing.md
+++ b/docs/Spatial-Indexing.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Note: This feature is highly experimental.
In any of the data specs, there is now the option of providing spatial dimensions. For example, for a JSON data spec, spatial dimensions can be specified as follows:
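A rough sketch of the shape this takes (the surrounding data spec fields are abridged, and `coordinates`, `lat`, and `long` are illustrative names):

    {
        "format" : "json",
        "dimensions" : ["some_dim"],
        "spatialDimensions" : [
            { "dimName" : "coordinates", "dims" : ["lat", "long"] }
        ]
    }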
diff --git a/docs/Stand-Alone-With-Riak-CS.md b/docs/Stand-Alone-With-Riak-CS.md
index aaa77b3151c..505b59f9283 100644
--- a/docs/Stand-Alone-With-Riak-CS.md
+++ b/docs/Stand-Alone-With-Riak-CS.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
This page describes how to use Riak-CS for deep storage instead of S3. We are still setting up some of the peripheral stuff (file downloads, etc.).
This guide was provided by Pablo Nebrera. Thanks!
diff --git a/docs/Support.md b/docs/Support.md
index 1561e935381..3dd512e050f 100644
--- a/docs/Support.md
+++ b/docs/Support.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Numerous backend engineers at [Metamarkets](http://www.metamarkets.com) work on Druid full-time. If you have any questions about usage or code, feel free to contact any of us.
Google Groups Mailing List
diff --git a/docs/Tasks.md b/docs/Tasks.md
index 53f441696d9..95341f581ec 100644
--- a/docs/Tasks.md
+++ b/docs/Tasks.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Tasks are run on workers and always operate on a single datasource. Once an indexer coordinator node accepts a task, a lock is created for the datasource and interval specified in the task. Tasks do not need to explicitly release locks; they are released upon task completion. Tasks may potentially release locks early if they desire. Task ids are made unique by using UUIDs or the timestamp at which the task was created. Tasks are also part of a “task group”, which is a set of tasks that can share interval locks.
There are several different types of tasks.
diff --git a/docs/Thanks.md b/docs/Thanks.md
index f84708fb6c8..cb1c873cca0 100644
--- a/docs/Thanks.md
+++ b/docs/Thanks.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
YourKit supports the Druid open source projects with its
full-featured Java Profiler.
YourKit, LLC is the creator of innovative and intelligent tools for profiling
diff --git a/docs/TimeBoundaryQuery.md b/docs/TimeBoundaryQuery.md
index 432df69961d..bde4ca1c812 100644
--- a/docs/TimeBoundaryQuery.md
+++ b/docs/TimeBoundaryQuery.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Time boundary queries return the earliest and latest data points of a data set. The grammar is:
{
diff --git a/docs/TimeseriesQuery.md b/docs/TimeseriesQuery.md
index d189b176a01..56f2ce733b9 100644
--- a/docs/TimeseriesQuery.md
+++ b/docs/TimeseriesQuery.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Timeseries queries
==================
diff --git a/docs/Tutorial:-A-First-Look-at-Druid.md b/docs/Tutorial:-A-First-Look-at-Druid.md
index ef725135aa4..4722dd173c0 100644
--- a/docs/Tutorial:-A-First-Look-at-Druid.md
+++ b/docs/Tutorial:-A-First-Look-at-Druid.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Greetings! This tutorial will help clarify some core Druid concepts. We will use a realtime dataset and issue some basic Druid queries. If you are ready to explore Druid and learn a thing or two, read on!
About the data
diff --git a/docs/Tutorial:-The-Druid-Cluster.md b/docs/Tutorial:-The-Druid-Cluster.md
index b01824e52a2..e2eff84f505 100644
--- a/docs/Tutorial:-The-Druid-Cluster.md
+++ b/docs/Tutorial:-The-Druid-Cluster.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Welcome back! In our first [tutorial](https://github.com/metamx/druid/wiki/Tutorial%3A-A-First-Look-at-Druid), we introduced you to the most basic Druid setup: a single realtime node. We streamed in some data and queried it. Realtime nodes collect very recent data and periodically hand that data off to the rest of the Druid cluster. Some questions about the architecture must naturally come to mind. What does the rest of the Druid cluster look like? How does Druid load available static data?
This tutorial will hopefully answer these questions!
diff --git a/docs/Tutorial:-Webstream.md b/docs/Tutorial:-Webstream.md
index c8b0bcada8b..973204f31d4 100644
--- a/docs/Tutorial:-Webstream.md
+++ b/docs/Tutorial:-Webstream.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Greetings! This tutorial will help clarify some core Druid concepts. We will use a realtime dataset and issue some basic Druid queries. If you are ready to explore Druid and learn a thing or two, read on!
About the data
diff --git a/docs/Twitter-Tutorial.md b/docs/Twitter-Tutorial.md
index c113282e937..cedd26b9250 100644
--- a/docs/Twitter-Tutorial.md
+++ b/docs/Twitter-Tutorial.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Greetings! We see you’ve taken an interest in Druid. That’s awesome! Hopefully this tutorial will help clarify some core Druid concepts. We will go through one of the Real-time [[Examples]], and issue some basic Druid queries. The data source we’ll be working with is the [Twitter spritzer stream](https://dev.twitter.com/docs/streaming-apis/streams/public). If you are ready to explore Druid, brave its challenges, and maybe learn a thing or two, read on!
Setting Up
diff --git a/docs/Versioning.md b/docs/Versioning.md
index 33c665d6542..7b9fa24045c 100644
--- a/docs/Versioning.md
+++ b/docs/Versioning.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
This page discusses how we do versioning and provides information on our stable releases.
Versioning Strategy
diff --git a/docs/ZooKeeper.md b/docs/ZooKeeper.md
index 250d0280bcb..03f2b1b8e0c 100644
--- a/docs/ZooKeeper.md
+++ b/docs/ZooKeeper.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Druid uses ZooKeeper (ZK) for management of current cluster state. The operations that happen over ZK are
1. [[Master]] leader election
diff --git a/docs/_config.yml b/docs/_config.yml
index 362c8bf5f91..1ba74937d8b 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -1,2 +1,3 @@
name: Your New Jekyll Site
pygments: true
+markdown: redcarpet
diff --git a/docs/contents.md b/docs/contents.md
index 2298a7c8bdf..23b56bc33a5 100644
--- a/docs/contents.md
+++ b/docs/contents.md
@@ -1,3 +1,6 @@
+---
+layout: default
+---
Contents
* [[Introduction|Home]]
* [[Download]]