Merge branch 'master' into subquery

Commit 689191c5ad by Yuval Oren, 2014-02-03 14:52:50 -08:00
59 changed files with 228 additions and 9569 deletions

View File

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -1,4 +1,5 @@
---
layout: doc_page
---
+# About Experimental Features
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely to edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Aggregations
Aggregations are specifications of processing over metrics available in Druid.
Available aggregations are:

View File

@ -0,0 +1,87 @@
---
layout: doc_page
---
Data Formats for Ingestion
==========================
Druid can ingest data in JSON, CSV, or TSV. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest CSV or TSV data.
## Formatting the Data
The following are three samples of the data used in the [Wikipedia example](Tutorial:-Loading-Your-Data-Part-1.html).
_JSON_
```json
{"timestamp": "2013-08-31T01:02:33Z", "page": "Gypsy Danger", "language" : "en", "user" : "nuclear", "unpatrolled" : "true", "newPage" : "true", "robot": "false", "anonymous": "false", "namespace":"article", "continent":"North America", "country":"United States", "region":"Bay Area", "city":"San Francisco", "added": 57, "deleted": 200, "delta": -143}
{"timestamp": "2013-08-31T03:32:45Z", "page": "Striker Eureka", "language" : "en", "user" : "speed", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Australia", "country":"Australia", "region":"Cantebury", "city":"Syndey", "added": 459, "deleted": 129, "delta": 330}
{"timestamp": "2013-08-31T07:11:21Z", "page": "Cherno Alpha", "language" : "ru", "user" : "masterYi", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"article", "continent":"Asia", "country":"Russia", "region":"Oblast", "city":"Moscow", "added": 123, "deleted": 12, "delta": 111}
{"timestamp": "2013-08-31T11:58:39Z", "page": "Crimson Typhoon", "language" : "zh", "user" : "triplets", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"China", "region":"Shanxi", "city":"Taiyuan", "added": 905, "deleted": 5, "delta": 900}
{"timestamp": "2013-08-31T12:41:27Z", "page": "Coyote Tango", "language" : "ja", "user" : "cancer", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"Japan", "region":"Kanto", "city":"Tokyo", "added": 1, "deleted": 10, "delta": -9}
```
_CSV_
```
2013-08-31T01:02:33Z,"Gypsy Danger","en","nuclear","true","true","false","false","article","North America","United States","Bay Area","San Francisco",57,200,-143
2013-08-31T03:32:45Z,"Striker Eureka","en","speed","false","true","true","false","wikipedia","Australia","Australia","Cantebury","Syndey",459,129,330
2013-08-31T07:11:21Z,"Cherno Alpha","ru","masterYi","false","true","true","false","article","Asia","Russia","Oblast","Moscow",123,12,111
2013-08-31T11:58:39Z,"Crimson Typhoon","zh","triplets","true","false","true","false","wikipedia","Asia","China","Shanxi","Taiyuan",905,5,900
2013-08-31T12:41:27Z,"Coyote Tango","ja","cancer","true","false","true","false","wikipedia","Asia","Japan","Kanto","Tokyo",1,10,-9
```
_TSV_
```
2013-08-31T01:02:33Z "Gypsy Danger" "en" "nuclear" "true" "true" "false" "false" "article" "North America" "United States" "Bay Area" "San Francisco" 57 200 -143
2013-08-31T03:32:45Z "Striker Eureka" "en" "speed" "false" "true" "true" "false" "wikipedia" "Australia" "Australia" "Cantebury" "Syndey" 459 129 330
2013-08-31T07:11:21Z "Cherno Alpha" "ru" "masterYi" "false" "true" "true" "false" "article" "Asia" "Russia" "Oblast" "Moscow" 123 12 111
2013-08-31T11:58:39Z "Crimson Typhoon" "zh" "triplets" "true" "false" "true" "false" "wikipedia" "Asia" "China" "Shanxi" "Taiyuan" 905 5 900
2013-08-31T12:41:27Z "Coyote Tango" "ja" "cancer" "true" "false" "true" "false" "wikipedia" "Asia" "Japan" "Kanto" "Tokyo" 1 10 -9
```
Note that the CSV and TSV data do not contain column headers. This becomes important when you specify the data for ingestion.
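Because the delimited formats carry no header row, any consumer must supply the column names itself. The following is a quick way to sanity-check a row outside Druid (a sketch using Python's standard `csv` module, not part of Druid; the column list mirrors the samples above):

```python
import csv
import io

# Column names for the Wikipedia sample; the files themselves carry no header row.
COLUMNS = ["timestamp", "page", "language", "user", "unpatrolled", "newPage",
           "robot", "anonymous", "namespace", "continent", "country",
           "region", "city", "added", "deleted", "delta"]

sample = ('2013-08-31T01:02:33Z,"Gypsy Danger","en","nuclear","true","true",'
          '"false","false","article","North America","United States",'
          '"Bay Area","San Francisco",57,200,-143\n')

# DictReader maps each positional field to its supplied name.
reader = csv.DictReader(io.StringIO(sample), fieldnames=COLUMNS)
row = next(reader)
print(row["page"], row["added"])  # -> Gypsy Danger 57
```

Note that every parsed value, including the numeric metrics, comes back as a string; type coercion is up to the ingestion layer.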
## Configuring Ingestion For the Indexing Service
If you use the [indexing service](Indexing-Service.html) for ingesting the data, a [task](Tasks.html) must be configured and submitted. Tasks are configured with a JSON object which, among other things, specifies the data source and type. In the Wikipedia example, JSON data was read from a local file. The task spec contains a firehose element to specify this:
```json
"firehose" : {
"type" : "local",
"baseDir" : "examples/indexing",
"filter" : "wikipedia_data.json",
"parser" : {
"timestampSpec" : {
"column" : "timestamp"
},
"data" : {
"format" : "json",
"dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
}
}
}
```
Specified here are the location of the datafile, the timestamp column, the format of the data, and the columns that will become dimensions in Druid.
Since the CSV data does not contain the column names, they will have to be added before that data can be processed:
```json
"firehose" : {
"type" : "local",
"baseDir" : "examples/indexing/",
"filter" : "wikipedia_data.csv",
"parser" : {
"timestampSpec" : {
"column" : "timestamp"
},
"data" : {
"type" : "csv",
"columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
"dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
}
}
}
```
Note also that the filename extension and the data type were changed to "csv". For the TSV data, the same changes are made but with "tsv" for the filename extension and the data type.
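Applying those substitutions to the CSV spec above, a TSV version of the firehose block would look like this (a sketch; it simply mirrors the CSV example with the two values changed):

```json
"firehose" : {
    "type" : "local",
    "baseDir" : "examples/indexing/",
    "filter" : "wikipedia_data.tsv",
    "parser" : {
        "timestampSpec" : {
            "column" : "timestamp"
        },
        "data" : {
            "type" : "tsv",
            "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
            "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
        }
    }
}
```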

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Deep Storage
Deep storage is where segments are stored. It is a storage mechanism that Druid does not provide. This deep storage infrastructure defines the level of durability of your data; as long as Druid nodes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose. If segments disappear from this storage layer, then you will lose whatever data those segments represented.
The currently supported types of deep storage follow.

View File

@@ -1,6 +1,8 @@
---
layout: doc_page
---
+# Transforming Dimension Values
+The following JSON fields can be used in a query to operate on dimension values.
## DimensionSpec
@@ -8,7 +10,7 @@ layout: doc_page
### DefaultDimensionSpec
-Returns dimension values as is and optionally renames renames the dimension.
+Returns dimension values as is and optionally renames the dimension.
```json
{ "type" : "default", "dimension" : <dimension>, "outputName": <output_name> }
```

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Query Filters
A filter is a JSON object indicating which rows of data should be included in the computation for a query. It is essentially the equivalent of the WHERE clause in SQL. Druid supports the following types of filters.
### Selector filter
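The simplest case, a selector filter, matches a single dimension value exactly (a sketch of the grammar; the dimension and value here come from the Wikipedia sample data):

```json
{ "type": "selector", "dimension": "namespace", "value": "article" }
```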

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Geographic Queries
Druid supports filtering on spatially indexed columns based on an origin and a bound.
# Spatial Indexing

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Aggregation Granularity
The granularity field determines how data gets bucketed across the time dimension, i.e. how it gets aggregated by hour, day, minute, etc.
It can be specified either as a string for simple granularities or as an object for arbitrary granularities.
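As a sketch of the two forms: the string form is simply a name such as `"granularity": "day"`, while the object form spells out an arbitrary bucket size in milliseconds (field names follow Druid's duration granularity; 86400000 ms equals one day):

```json
{ "type": "duration", "duration": 86400000 }
```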

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# groupBy Queries
These types of queries take a groupBy query object and return an array of JSON objects where each object represents a grouping asked for by the query. Note: If you only want to do straight aggregates for some time range, we highly recommend using [TimeseriesQueries](TimeseriesQuery.html) instead. The performance will be substantially better.
An example groupBy query object is shown below:

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Filter groupBy Query Results
A having clause is a JSON object identifying which rows from a groupBy query should be returned, by specifying conditions on aggregated values.
It is essentially the equivalent of the HAVING clause in SQL.
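A minimal having spec might look like the following (a sketch; `greaterThan` is one of Druid's numeric having types, and `num_edits` is a hypothetical aggregator name from the query's aggregations). It keeps only groups whose aggregated value exceeds 100:

```json
{ "type": "greaterThan", "aggregation": "num_edits", "value": 100 }
```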

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Druid Indexing Service
The indexing service is a highly available, distributed service that runs indexing-related tasks. Indexing service [tasks](Tasks.html) create (and sometimes destroy) Druid [segments](Segments.html). The indexing service has a master/slave-like architecture.
The indexing service is composed of three main components: a peon component that can run a single task, a [Middle Manager](Middlemanager.html) component that manages peons, and an overlord component that manages task distribution to middle managers.

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# MySQL Database
MySQL is an external dependency of Druid. We use it to store various metadata about the system, but not to store the actual data. There are a number of tables used for various purposes described below.
Segments Table

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Sort groupBy Query Results
The orderBy field provides the functionality to sort and limit the set of results from a groupBy query. If you group by a single dimension and are ordering by a single metric, we highly recommend using [TopN Queries](TopNQuery.html) instead. The performance will be substantially better. Available options are:
### DefaultLimitSpec

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Post-Aggregations
Post-aggregations are specifications of processing that should happen on aggregated values as they come out of Druid. If you include a post aggregation as part of a query, make sure to include all aggregators the post-aggregator requires.
There are several post-aggregators available.
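As an illustration of the dependency rule above, an arithmetic post-aggregator that divides two aggregated values must appear in a query that also defines both aggregators (a sketch; `tot` and `rows` are hypothetical aggregator names):

```json
{
  "type": "arithmetic",
  "name": "average",
  "fn": "/",
  "fields": [
    { "type": "fieldAccess", "fieldName": "tot" },
    { "type": "fieldAccess", "fieldName": "rows" }
  ]
}
```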

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Configuring Rules for Coordinator Nodes
Note: It is recommended that the coordinator console is used to configure rules. However, the coordinator node does have HTTP endpoints to programmatically configure rules.
Load Rules

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Search Queries
A search query returns dimension values that match the search specification.
```json

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Refining Search Queries
Search query specs define how a "match" is defined between a search value and a dimension value. The available search query specs are:
InsensitiveContainsSearchQuerySpec

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Segment Metadata Queries
Segment metadata queries return per segment information about:
* Cardinality of all columns in the segment

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Tasks
Tasks are run on middle managers and always operate on a single data source.
There are several different types of tasks.
@@ -158,34 +159,15 @@ The indexing service can also run real-time tasks. These tasks effectively trans
}
```
-Id:
-The ID of the task. Not required.
-Resource:
-A JSON object used for high availability purposes. Not required.
|Field|Type|Description|Required|
|-----|----|-----------|--------|
+|id|String|The ID of the task.|No|
+|Resource|JSON object|Used for high availability purposes.|No|
|availabilityGroup|String|A uniqueness identifier for the task. Tasks with the same availability group will always run on different middle managers. Used mainly for replication.|yes|
|requiredCapacity|Integer|How much middle manager capacity this task will take.|yes|
-Schema:
-See [Schema](Realtime.html).
-Fire Department Config:
-See [Config](Realtime.html).
-Firehose:
-See [Firehose](Firehose.html).
-Window Period:
-See [Realtime](Realtime.html).
-Segment Granularity:
-See [Realtime](Realtime.html).
-Rejection Policy:
-See [Realtime](Realtime.html).
+For schema, fireDepartmentConfig, windowPeriod, segmentGranularity, and rejectionPolicy, see the [realtime-ingestion doc](Realtime-ingestion.html). For firehose configuration, see [Firehose](Firehose.html).
Segment Merging Tasks
---------------------

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Time Boundary Queries
Time boundary queries return the earliest and latest data points of a data set. The grammar is:
```json

View File

@@ -80,7 +80,7 @@ Let's start doing stuff. You can start a Druid [Realtime](Realtime.html) node by
Select "wikipedia".
-Once the node starts up you will see a bunch of logs about setting up properties and connecting to the data source. If everything was successful, you should see messages of the form shown below. Note that the first time you start the example, it may take some extra time due to its fetching various dependencies.
+Once the node starts up you will see a bunch of logs about setting up properties and connecting to the data source. If everything was successful, you should see messages of the form shown below.
```
2013-09-04 19:33:11,922 INFO [main] org.eclipse.jetty.server.AbstractConnector - Started SelectChannelConnector@0.0.0.0:8083
@@ -118,7 +118,7 @@ Select "wikipedia" once again. This script issues [GroupByQueries](GroupByQuery.
This is a **groupBy** query, which you may be familiar with from SQL. We are grouping, or aggregating, via the `dimensions` field: `["page"]`. We are **filtering** via the `namespace` dimension, to only look at edits on `articles`. Our **aggregations** are what we are calculating: a count of the number of data rows, and a count of the number of edits that have occurred.
-The result looks something like this:
+The result looks something like this (when it's prettified):
```json
[
@@ -323,13 +323,13 @@ Feel free to tweak other query parameters to answer other questions you may have
Next Steps
----------
-What to know even more information about the Druid Cluster? Check out [The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html)
+Want to know even more about the Druid Cluster? Check out [The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html).
Druid is even more fun if you load your own data into it! To learn how to load your data, see [Loading Your Data](Tutorial%3A-Loading-Your-Data-Part-1.html).
Additional Information
----------------------
-This tutorial is merely showcasing a small fraction of what Druid can do. If you are interested in more information about Druid, including setting up a more sophisticated Druid cluster, please read the other links in our wiki.
+This tutorial is merely showcasing a small fraction of what Druid can do. If you are interested in more information about Druid, including setting up a more sophisticated Druid cluster, read more of the Druid documentation and the blogs found on druid.io.
And thus concludes our journey! Hopefully you learned a thing or two about Druid real-time ingestion, querying Druid, and how Druid can be used to solve problems. If you have additional questions, feel free to post in our [google groups page](https://groups.google.com/forum/#!forum/druid-development).

View File

@@ -7,7 +7,7 @@ Welcome back! In our first [tutorial](Tutorial%3A-A-First-Look-at-Druid.html), w
This tutorial will hopefully answer these questions!
-In this tutorial, we will set up other types of Druid nodes as well as and external dependencies for a fully functional Druid cluster. The architecture of Druid is very much like the [Megazord](http://www.youtube.com/watch?v=7mQuHh1X4H4) from the popular 90s show Mighty Morphin' Power Rangers. Each Druid node has a specific purpose and the nodes come together to form a fully functional system.
+In this tutorial, we will set up other types of Druid nodes and external dependencies for a fully functional Druid cluster. The architecture of Druid is very much like the [Megazord](http://www.youtube.com/watch?v=7mQuHh1X4H4) from the popular 90s show Mighty Morphin' Power Rangers. Each Druid node has a specific purpose and the nodes come together to form a fully functional system.
## Downloading Druid
@@ -32,9 +32,9 @@ For deep storage, we have made a public S3 bucket (static.druid.io) available wh
#### Setting up MySQL
-1. If you don't already have it, download MySQL Community Server here: [http://dev.mysql.com/downloads/mysql/](http://dev.mysql.com/downloads/mysql/)
-2. Install MySQL
-3. Create a druid user and database
+1. If you don't already have it, download MySQL Community Server here: [http://dev.mysql.com/downloads/mysql/](http://dev.mysql.com/downloads/mysql/).
+2. Install MySQL.
+3. Create a druid user and database.
```bash
mysql -u root
@@ -88,7 +88,7 @@ Metrics (things to aggregate over):
## The Cluster
-Let's start up a few nodes and download our data. First things though, let's make sure we have config directory where we will store configs for our various nodes:
+Let's start up a few nodes and download our data. First, let's make sure we have configs in the config directory for our various nodes. Issue the following from the Druid home directory:
```
ls config

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# Versioning Druid
This page discusses how we do versioning and provides information on our stable releases.
Versioning Strategy

View File

@@ -1,6 +1,7 @@
---
layout: doc_page
---
+# ZooKeeper
Druid uses [ZooKeeper](http://zookeeper.apache.org/) (ZK) for management of current cluster state. The operations that happen over ZK are
1. [Coordinator](Coordinator.html) leader election

View File

@@ -54,3 +54,7 @@
.doc-content table code {
background-color: transparent;
}
+td, th {
+padding: 5px;
+}

View File

@@ -28,6 +28,7 @@ h2. Data Ingestion
* "Batch":./Batch-ingestion.html
* "Indexing Service":./Indexing-Service.html
** "Tasks":./Tasks.html
+* "Data Formats":./Data_formats.html
* "Ingestion FAQ":./Ingestion-FAQ.html
h2. Querying
@@ -38,15 +39,15 @@
** "Granularities":./Granularities.html
** "DimensionSpecs":./DimensionSpecs.html
* Query Types
-** "GroupByQuery":./GroupByQuery.html
+** "GroupBy":./GroupByQuery.html
*** "OrderBy":./OrderBy.html
*** "Having":./Having.html
-** "SearchQuery":./SearchQuery.html
+** "Search":./SearchQuery.html
*** "SearchQuerySpec":./SearchQuerySpec.html
-** "SegmentMetadataQuery":./SegmentMetadataQuery.html
-** "TimeBoundaryQuery":./TimeBoundaryQuery.html
-** "TimeseriesQuery":./TimeseriesQuery.html
-** "TopNQuery":./TopNQuery.html
+** "Segment Metadata":./SegmentMetadataQuery.html
+** "Time Boundary":./TimeBoundaryQuery.html
+** "Timeseries":./TimeseriesQuery.html
+** "TopN":./TopNQuery.html
*** "TopNMetricSpec":./TopNMetricSpec.html
h2. Architecture

View File

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -18,8 +18,7 @@
~ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
-->
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.druid.extensions</groupId>
<artifactId>druid-hll</artifactId>
@@ -29,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -32,6 +32,7 @@ import io.druid.curator.discovery.ServerDiscoverySelector;
import io.druid.indexing.common.RetryPolicy;
import io.druid.indexing.common.RetryPolicyFactory;
import io.druid.indexing.common.task.Task;
+import org.jboss.netty.channel.ChannelException;
import org.joda.time.Duration;
import java.io.IOException;
@@ -94,6 +95,7 @@ public class RemoteTaskActionClient implements TaskActionClient
}
catch (Exception e) {
Throwables.propagateIfInstanceOf(e.getCause(), IOException.class);
+Throwables.propagateIfInstanceOf(e.getCause(), ChannelException.class);
throw Throwables.propagate(e);
}
@@ -105,7 +107,7 @@
return jsonMapper.convertValue(responseDict.get("result"), taskAction.getReturnTypeReference());
}
-catch (IOException e) {
+catch (IOException | ChannelException e) {
log.warn(e, "Exception submitting action for task[%s]", task.getId());
final Duration delay = retryPolicy.getAndIncrementRetryDelay();

View File

@@ -29,7 +29,7 @@
<style type="text/css">@import "css/demo_table.css";</style>
<script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-<script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+<script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
<script type="text/javascript" src="js/jquery.dataTables-1.8.2.js"></script>
<script type="text/javascript" src="js/druidTable-0.0.1.js"></script>
<script type="text/javascript" src="js/tablehelper-0.0.2.js"></script>

View File

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
-<version>0.6.53-SNAPSHOT</version>
+<version>0.6.54-SNAPSHOT</version>
</parent>
<dependencies>

View File

@@ -23,7 +23,7 @@
   <groupId>io.druid</groupId>
   <artifactId>druid</artifactId>
   <packaging>pom</packaging>
-  <version>0.6.53-SNAPSHOT</version>
+  <version>0.6.54-SNAPSHOT</version>
   <name>druid</name>
   <description>druid</description>
   <scm>
@@ -513,6 +513,8 @@
   <artifactId>maven-surefire-plugin</artifactId>
   <version>2.12.2</version>
   <configuration>
+    <!-- locale settings must be set on the command line before startup -->
+    <argLine>-Duser.language=en -Duser.country=US</argLine>
     <systemPropertyVariables>
       <user.timezone>UTC</user.timezone>
     </systemPropertyVariables>

View File

@@ -28,7 +28,7 @@
   <parent>
     <groupId>io.druid</groupId>
     <artifactId>druid</artifactId>
-    <version>0.6.53-SNAPSHOT</version>
+    <version>0.6.54-SNAPSHOT</version>
   </parent>
   <dependencies>

View File

@@ -9,7 +9,7 @@
   <parent>
     <groupId>io.druid</groupId>
     <artifactId>druid</artifactId>
-    <version>0.6.53-SNAPSHOT</version>
+    <version>0.6.54-SNAPSHOT</version>
   </parent>
   <dependencies>

View File

@@ -28,7 +28,7 @@
   <parent>
     <groupId>io.druid</groupId>
     <artifactId>druid</artifactId>
-    <version>0.6.53-SNAPSHOT</version>
+    <version>0.6.54-SNAPSHOT</version>
   </parent>
   <dependencies>

View File

@@ -28,7 +28,7 @@
   <parent>
     <groupId>io.druid</groupId>
     <artifactId>druid</artifactId>
-    <version>0.6.53-SNAPSHOT</version>
+    <version>0.6.54-SNAPSHOT</version>
   </parent>
   <dependencies>

View File

@@ -263,7 +263,8 @@ public class DatabaseRuleManager
   public List<Rule> getRules(final String dataSource)
   {
-    return rules.get().get(dataSource);
+    List<Rule> retVal = rules.get().get(dataSource);
+    return retVal == null ? Lists.<Rule>newArrayList() : retVal;
   }
   public List<Rule> getRulesWithDefault(final String dataSource)

View File

@@ -84,8 +84,7 @@ public class DatasourcesResource
   @Produces("application/json")
   public Response getQueryableDataSources(
       @QueryParam("full") String full,
-      @QueryParam("simple") String simple,
-      @QueryParam("gran") String gran
+      @QueryParam("simple") String simple
   )
   {
     Response.ResponseBuilder builder = Response.status(Response.Status.OK);
@@ -107,9 +106,6 @@ public class DatasourcesResource
           )
         )
       ).build();
-    } else if (gran != null) {
-      IndexGranularity granularity = IndexGranularity.fromString(gran);
-      // TODO
     }
     return builder.entity(
@@ -131,6 +127,7 @@ public class DatasourcesResource
   @DELETE
   @Path("/{dataSourceName}")
+  @Produces("application/json")
   public Response deleteDataSource(
       @PathParam("dataSourceName") final String dataSourceName,
       @QueryParam("kill") final String kill,
@@ -138,10 +135,22 @@ public class DatasourcesResource
   )
   {
     if (indexingServiceClient == null) {
-      return Response.status(Response.Status.OK).entity(ImmutableMap.of("error", "no indexing service found")).build();
+      return Response.ok().entity(ImmutableMap.of("error", "no indexing service found")).build();
     }
     if (kill != null && Boolean.valueOf(kill)) {
-      indexingServiceClient.killSegments(dataSourceName, new Interval(interval));
+      try {
+        indexingServiceClient.killSegments(dataSourceName, new Interval(interval));
+      }
+      catch (Exception e) {
+        return Response.status(Response.Status.NOT_FOUND)
+                       .entity(
+                           ImmutableMap.of(
+                               "error",
+                               "Exception occurred. Are you sure you have an indexing service?"
+                           )
+                       )
+                       .build();
+      }
     } else {
       if (!databaseSegmentManager.removeDatasource(dataSourceName)) {
         return Response.status(Response.Status.NOT_FOUND).build();

View File

@@ -21,13 +21,17 @@ package io.druid.server.http;
 import com.google.inject.Inject;
 import io.druid.db.DatabaseRuleManager;
+import io.druid.server.coordinator.rules.Rule;
+import javax.ws.rs.Consumes;
 import javax.ws.rs.GET;
+import javax.ws.rs.POST;
 import javax.ws.rs.Path;
 import javax.ws.rs.PathParam;
 import javax.ws.rs.Produces;
 import javax.ws.rs.QueryParam;
 import javax.ws.rs.core.Response;
+import java.util.List;
 /**
  */
@@ -67,4 +71,18 @@ public class RulesResource
     return Response.ok(databaseRuleManager.getRules(dataSourceName))
                    .build();
   }
+  @POST
+  @Path("/{dataSourceName}")
+  @Consumes("application/json")
+  public Response setDatasourceRules(
+      @PathParam("dataSourceName") final String dataSourceName,
+      final List<Rule> rules
+  )
+  {
+    if (databaseRuleManager.overrideRule(dataSourceName, rules)) {
+      return Response.ok().build();
+    }
+    return Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();
+  }
 }
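The new POST endpoint above replaces a datasource's rule list wholesale via `DatabaseRuleManager.overrideRule`, while the companion `getRules` change returns an empty list instead of null for unknown datasources. A minimal sketch of those two semantics using plain collections — `RuleStore` is a hypothetical stand-in, not the Druid class, and rules are modeled as strings for brevity:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RuleStore
{
  private final Map<String, List<String>> rules = new HashMap<>();

  // Mirrors the DatabaseRuleManager.getRules fix: an unknown datasource
  // yields an empty list rather than null, so callers never NPE.
  public List<String> getRules(String dataSource)
  {
    List<String> retVal = rules.get(dataSource);
    return retVal == null ? new ArrayList<>() : retVal;
  }

  // Mirrors the POST endpoint: the incoming list replaces whatever rules
  // the datasource had before (an override, not a merge).
  public boolean overrideRule(String dataSource, List<String> newRules)
  {
    rules.put(dataSource, new ArrayList<>(newRules));
    return true;
  }
}
```

In the real resource, a `false` return from `overrideRule` (e.g. a database failure) maps to an HTTP 500, and success maps to 200.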

View File

@@ -29,7 +29,7 @@
   <style type="text/css">@import "css/demo_table.css";</style>
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery.dataTables-1.8.2.js"></script>
   <script type="text/javascript" src="js/druidTable-0.0.1.js"></script>
   <script type="text/javascript" src="js/init-0.0.2.js"></script>

View File

@@ -30,9 +30,9 @@
   <style type="text/css">@import "css/config.css";</style>
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery-ui-1.9.2.js"></script>
-  <script type="text/javascript" src="js/config-0.0.1.js"></script>
+  <script type="text/javascript" src="js/config-0.0.2.js"></script>
 </head>
 <body>
 <div class="container">

View File

@@ -30,7 +30,7 @@
   <style type="text/css">@import "css/enable.css";</style>
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery-ui-1.9.2.js"></script>
   <script type="text/javascript" src="js/enable-0.0.1.js"></script>
 </head>

View File

@@ -20,7 +20,7 @@ function domToConfig(configDiv) {
 }
 function getConfigs() {
-  $.getJSON("/coordinator/config", function(data) {
+  $.getJSON("/druid/coordinator/v1/config", function(data) {
     $('#config_list').empty();
     $.each(data, function (key, value) {
@@ -72,10 +72,10 @@ $(document).ready(function() {
   $.ajax({
     type: 'POST',
-    url:'/coordinator/config',
+    url:'/druid/coordinator/v1/config',
     data: JSON.stringify(configs),
     contentType:"application/json; charset=utf-8",
-    dataType:"json",
+    dataType:"text",
     error: function(xhr, status, error) {
       $("#update_dialog").dialog("close");
       $("#error_dialog").html(xhr.responseText);

View File

@@ -24,7 +24,7 @@ $(document).ready(function() {
     url:'/druid/coordinator/v1/datasources/' + selected,
     data: JSON.stringify(selected),
     contentType:"application/json; charset=utf-8",
-    dataType:"json",
+    dataType:"text",
     error: function(xhr, status, error) {
       $("#enable_dialog").dialog("close");
       $("#error_dialog").html(xhr.responseText);
@@ -53,7 +53,7 @@ $(document).ready(function() {
     url:'/druid/coordinator/v1/datasources/' + selected,
     data: JSON.stringify(selected),
     contentType:"application/json; charset=utf-8",
-    dataType:"json",
+    dataType:"text",
     error: function(xhr, status, error) {
       $("#disable_dialog").dialog("close");
       $("#error_dialog").html(xhr.responseText);
@@ -81,12 +81,11 @@ $(document).ready(function() {
       $('#disabled_datasources').append($('<li>' + datasource + '</li>'));
     });
     $.each(db_datasources, function(index, datasource) {
-      $('#datasources').append($('<option></option>').attr("value", datasource).text(datasource));
+      $('#datasources').append($('<option></option>').val(datasource).text(datasource));
     });
   });
-  });
 });
 $("#enable").click(function() {
   $("#enable_dialog").dialog("open");
 });

View File

@@ -3,11 +3,12 @@
 $(document).ready(function() {
   var basePath = "/druid/coordinator/v1/";
-  var type = $('#select_type').attr('value') + '';
-  var view = $('#select_view').attr('value') + '';
+  var type = $('#select_type').val() + '';
+  var view = $('#select_view').val() + '';
   function handleTable(dontDisplay)
   {
+    console.log(type);
     $.get(basePath + type + '?full', function(data) {
       buildTable(data, $('#result_table'), dontDisplay);
@@ -75,8 +76,9 @@ $(document).ready(function() {
   }
   $('#view_button').click(function() {
-    type = $('#select_type').attr('value') + '';
-    view = $('#select_view').attr('value') + '';
+    console.log("here");
+    type = $('#select_type').val() + '';
+    view = $('#select_view').val() + '';
     resetViews();

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@@ -24,7 +24,7 @@ $(document).ready(function() {
     type: 'DELETE',
     url:'/druid/coordinator/v1/datasources/' + selected +'?kill=true&interval=' + interval,
     contentType:"application/json; charset=utf-8",
-    dataType:"json",
+    dataType:"text",
     error: function(xhr, status, error) {
       $("#confirm_dialog").dialog("close");
       $("#error_dialog").html(xhr.responseText);
@@ -43,7 +43,7 @@ $(document).ready(function() {
   $.getJSON("/druid/coordinator/v1/db/datasources?includeDisabled", function(data) {
     $.each(data, function(index, datasource) {
-      $('#datasources').append($('<option></option>').attr("value", datasource).text(datasource));
+      $('#datasources').append($('<option></option>').val(datasource).text(datasource));
     });
   });

View File

@@ -243,7 +243,7 @@ $(document).ready(function() {
     url:'/druid/coordinator/v1/rules/' + selected,
     data: JSON.stringify(rules),
     contentType:"application/json; charset=utf-8",
-    dataType:"json",
+    dataType:"text",
     error: function(xhr, status, error) {
       $("#update_dialog").dialog("close");
       $("#error_dialog").html(xhr.responseText);
@@ -266,9 +266,9 @@ $(document).ready(function() {
   $.getJSON("/druid/coordinator/v1/db/datasources", function(data) {
     $.each(data, function(index, datasource) {
-      $('#datasources').append($('<option></option>').attr("value", datasource).text(datasource));
+      $('#datasources').append($('<option></option>').val(datasource).text(datasource));
     });
-    $('#datasources').append($('<option></option>').attr("value", defaultDatasource).text(defaultDatasource));
+    $('#datasources').append($('<option></option>').val(defaultDatasource).text(defaultDatasource));
   });
   $("#datasources").change(function(event) {
@@ -276,7 +276,7 @@ $(document).ready(function() {
     $("#rules").show();
   });
-  $(".rule_dropdown_types").live("change", function(event) {
+  $(document).on("change", '.rule_dropdown_types', null, function(event) {
     var newRule = {
       "type" : $(event.target).val()
     };
@@ -284,11 +284,11 @@ $(document).ready(function() {
     ruleBody.replaceWith(makeRuleBody(newRule));
   });
-  $(".delete_rule").live("click", function(event) {
+  $(document).on("click", '.delete_rule', null, function(event) {
     $(event.target).parent(".rule").remove();
   });
-  $(".add_tier").live("click", function(event) {
+  $(document).on("click", '.add_tier', null, function(event) {
     $(event.target).parent().append(makeTierLoad(null, 0));
   });

View File

@@ -29,7 +29,7 @@
   <style type="text/css">@import "css/jquery-ui-1.9.2.css";</style>
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery-ui-1.9.2.js"></script>
   <script type="text/javascript" src="js/kill-0.0.1.js"></script>
 </head>

View File

@@ -30,9 +30,9 @@
   <style type="text/css">@import "css/rules.css";</style>
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery-ui-1.9.2.js"></script>
-  <script type="text/javascript" src="js/rules-0.0.1.js"></script>
+  <script type="text/javascript" src="js/rules-0.0.2.js"></script>
 </head>
 <body>
 <div class="container">

View File

@@ -30,11 +30,11 @@
   <style type="text/css">@import "css/index.css";</style>
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery.dataTables-1.8.2.js"></script>
   <script type="text/javascript" src="js/druidTable-0.0.1.js"></script>
   <script type="text/javascript" src="js/tablehelper-0.0.2.js"></script>
-  <script type="text/javascript" src="js/handlers-0.0.1.js"></script>
+  <script type="text/javascript" src="js/handlers-0.0.2.js"></script>
 </head>
 <body>

View File

@@ -27,7 +27,7 @@
   <parent>
     <groupId>io.druid</groupId>
     <artifactId>druid</artifactId>
-    <version>0.6.53-SNAPSHOT</version>
+    <version>0.6.54-SNAPSHOT</version>
   </parent>
   <dependencies>