mirror of https://github.com/apache/druid.git

commit 689191c5ad: Merge branch 'master' into subquery
@@ -28,7 +28,7 @@
     <parent>
         <groupId>io.druid</groupId>
         <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -28,7 +28,7 @@
     <parent>
         <groupId>io.druid</groupId>
         <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -1,4 +1,5 @@
 ---
 layout: doc_page
 ---
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely to edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+# About Experimental Features
+Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Aggregations
 Aggregations are specifications of processing over metrics available in Druid.
 Available aggregations are:
 
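For orientation only (not part of this commit's diff): the list that follows in the full Aggregations doc enumerates concrete aggregator specs. A hedged sketch of two common ones, with hypothetical output names borrowed from the Wikipedia example used elsewhere in this commit:

```json
{ "type" : "count", "name" : "count" }
{ "type" : "longSum", "name" : "added_sum", "fieldName" : "added" }
```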
@@ -0,0 +1,87 @@
+---
+layout: doc_page
+---
+Data Formats for Ingestion
+==========================
+
+Druid can ingest data in JSON, CSV, or TSV. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest CSV or TSV data.
+
+## Formatting the Data
+The following are three samples of the data used in the [Wikipedia example](Tutorial:-Loading-Your-Data-Part-1.html).
+
+_JSON_
+
+```json
+{"timestamp": "2013-08-31T01:02:33Z", "page": "Gypsy Danger", "language" : "en", "user" : "nuclear", "unpatrolled" : "true", "newPage" : "true", "robot": "false", "anonymous": "false", "namespace":"article", "continent":"North America", "country":"United States", "region":"Bay Area", "city":"San Francisco", "added": 57, "deleted": 200, "delta": -143}
+{"timestamp": "2013-08-31T03:32:45Z", "page": "Striker Eureka", "language" : "en", "user" : "speed", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Australia", "country":"Australia", "region":"Cantebury", "city":"Syndey", "added": 459, "deleted": 129, "delta": 330}
+{"timestamp": "2013-08-31T07:11:21Z", "page": "Cherno Alpha", "language" : "ru", "user" : "masterYi", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"article", "continent":"Asia", "country":"Russia", "region":"Oblast", "city":"Moscow", "added": 123, "deleted": 12, "delta": 111}
+{"timestamp": "2013-08-31T11:58:39Z", "page": "Crimson Typhoon", "language" : "zh", "user" : "triplets", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"China", "region":"Shanxi", "city":"Taiyuan", "added": 905, "deleted": 5, "delta": 900}
+{"timestamp": "2013-08-31T12:41:27Z", "page": "Coyote Tango", "language" : "ja", "user" : "cancer", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"Japan", "region":"Kanto", "city":"Tokyo", "added": 1, "deleted": 10, "delta": -9}
+```
+
+_CSV_
+
+```
+2013-08-31T01:02:33Z,"Gypsy Danger","en","nuclear","true","true","false","false","article","North America","United States","Bay Area","San Francisco",57,200,-143
+2013-08-31T03:32:45Z,"Striker Eureka","en","speed","false","true","true","false","wikipedia","Australia","Australia","Cantebury","Syndey",459,129,330
+2013-08-31T07:11:21Z,"Cherno Alpha","ru","masterYi","false","true","true","false","article","Asia","Russia","Oblast","Moscow",123,12,111
+2013-08-31T11:58:39Z,"Crimson Typhoon","zh","triplets","true","false","true","false","wikipedia","Asia","China","Shanxi","Taiyuan",905,5,900
+2013-08-31T12:41:27Z,"Coyote Tango","ja","cancer","true","false","true","false","wikipedia","Asia","Japan","Kanto","Tokyo",1,10,-9
+```
+
+_TSV_
+
+```
+2013-08-31T01:02:33Z "Gypsy Danger" "en" "nuclear" "true" "true" "false" "false" "article" "North America" "United States" "Bay Area" "San Francisco" 57 200 -143
+2013-08-31T03:32:45Z "Striker Eureka" "en" "speed" "false" "true" "true" "false" "wikipedia" "Australia" "Australia" "Cantebury" "Syndey" 459 129 330
+2013-08-31T07:11:21Z "Cherno Alpha" "ru" "masterYi" "false" "true" "true" "false" "article" "Asia" "Russia" "Oblast" "Moscow" 123 12 111
+2013-08-31T11:58:39Z "Crimson Typhoon" "zh" "triplets" "true" "false" "true" "false" "wikipedia" "Asia" "China" "Shanxi" "Taiyuan" 905 5 900
+2013-08-31T12:41:27Z "Coyote Tango" "ja" "cancer" "true" "false" "true" "false" "wikipedia" "Asia" "Japan" "Kanto" "Tokyo" 1 10 -9
+```
+
+Note that the CSV and TSV data do not contain column heads. This becomes important when you specify the data for ingesting.
+
+## Configuring Ingestion For the Indexing Service
+If you use the [indexing service](Indexing-Service.html) for ingesting the data, a [task](Tasks.html) must be configured and submitted. Tasks are configured with a JSON object which, among other things, specifies the data source and type. In the Wikipedia example, JSON data was read from a local file. The task spec contains a firehose element to specify this:
+
+```json
+    "firehose" : {
+        "type" : "local",
+        "baseDir" : "examples/indexing",
+        "filter" : "wikipedia_data.json",
+        "parser" : {
+            "timestampSpec" : {
+                "column" : "timestamp"
+            },
+            "data" : {
+                "format" : "json",
+                "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
+            }
+        }
+    }
+```
+
+Specified here are the location of the datafile, the timestamp column, the format of the data, and the columns that will become dimensions in Druid.
+
+Since the CSV data does not contain the column names, they will have to be added before that data can be processed:
+
+```json
+    "firehose" : {
+        "type" : "local",
+        "baseDir" : "examples/indexing/",
+        "filter" : "wikipedia_data.csv",
+        "parser" : {
+            "timestampSpec" : {
+                "column" : "timestamp"
+            },
+            "data" : {
+                "type" : "csv",
+                "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
+                "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
+            }
+        }
+    }
+```
+
+Note also that the filename extension and the data type were changed to "csv". For the TSV data, the same changes are made but with "tsv" for the filename extension and the data type.
+
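As a hedged sketch of the TSV variant described in the last paragraph above: only the filter extension and the data type change relative to the CSV spec, with column and dimension names mirroring the CSV example:

```json
"filter" : "wikipedia_data.tsv",
...
"data" : {
  "type" : "tsv",
  "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
  "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
}
```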
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Deep Storage
 Deep storage is where segments are stored. It is a storage mechanism that Druid does not provide. This deep storage infrastructure defines the level of durability of your data, as long as Druid nodes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose. If segments disappear from this storage layer, then you will lose whatever data those segments represented.
 
 The currently supported types of deep storage follow.
@@ -1,6 +1,8 @@
 ---
 layout: doc_page
 ---
+# Transforming Dimension Values
+The following JSON fields can be used in a query to operate on dimension values.
 
 ## DimensionSpec
 
@@ -8,7 +10,7 @@ layout: doc_page
 
 ### DefaultDimensionSpec
 
-Returns dimension values as is and optionally renames renames the dimension.
+Returns dimension values as is and optionally renames the dimension.
 
 ```json
 { "type" : "default", "dimension" : <dimension>, "outputName": <output_name> }
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+#Query Filters
 A filter is a JSON object indicating which rows of data should be included in the computation for a query. It’s essentially the equivalent of the WHERE clause in SQL. Druid supports the following types of filters.
 
 ### Selector filter
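As a hedged illustration of the selector filter named above (the dimension and value are hypothetical, borrowed from the Wikipedia example):

```json
{ "type" : "selector", "dimension" : "namespace", "value" : "article" }
```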
@@ -78,4 +79,4 @@ The following matches any dimension values for the dimension `name` between `'ba
   "dimension" : "name",
   "function" : "function(x) { return(x >= 'bar' && x <= 'foo') }"
 }
-```
+```
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Geographic Queries
 Druid supports filtering specially spatially indexed columns based on an origin and a bound.
 
 # Spatial Indexing
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Aggregation Granularity
 The granularity field determines how data gets bucketed across the time dimension, i.e how it gets aggregated by hour, day, minute, etc.
 
 It can be specified either as a string for simple granularities or as an object for arbitrary granularities.
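A hedged sketch of the two forms mentioned above, a simple string granularity and a duration-based object granularity (the millisecond value is illustrative):

```json
"granularity" : "day"

"granularity" : { "type" : "duration", "duration" : 3600000 }
```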
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# groupBy Queries
 These types of queries take a groupBy query object and return an array of JSON objects where each object represents a grouping asked for by the query. Note: If you only want to do straight aggreagates for some time range, we highly recommend using [TimeseriesQueries](TimeseriesQuery.html) instead. The performance will be substantially better.
 An example groupBy query object is shown below:
 
@@ -86,4 +87,4 @@ To pull it all together, the above query would return *n\*m* data points, up to
   },
   ...
 ]
-```
+```
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Filter groupBy Query Results
 A having clause is a JSON object identifying which rows from a groupBy query should be returned, by specifying conditions on aggregated values.
 
 It is essentially the equivalent of the HAVING clause in SQL.
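As a hedged example of a having clause (the aggregation name and threshold are hypothetical):

```json
{ "type" : "greaterThan", "aggregation" : "edit_count", "value" : 100 }
```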
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Druid Indexing Service
 The indexing service is a highly-available, distributed service that runs indexing related tasks. Indexing service [tasks](Tasks.html) create (and sometimes destroy) Druid [segments](Segments.html). The indexing service has a master/slave like architecture.
 
 The indexing service is composed of three main components: a peon component that can run a single task, a [Middle Manager](Middlemanager.html) component that manages peons, and an overlord component that manages task distribution to middle managers.
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# MySQL Database
 MySQL is an external dependency of Druid. We use it to store various metadata about the system, but not to store the actual data. There are a number of tables used for various purposes described below.
 
 Segments Table
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Sort groupBy Query Results
 The orderBy field provides the functionality to sort and limit the set of results from a groupBy query. If you group by a single dimension and are ordering by a single metric, we highly recommend using [TopN Queries](TopNQuery.html) instead. The performance will be substantially better. Available options are:
 
 ### DefaultLimitSpec
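A hedged sketch of the DefaultLimitSpec named above (the limit and ordering column are illustrative):

```json
{ "type" : "default", "limit" : 5, "columns" : ["added"] }
```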
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Post-Aggregations
 Post-aggregations are specifications of processing that should happen on aggregated values as they come out of Druid. If you include a post aggregation as part of a query, make sure to include all aggregators the post-aggregator requires.
 
 There are several post-aggregators available.
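For illustration, a hedged sketch of an arithmetic post-aggregator that divides one aggregated value by another (the field names are hypothetical and would need matching aggregators in the query):

```json
{
  "type" : "arithmetic",
  "name" : "average",
  "fn" : "/",
  "fields" : [
    { "type" : "fieldAccess", "fieldName" : "total" },
    { "type" : "fieldAccess", "fieldName" : "rows" }
  ]
}
```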
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Configuring Rules for Coordinator Nodes
 Note: It is recommended that the coordinator console is used to configure rules. However, the coordinator node does have HTTP endpoints to programmatically configure rules.
 
 Load Rules
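As a hedged illustration of a load rule of the kind configured through these endpoints (the period and tier values are hypothetical):

```json
{ "type" : "loadByPeriod", "period" : "P1M", "tier" : "hot" }
```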
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Search Queries
 A search query returns dimension values that match the search specification.
 
 ```json
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Refining Search Queries
 Search query specs define how a "match" is defined between a search value and a dimension value. The available search query specs are:
 
 InsensitiveContainsSearchQuerySpec
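A hedged sketch of the InsensitiveContainsSearchQuerySpec named above (the search value is hypothetical):

```json
{ "type" : "insensitive_contains", "value" : "druid" }
```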
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Segment Metadata Queries
 Segment metadata queries return per segment information about:
 
 * Cardinality of all columns in the segment
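For orientation, a hedged sketch of a segmentMetadata query (the dataSource and interval are hypothetical; the list of returned information continues in the full doc beyond this hunk):

```json
{
  "queryType" : "segmentMetadata",
  "dataSource" : "wikipedia",
  "intervals" : ["2013-01-01/2014-01-01"]
}
```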
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Tasks
 Tasks are run on middle managers and always operate on a single data source.
 
 There are several different types of tasks.
@@ -158,34 +159,15 @@ The indexing service can also run real-time tasks. These tasks effectively trans
 }
 ```
 
-Id:
-The ID of the task. Not required.
-
-Resource:
-A JSON object used for high availability purposes. Not required.
-
+|Field|Type|Description|Required|
+|-----|----|-----------|--------|
+|id|String|The ID of the task.|No|
+|Resource|JSON object|Used for high availability purposes.|No|
+|availabilityGroup|String|An uniqueness identifier for the task. Tasks with the same availability group will always run on different middle managers. Used mainly for replication. |yes|
+|requiredCapacity|Integer|How much middle manager capacity this task will take.|yes|
 
-Schema:
-See [Schema](Realtime.html).
+For schema, fireDepartmentConfig, windowPeriod, segmentGranularity, and rejectionPolicy, see the [realtime-ingestion doc](Realtime-ingestion.html). For firehose configuration, see [Firehose](Firehose.html).
 
-Fire Department Config:
-See [Config](Realtime.html).
-
-Firehose:
-See [Firehose](Firehose.html).
-
-Window Period:
-See [Realtime](Realtime.html).
-
-Segment Granularity:
-See [Realtime](Realtime.html).
-
-Rejection Policy:
-See [Realtime](Realtime.html).
-
 Segment Merging Tasks
 ---------------------
 
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Time Boundary Queries
 Time boundary queries return the earliest and latest data points of a data set. The grammar is:
 
 ```json
@@ -80,7 +80,7 @@ Let's start doing stuff. You can start a Druid [Realtime](Realtime.html) node by
 Select "wikipedia".
 
-Once the node starts up you will see a bunch of logs about setting up properties and connecting to the data source. If everything was successful, you should see messages of the form shown below.
+Note that the first time you start the example, it may take some extra time due to its fetching various dependencies. Once the node starts up you will see a bunch of logs about setting up properties and connecting to the data source. If everything was successful, you should see messages of the form shown below.
 
 ```
 2013-09-04 19:33:11,922 INFO [main] org.eclipse.jetty.server.AbstractConnector - Started SelectChannelConnector@0.0.0.0:8083
@@ -118,7 +118,7 @@ Select "wikipedia" once again. This script issues [GroupByQueries](GroupByQuery.
 This is a **groupBy** query, which you may be familiar with from SQL. We are grouping, or aggregating, via the `dimensions` field: `["page"]`. We are **filtering** via the `namespace` dimension, to only look at edits on `articles`. Our **aggregations** are what we are calculating: a count of the number of data rows, and a count of the number of edits that have occurred.
 
-The result looks something like this:
+The result looks something like this (when it's prettified):
 
 ```json
 [
@@ -323,13 +323,13 @@ Feel free to tweak other query parameters to answer other questions you may have
 Next Steps
 ----------
 
-What to know even more information about the Druid Cluster? Check out [The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html)
+What to know even more information about the Druid Cluster? Check out [The Druid Cluster](Tutorial%3A-The-Druid-Cluster.html).
 
 Druid is even more fun if you load your own data into it! To learn how to load your data, see [Loading Your Data](Tutorial%3A-Loading-Your-Data-Part-1.html).
 
 Additional Information
 ----------------------
 
-This tutorial is merely showcasing a small fraction of what Druid can do. If you are interested in more information about Druid, including setting up a more sophisticated Druid cluster, please read the other links in our wiki.
+This tutorial is merely showcasing a small fraction of what Druid can do. If you are interested in more information about Druid, including setting up a more sophisticated Druid cluster, read more of the Druid documentation and the blogs found on druid.io.
 
 And thus concludes our journey! Hopefully you learned a thing or two about Druid real-time ingestion, querying Druid, and how Druid can be used to solve problems. If you have additional questions, feel free to post in our [google groups page](https://groups.google.com/forum/#!forum/druid-development).
@@ -7,7 +7,7 @@ Welcome back! In our first [tutorial](Tutorial%3A-A-First-Look-at-Druid.html), w
 This tutorial will hopefully answer these questions!
 
-In this tutorial, we will set up other types of Druid nodes as well as and external dependencies for a fully functional Druid cluster. The architecture of Druid is very much like the [Megazord](http://www.youtube.com/watch?v=7mQuHh1X4H4) from the popular 90s show Mighty Morphin' Power Rangers. Each Druid node has a specific purpose and the nodes come together to form a fully functional system.
+In this tutorial, we will set up other types of Druid nodes and external dependencies for a fully functional Druid cluster. The architecture of Druid is very much like the [Megazord](http://www.youtube.com/watch?v=7mQuHh1X4H4) from the popular 90s show Mighty Morphin' Power Rangers. Each Druid node has a specific purpose and the nodes come together to form a fully functional system.
 
 ## Downloading Druid
@@ -32,9 +32,9 @@ For deep storage, we have made a public S3 bucket (static.druid.io) available wh
 #### Setting up MySQL
 
-1. If you don't already have it, download MySQL Community Server here: [http://dev.mysql.com/downloads/mysql/](http://dev.mysql.com/downloads/mysql/)
-2. Install MySQL
-3. Create a druid user and database
+1. If you don't already have it, download MySQL Community Server here: [http://dev.mysql.com/downloads/mysql/](http://dev.mysql.com/downloads/mysql/).
+2. Install MySQL.
+3. Create a druid user and database.
 
 ```bash
 mysql -u root
@@ -88,7 +88,7 @@ Metrics (things to aggregate over):
 
 ## The Cluster
 
-Let's start up a few nodes and download our data. First things though, let's make sure we have config directory where we will store configs for our various nodes:
+Let's start up a few nodes and download our data. First, let's make sure we have configs in the config directory for our various nodes. Issue the following from the Druid home directory:
 
 ```
 ls config
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# Versioning Druid
 This page discusses how we do versioning and provides information on our stable releases.
 
 Versioning Strategy
@@ -1,6 +1,7 @@
 ---
 layout: doc_page
 ---
+# ZooKeeper
 Druid uses [ZooKeeper](http://zookeeper.apache.org/) (ZK) for management of current cluster state. The operations that happen over ZK are
 
 1. [Coordinator](Coordinator.html) leader election
@@ -53,4 +53,8 @@
 .doc-content table code {
   background-color: transparent;
 }
 }
+
+td, th {
+  padding: 5px;
+}
@@ -28,6 +28,7 @@ h2. Data Ingestion
 * "Batch":./Batch-ingestion.html
 * "Indexing Service":./Indexing-Service.html
 ** "Tasks":./Tasks.html
+* "Data Formats":./Data_formats.html
 * "Ingestion FAQ":./Ingestion-FAQ.html
 
 h2. Querying
@@ -38,15 +39,15 @@ h2. Querying
 ** "Granularities":./Granularities.html
 ** "DimensionSpecs":./DimensionSpecs.html
 * Query Types
-** "GroupByQuery":./GroupByQuery.html
+** "GroupBy":./GroupByQuery.html
 *** "OrderBy":./OrderBy.html
 *** "Having":./Having.html
-** "SearchQuery":./SearchQuery.html
+** "Search":./SearchQuery.html
 *** "SearchQuerySpec":./SearchQuerySpec.html
-** "SegmentMetadataQuery":./SegmentMetadataQuery.html
-** "TimeBoundaryQuery":./TimeBoundaryQuery.html
-** "TimeseriesQuery":./TimeseriesQuery.html
-** "TopNQuery":./TopNQuery.html
+** "Segment Metadata":./SegmentMetadataQuery.html
+** "Time Boundary":./TimeBoundaryQuery.html
+** "Timeseries":./TimeseriesQuery.html
+** "TopN":./TopNQuery.html
 *** "TopNMetricSpec":./TopNMetricSpec.html
 
 h2. Architecture
@@ -28,7 +28,7 @@
     <parent>
         <groupId>io.druid</groupId>
         <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -28,7 +28,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -18,8 +18,7 @@
 ~ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 -->
 
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
     <modelVersion>4.0.0</modelVersion>
     <groupId>io.druid.extensions</groupId>
     <artifactId>druid-hll</artifactId>
@@ -29,7 +28,7 @@
     <parent>
         <groupId>io.druid</groupId>
         <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -28,7 +28,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -28,7 +28,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -32,6 +32,7 @@ import io.druid.curator.discovery.ServerDiscoverySelector;
 import io.druid.indexing.common.RetryPolicy;
 import io.druid.indexing.common.RetryPolicyFactory;
 import io.druid.indexing.common.task.Task;
+import org.jboss.netty.channel.ChannelException;
 import org.joda.time.Duration;
 
 import java.io.IOException;
@@ -94,6 +95,7 @@ public class RemoteTaskActionClient implements TaskActionClient
       }
       catch (Exception e) {
         Throwables.propagateIfInstanceOf(e.getCause(), IOException.class);
+        Throwables.propagateIfInstanceOf(e.getCause(), ChannelException.class);
         throw Throwables.propagate(e);
       }
 
@@ -105,7 +107,7 @@ public class RemoteTaskActionClient implements TaskActionClient
 
       return jsonMapper.convertValue(responseDict.get("result"), taskAction.getReturnTypeReference());
     }
-    catch (IOException e) {
+    catch (IOException | ChannelException e) {
       log.warn(e, "Exception submitting action for task[%s]", task.getId());
 
       final Duration delay = retryPolicy.getAndIncrementRetryDelay();
@@ -29,7 +29,7 @@
   <style type="text/css">@import "css/demo_table.css";</style>
 
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery.dataTables-1.8.2.js"></script>
   <script type="text/javascript" src="js/druidTable-0.0.1.js"></script>
   <script type="text/javascript" src="js/tablehelper-0.0.2.js"></script>
@@ -28,7 +28,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -28,7 +28,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
pom.xml
@@ -23,7 +23,7 @@
     <groupId>io.druid</groupId>
     <artifactId>druid</artifactId>
     <packaging>pom</packaging>
-    <version>0.6.53-SNAPSHOT</version>
+    <version>0.6.54-SNAPSHOT</version>
     <name>druid</name>
     <description>druid</description>
     <scm>
@@ -513,6 +513,8 @@
                 <artifactId>maven-surefire-plugin</artifactId>
                 <version>2.12.2</version>
                 <configuration>
+                    <!-- locale settings must be set on the command line before startup -->
+                    <argLine>-Duser.language=en -Duser.country=US</argLine>
                     <systemPropertyVariables>
                         <user.timezone>UTC</user.timezone>
                     </systemPropertyVariables>
@@ -28,7 +28,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -9,7 +9,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -28,7 +28,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -28,7 +28,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>
@@ -263,7 +263,8 @@ public class DatabaseRuleManager
 
   public List<Rule> getRules(final String dataSource)
   {
-    return rules.get().get(dataSource);
+    List<Rule> retVal = rules.get().get(dataSource);
+    return retVal == null ? Lists.<Rule>newArrayList() : retVal;
   }
 
   public List<Rule> getRulesWithDefault(final String dataSource)
@@ -84,8 +84,7 @@ public class DatasourcesResource
   @Produces("application/json")
   public Response getQueryableDataSources(
       @QueryParam("full") String full,
-      @QueryParam("simple") String simple,
-      @QueryParam("gran") String gran
+      @QueryParam("simple") String simple
   )
   {
     Response.ResponseBuilder builder = Response.status(Response.Status.OK);
@@ -107,9 +106,6 @@ public class DatasourcesResource
               )
           )
       ).build();
-    } else if (gran != null) {
-      IndexGranularity granularity = IndexGranularity.fromString(gran);
-      // TODO
     }
 
     return builder.entity(
@@ -131,6 +127,7 @@ public class DatasourcesResource
 
   @DELETE
   @Path("/{dataSourceName}")
+  @Produces("application/json")
   public Response deleteDataSource(
       @PathParam("dataSourceName") final String dataSourceName,
       @QueryParam("kill") final String kill,
@@ -138,10 +135,22 @@ public class DatasourcesResource
   )
   {
     if (indexingServiceClient == null) {
-      return Response.status(Response.Status.OK).entity(ImmutableMap.of("error", "no indexing service found")).build();
+      return Response.ok().entity(ImmutableMap.of("error", "no indexing service found")).build();
     }
     if (kill != null && Boolean.valueOf(kill)) {
-      indexingServiceClient.killSegments(dataSourceName, new Interval(interval));
+      try {
+        indexingServiceClient.killSegments(dataSourceName, new Interval(interval));
+      }
+      catch (Exception e) {
+        return Response.status(Response.Status.NOT_FOUND)
+                       .entity(
+                           ImmutableMap.of(
+                               "error",
+                               "Exception occurred. Are you sure you have an indexing service?"
+                           )
+                       )
+                       .build();
+      }
     } else {
       if (!databaseSegmentManager.removeDatasource(dataSourceName)) {
         return Response.status(Response.Status.NOT_FOUND).build();
@@ -21,13 +21,17 @@ package io.druid.server.http;
 
 import com.google.inject.Inject;
 import io.druid.db.DatabaseRuleManager;
+import io.druid.server.coordinator.rules.Rule;
 
+import javax.ws.rs.Consumes;
 import javax.ws.rs.GET;
+import javax.ws.rs.POST;
 import javax.ws.rs.Path;
 import javax.ws.rs.PathParam;
 import javax.ws.rs.Produces;
 import javax.ws.rs.QueryParam;
 import javax.ws.rs.core.Response;
+import java.util.List;
 
 /**
  */
@@ -67,4 +71,18 @@ public class RulesResource
     return Response.ok(databaseRuleManager.getRules(dataSourceName))
                    .build();
   }
+
+  @POST
+  @Path("/{dataSourceName}")
+  @Consumes("application/json")
+  public Response setDatasourceRules(
+      @PathParam("dataSourceName") final String dataSourceName,
+      final List<Rule> rules
+  )
+  {
+    if (databaseRuleManager.overrideRule(dataSourceName, rules)) {
+      return Response.ok().build();
+    }
+    return Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();
+  }
 }
@@ -29,7 +29,7 @@
   <style type="text/css">@import "css/demo_table.css";</style>
 
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery.dataTables-1.8.2.js"></script>
   <script type="text/javascript" src="js/druidTable-0.0.1.js"></script>
   <script type="text/javascript" src="js/init-0.0.2.js"></script>
@@ -30,9 +30,9 @@
   <style type="text/css">@import "css/config.css";</style>
 
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery-ui-1.9.2.js"></script>
-  <script type="text/javascript" src="js/config-0.0.1.js"></script>
+  <script type="text/javascript" src="js/config-0.0.2.js"></script>
 </head>
 <body>
 <div class="container">
@@ -30,7 +30,7 @@
   <style type="text/css">@import "css/enable.css";</style>
 
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery-ui-1.9.2.js"></script>
   <script type="text/javascript" src="js/enable-0.0.1.js"></script>
 </head>
@@ -20,7 +20,7 @@ function domToConfig(configDiv) {
 }
 
 function getConfigs() {
-  $.getJSON("/coordinator/config", function(data) {
+  $.getJSON("/druid/coordinator/v1/config", function(data) {
     $('#config_list').empty();
 
     $.each(data, function (key, value) {
|
|||
|
||||
$.ajax({
|
||||
type: 'POST',
|
||||
url:'/coordinator/config',
|
||||
url:'/druid/coordinator/v1/config',
|
||||
data: JSON.stringify(configs),
|
||||
contentType:"application/json; charset=utf-8",
|
||||
dataType:"json",
|
||||
dataType:"text",
|
||||
error: function(xhr, status, error) {
|
||||
$("#update_dialog").dialog("close");
|
||||
$("#error_dialog").html(xhr.responseText);
|
|
@@ -24,7 +24,7 @@ $(document).ready(function() {
       url:'/druid/coordinator/v1/datasources/' + selected,
       data: JSON.stringify(selected),
       contentType:"application/json; charset=utf-8",
-      dataType:"json",
+      dataType:"text",
       error: function(xhr, status, error) {
         $("#enable_dialog").dialog("close");
         $("#error_dialog").html(xhr.responseText);
@@ -53,7 +53,7 @@ $(document).ready(function() {
       url:'/druid/coordinator/v1/datasources/' + selected,
       data: JSON.stringify(selected),
       contentType:"application/json; charset=utf-8",
-      dataType:"json",
+      dataType:"text",
       error: function(xhr, status, error) {
         $("#disable_dialog").dialog("close");
         $("#error_dialog").html(xhr.responseText);
@@ -81,12 +81,11 @@ $(document).ready(function() {
       $('#disabled_datasources').append($('<li>' + datasource + '</li>'));
     });
     $.each(db_datasources, function(index, datasource) {
-      $('#datasources').append($('<option></option>').attr("value", datasource).text(datasource));
+      $('#datasources').append($('<option></option>').val(datasource).text(datasource));
     });
   });
 });
-
 
 $("#enable").click(function() {
   $("#enable_dialog").dialog("open");
 });
@@ -3,11 +3,12 @@
 $(document).ready(function() {
 
   var basePath = "/druid/coordinator/v1/";
-  var type = $('#select_type').attr('value') + '';
-  var view = $('#select_view').attr('value') + '';
+  var type = $('#select_type').val() + '';
+  var view = $('#select_view').val() + '';
 
   function handleTable(dontDisplay)
   {
     console.log(type);
     $.get(basePath + type + '?full', function(data) {
       buildTable(data, $('#result_table'), dontDisplay);
@@ -75,8 +76,9 @@ $(document).ready(function() {
   }
 
   $('#view_button').click(function() {
-    type = $('#select_type').attr('value') + '';
-    view = $('#select_view').attr('value') + '';
+    console.log("here");
+    type = $('#select_type').val() + '';
+    view = $('#select_view').val() + '';
 
     resetViews();
File diff suppressed because one or more lines are too long
File diff suppressed because it is too large
@@ -24,7 +24,7 @@ $(document).ready(function() {
       type: 'DELETE',
       url:'/druid/coordinator/v1/datasources/' + selected +'?kill=true&interval=' + interval,
       contentType:"application/json; charset=utf-8",
-      dataType:"json",
+      dataType:"text",
       error: function(xhr, status, error) {
         $("#confirm_dialog").dialog("close");
         $("#error_dialog").html(xhr.responseText);
@@ -43,7 +43,7 @@ $(document).ready(function() {
 
   $.getJSON("/druid/coordinator/v1/db/datasources?includeDisabled", function(data) {
     $.each(data, function(index, datasource) {
-      $('#datasources').append($('<option></option>').attr("value", datasource).text(datasource));
+      $('#datasources').append($('<option></option>').val(datasource).text(datasource));
     });
   });
 
@@ -243,7 +243,7 @@ $(document).ready(function() {
       url:'/druid/coordinator/v1/rules/' + selected,
       data: JSON.stringify(rules),
       contentType:"application/json; charset=utf-8",
-      dataType:"json",
+      dataType:"text",
       error: function(xhr, status, error) {
         $("#update_dialog").dialog("close");
         $("#error_dialog").html(xhr.responseText);
@@ -266,9 +266,9 @@ $(document).ready(function() {
 
   $.getJSON("/druid/coordinator/v1/db/datasources", function(data) {
     $.each(data, function(index, datasource) {
-      $('#datasources').append($('<option></option>').attr("value", datasource).text(datasource));
+      $('#datasources').append($('<option></option>').val(datasource).text(datasource));
     });
-    $('#datasources').append($('<option></option>').attr("value", defaultDatasource).text(defaultDatasource));
+    $('#datasources').append($('<option></option>').val(defaultDatasource).text(defaultDatasource));
   });
 
   $("#datasources").change(function(event) {
@@ -276,7 +276,7 @@ $(document).ready(function() {
     $("#rules").show();
   });
 
-  $(".rule_dropdown_types").live("change", function(event) {
+  $(document).on("change", '.rule_dropdown_types', null, function(event) {
     var newRule = {
       "type" : $(event.target).val()
     };
@@ -284,11 +284,11 @@ $(document).ready(function() {
     ruleBody.replaceWith(makeRuleBody(newRule));
   });
 
-  $(".delete_rule").live("click", function(event) {
+  $(document).on("click", '.delete_rule', null, function(event) {
     $(event.target).parent(".rule").remove();
   });
 
-  $(".add_tier").live("click", function(event) {
+  $(document).on("click", '.add_tier', null, function(event) {
     $(event.target).parent().append(makeTierLoad(null, 0));
   });
@@ -29,7 +29,7 @@
   <style type="text/css">@import "css/jquery-ui-1.9.2.css";</style>
 
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery-ui-1.9.2.js"></script>
   <script type="text/javascript" src="js/kill-0.0.1.js"></script>
 </head>
@@ -30,9 +30,9 @@
   <style type="text/css">@import "css/rules.css";</style>
 
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery-ui-1.9.2.js"></script>
-  <script type="text/javascript" src="js/rules-0.0.1.js"></script>
+  <script type="text/javascript" src="js/rules-0.0.2.js"></script>
 </head>
 <body>
 <div class="container">
@@ -30,11 +30,11 @@
   <style type="text/css">@import "css/index.css";</style>
 
   <script type="text/javascript" src="js/underscore-1.2.2.js"></script>
-  <script type="text/javascript" src="js/jquery-1.8.3.js"></script>
+  <script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
   <script type="text/javascript" src="js/jquery.dataTables-1.8.2.js"></script>
   <script type="text/javascript" src="js/druidTable-0.0.1.js"></script>
   <script type="text/javascript" src="js/tablehelper-0.0.2.js"></script>
-  <script type="text/javascript" src="js/handlers-0.0.1.js"></script>
+  <script type="text/javascript" src="js/handlers-0.0.2.js"></script>
 </head>
 
 <body>
@@ -27,7 +27,7 @@
     <parent>
        <groupId>io.druid</groupId>
        <artifactId>druid</artifactId>
-        <version>0.6.53-SNAPSHOT</version>
+        <version>0.6.54-SNAPSHOT</version>
     </parent>
 
     <dependencies>