Merge pull request #22 from cwiki-us-docs/feature/getting_started
Remove unneeded resource files

@@ -1,226 +0,0 @@

<!-- toc -->

## Load batch data using Apache Hadoop

This tutorial shows you how to load data files into Apache Druid using a remote Hadoop cluster.

For this tutorial, we assume you have already completed the earlier [batch ingestion tutorial](tutorial-batch.md) using the `micro-quickstart` single-machine configuration described in the [quickstart](../GettingStarted/chapter-2.md).

### Install Docker

This tutorial requires [Docker](https://docs.docker.com/install/) to be installed on the tutorial machine.

Once Docker is installed, continue with the remaining steps of this tutorial.

### Build the Hadoop Docker image

For this tutorial, we provide a Dockerfile for a Hadoop 2.8.5 cluster, which we will use to run the batch indexing task.

The Dockerfile and related files are located at `quickstart/tutorial/hadoop/docker`.

From the apache-druid-0.17.0 package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.5":

```bash
cd quickstart/tutorial/hadoop/docker
docker build -t druid-hadoop-demo:2.8.5 .
```

This command starts building the Hadoop image. Once the build finishes, you should see the message `Successfully tagged druid-hadoop-demo:2.8.5` printed in the console.
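
If you want to double-check the build before moving on (an optional step, not part of the original instructions), you can list the image locally:

```bash
# Optional sanity check: the newly built image should appear in the local image list
docker images druid-hadoop-demo
```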

### Set up the Hadoop Docker cluster

#### Create temporary shared directory

We need a shared directory for transferring files between the host machine and the Hadoop container.

Create some folders under `/tmp`; we will use them when starting the Hadoop container later:

```bash
mkdir -p /tmp/shared
mkdir -p /tmp/shared/hadoop_xml
```

#### Configure /etc/hosts

Add the following entry to the host machine's `/etc/hosts`:

```
127.0.0.1 druid-hadoop-demo
```
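
Before starting the container, you can confirm the entry took effect with a minimal check (our addition, not an original step):

```bash
# The hostname should now resolve to 127.0.0.1
ping -c 1 druid-hadoop-demo
```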

#### Start the Hadoop container

Once the `/tmp/shared` folder has been created and the `/etc/hosts` entry has been added, run the following command to start the Hadoop container:

```bash
docker run -it -h druid-hadoop-demo --name druid-hadoop-demo -p 2049:2049 -p 2122:2122 -p 8020:8020 -p 8021:8021 -p 8030:8030 -p 8031:8031 -p 8032:8032 -p 8033:8033 -p 8040:8040 -p 8042:8042 -p 8088:8088 -p 8443:8443 -p 9000:9000 -p 10020:10020 -p 19888:19888 -p 34455:34455 -p 49707:49707 -p 50010:50010 -p 50020:50020 -p 50030:50030 -p 50060:50060 -p 50070:50070 -p 50075:50075 -p 50090:50090 -p 51111:51111 -v /tmp/shared:/shared druid-hadoop-demo:2.8.5 /etc/bootstrap.sh -bash
```

Once the container is started, your terminal will attach to a bash shell running inside the container:

```
Starting sshd: [ OK ]
18/07/26 17:27:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [druid-hadoop-demo]
druid-hadoop-demo: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-druid-hadoop-demo.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-druid-hadoop-demo.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-druid-hadoop-demo.out
18/07/26 17:27:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-druid-hadoop-demo.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-druid-hadoop-demo.out
starting historyserver, logging to /usr/local/hadoop/logs/mapred--historyserver-druid-hadoop-demo.out
bash-4.1#
```

The `Unable to load native-hadoop library for your platform... using builtin-java classes where applicable` warning messages can be safely ignored.

##### Accessing the Hadoop container shell

To open another shell session inside the Hadoop container, run:

```bash
docker exec -it druid-hadoop-demo bash
```

#### Copy input data to the Hadoop container

From the root of the apache-druid-0.17.0 package, copy the `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:

```bash
cp quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz /tmp/shared/wikiticker-2015-09-12-sampled.json.gz
```

#### Set up HDFS directories

In the Hadoop container's shell, run the following commands to set up the HDFS directories needed by this tutorial and copy the input data to HDFS:

```bash
cd /usr/local/hadoop/bin
./hdfs dfs -mkdir /druid
./hdfs dfs -mkdir /druid/segments
./hdfs dfs -mkdir /quickstart
./hdfs dfs -chmod 777 /druid
./hdfs dfs -chmod 777 /druid/segments
./hdfs dfs -chmod 777 /quickstart
./hdfs dfs -chmod -R 777 /tmp
./hdfs dfs -chmod -R 777 /user
./hdfs dfs -put /shared/wikiticker-2015-09-12-sampled.json.gz /quickstart/wikiticker-2015-09-12-sampled.json.gz
```

If you encounter namenode-related errors while running these commands, the Hadoop container may not have finished initializing yet; wait a moment and retry them.
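
Once the commands succeed, an optional check (our addition) confirms the sample file landed in HDFS:

```bash
# Still inside the Hadoop container shell
./hdfs dfs -ls /quickstart
```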

### Configure Druid to use Hadoop

Some additional steps are needed to configure the Druid cluster for Hadoop batch indexing.

#### Copy Hadoop configuration to the Druid classpath

From the Hadoop container's shell, run the following command to copy the Hadoop .xml configuration files to the shared folder:

```bash
cp /usr/local/hadoop/etc/hadoop/*.xml /shared/hadoop_xml
```

On the host machine, run the following commands, replacing {PATH_TO_DRUID} with the path to your Druid package:

```bash
mkdir -p {PATH_TO_DRUID}/conf/druid/single-server/micro-quickstart/_common/hadoop-xml
cp /tmp/shared/hadoop_xml/*.xml {PATH_TO_DRUID}/conf/druid/single-server/micro-quickstart/_common/hadoop-xml/
```

#### Update Druid segment and log storage

In your favorite text editor, open `conf/druid/single-server/micro-quickstart/_common/common.runtime.properties` and make the following edits:

**Disable local deep storage and enable HDFS deep storage**

```properties
#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments
```

**Disable local log storage and enable HDFS log storage**

```properties
#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
```

#### Restart the Druid cluster

Once the Hadoop .xml files have been copied to the Druid cluster and the segment and log storage configuration has been updated to use HDFS, the Druid cluster needs to be restarted for the new configuration to take effect.

If the cluster is running, press `CTRL-C` to terminate the `bin/start-micro-quickstart` script, then re-run it to bring the Druid services back up.

### Loading batch data

We've included a sample of Wikipedia edits from September 12, 2015 to get you started.

To load this data into Druid, you can submit an *ingestion task* pointing at the file. We've included a task that loads the `wikiticker-2015-09-12-sampled.json.gz` file included in the archive.

Submit the `wikipedia-index-hadoop.json` task with the following command:

```bash
bin/post-index-task --file quickstart/tutorial/wikipedia-index-hadoop.json --url http://localhost:8081
```
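
`bin/post-index-task` polls the task for you, but you can also watch progress yourself. A minimal sketch, assuming the Overlord on port 8081 exposes Druid's standard task-list endpoint:

```bash
# Assumed endpoint: list recently completed tasks on the Overlord
curl http://localhost:8081/druid/indexer/v1/completeTasks
```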

### Querying your data

After the data load is complete, follow the [query tutorial](./chapter-4.md) to run some example queries on the newly loaded data.

### Cleanup

This tutorial is only meant to be used together with the [query tutorial](./chapter-4.md).

If you wish to go through any of the other tutorials, you will also need to:

* Shut down the cluster and reset the cluster state by removing the `var` directory under the Druid package
* Revert the deep storage and task log storage configuration in `conf/druid/single-server/micro-quickstart/_common/common.runtime.properties` back to the local types
* Restart the cluster

This is necessary because the other ingestion tutorials write to the same "wikipedia" datasource, and the later tutorials expect the cluster to use local deep storage.

Example of the reverted configuration:

```properties
#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
druid.storage.type=local
druid.storage.storageDirectory=var/druid/segments

# For HDFS:
#druid.storage.type=hdfs
#druid.storage.storageDirectory=/druid/segments

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
druid.indexer.logs.type=file
druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs
```

### Further reading

For more information on loading data from Hadoop, see the [Druid Hadoop-based batch ingestion documentation](../DataIngestion/hadoopbased.md).

@@ -1,288 +0,0 @@

<!-- toc -->

## Querying data

This tutorial demonstrates how to query data in Apache Druid, with examples in both Druid SQL and Druid's native query format.

This tutorial assumes that you've already completed one of the ingestion tutorials, since we will be querying the sample Wikipedia edits data:

* [Loading a file](tutorial-batch.md)
* [Loading data from Kafka](./chapter-2.md)
* [Loading data from Hadoop](./chapter-3.md)

Druid queries are sent over HTTP, and the Druid console includes a view for issuing queries to Druid and nicely formatting the results.

### Druid SQL queries

Druid supports SQL queries.

This query retrieves the 10 Wikipedia pages that were edited the most on September 12, 2015:

```sql
SELECT page, COUNT(*) AS Edits
FROM wikipedia
WHERE TIMESTAMP '2015-09-12 00:00:00' <= "__time" AND "__time" < TIMESTAMP '2015-09-13 00:00:00'
GROUP BY page
ORDER BY Edits DESC
LIMIT 10
```

Let's look at a few different ways to issue this query.

#### Query SQL via the console

You can issue the above query from the console:

![](img-3/tutorial-query-01.png)

The console query view provides autocomplete together with inline documentation.

![](img-3/tutorial-query-02.png)

You can also configure extra context flags to be sent with the query from the `...` options menu.

Note that the console will (by default) wrap your SQL queries in a limit so that queries such as `SELECT * FROM wikipedia` can complete; you can turn this behavior off via the `Smart query limit` toggle.

![](img-3/tutorial-query-03.png)

The query view provides contextual actions that can write and modify queries for you.

#### Query SQL via dsql

For convenience, the Druid package includes a SQL command-line client, located at `bin/dsql` in the Druid package root.

Run `bin/dsql` and you should see the following:

```
Welcome to dsql, the command-line client for Druid SQL.
Type "\h" for help.
dsql>
```

To submit the query, paste the SQL into `dsql`:

```
dsql> SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 10;
┌──────────────────────────────────────────────────────────┬───────┐
│ page                                                     │ Edits │
├──────────────────────────────────────────────────────────┼───────┤
│ Wikipedia:Vandalismusmeldung                             │    33 │
│ User:Cyde/List of candidates for speedy deletion/Subpage │    28 │
│ Jeremy Corbyn                                            │    27 │
│ Wikipedia:Administrators' noticeboard/Incidents          │    21 │
│ Flavia Pennetta                                          │    20 │
│ Total Drama Presents: The Ridonculous Race               │    18 │
│ User talk:Dudeperson176123                               │    18 │
│ Wikipédia:Le Bistro/12 septembre 2015                    │    18 │
│ Wikipedia:In the news/Candidates                         │    17 │
│ Wikipedia:Requests for page protection                   │    17 │
└──────────────────────────────────────────────────────────┴───────┘
Retrieved 10 rows in 0.06s.
```

#### Query SQL over HTTP

SQL queries can be submitted as JSON over HTTP.

The tutorial package includes the file `quickstart/tutorial/wikipedia-top-pages-sql.json`, which contains the SQL query shown above; let's submit that query to the Druid Broker:

```bash
curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-top-pages-sql.json http://localhost:8888/druid/v2/sql
```

The following results should be returned:

```json
[
  {
    "page": "Wikipedia:Vandalismusmeldung",
    "Edits": 33
  },
  {
    "page": "User:Cyde/List of candidates for speedy deletion/Subpage",
    "Edits": 28
  },
  {
    "page": "Jeremy Corbyn",
    "Edits": 27
  },
  {
    "page": "Wikipedia:Administrators' noticeboard/Incidents",
    "Edits": 21
  },
  {
    "page": "Flavia Pennetta",
    "Edits": 20
  },
  {
    "page": "Total Drama Presents: The Ridonculous Race",
    "Edits": 18
  },
  {
    "page": "User talk:Dudeperson176123",
    "Edits": 18
  },
  {
    "page": "Wikipédia:Le Bistro/12 septembre 2015",
    "Edits": 18
  },
  {
    "page": "Wikipedia:In the news/Candidates",
    "Edits": 17
  },
  {
    "page": "Wikipedia:Requests for page protection",
    "Edits": 17
  }
]
```
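
If you'd rather not keep the query in a file, the same endpoint also accepts an inline JSON body. A minimal sketch, assuming the standard `{"query": ...}` request shape of the Druid SQL API, using a simplified variant of the query above:

```bash
# Sketch: POST the SQL directly instead of referencing the bundled file
curl -X POST -H 'Content-Type: application/json' http://localhost:8888/druid/v2/sql \
  -d '{"query": "SELECT page, COUNT(*) AS Edits FROM wikipedia GROUP BY page ORDER BY Edits DESC LIMIT 10"}'
```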

#### More Druid SQL examples

Here is a collection of queries to try out:

**Query over time**

```sql
SELECT FLOOR(__time to HOUR) AS HourTime, SUM(deleted) AS LinesDeleted
FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00'
GROUP BY 1
```

![](img-3/tutorial-query-03.png)

**General group-by**

```sql
SELECT channel, page, SUM(added)
FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00'
GROUP BY channel, page
ORDER BY SUM(added) DESC
```

![](img-3/tutorial-query-04.png)

**Select raw data**

```sql
SELECT user, page
FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 02:00:00' AND TIMESTAMP '2015-09-12 03:00:00'
LIMIT 5
```

![](img-3/tutorial-query-05.png)

#### Explain SQL query plans

Druid SQL can explain the query plan for a given query; in the console this functionality is accessible via the `...` button.

![](img-3/tutorial-query-06.png)

If you are querying in other ways, you can get the query plan by prepending `EXPLAIN PLAN FOR ` to a Druid SQL query.

Using one of the examples above:

`EXPLAIN PLAN FOR SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 10;`

```
dsql> EXPLAIN PLAN FOR SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 10;
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ PLAN                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │
├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ DruidQueryRel(query=[{"queryType":"topN","dataSource":{"type":"table","name":"wikipedia"},"virtualColumns":[],"dimension":{"type":"default","dimension":"page","outputName":"d0","outputType":"STRING"},"metric":{"type":"numeric","metric":"a0"},"threshold":10,"intervals":{"type":"intervals","intervals":["2015-09-12T00:00:00.000Z/2015-09-13T00:00:00.001Z"]},"filter":null,"granularity":{"type":"all"},"aggregations":[{"type":"count","name":"a0"}],"postAggregations":[],"context":{},"descending":false}], signature=[{d0:STRING, a0:LONG}]) │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Retrieved 1 row in 0.03s.
```
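
The same should work over HTTP; a hedged sketch that reuses the SQL endpoint and `{"query": ...}` request shape assumed in the earlier section:

```bash
# Sketch: fetch the query plan over HTTP rather than through dsql
curl -X POST -H 'Content-Type: application/json' http://localhost:8888/druid/v2/sql \
  -d '{"query": "EXPLAIN PLAN FOR SELECT page, COUNT(*) AS Edits FROM wikipedia GROUP BY page ORDER BY Edits DESC LIMIT 10"}'
```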

### Native JSON queries

Druid's native query format is expressed in JSON.

#### Native query via the console

You can issue native Druid queries from the console's "Query" view.

Here is a query that retrieves the 10 Wikipedia pages with the most page edits on 2015-09-12:

```json
{
  "queryType" : "topN",
  "dataSource" : "wikipedia",
  "intervals" : ["2015-09-12/2015-09-13"],
  "granularity" : "all",
  "dimension" : "page",
  "metric" : "count",
  "threshold" : 10,
  "aggregations" : [
    {
      "type" : "count",
      "name" : "count"
    }
  ]
}
```

Simply paste it into the console; the editor switches into JSON mode.

![](img-3/tutorial-query-07.png)

#### Native queries over HTTP

We've included a sample native TopN query in the file `quickstart/tutorial/wikipedia-top-pages.json`.

Submit the query to Druid:

```bash
curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-top-pages.json http://localhost:8888/druid/v2?pretty
```

You should see the following query results:

```json
[ {
  "timestamp" : "2015-09-12T00:46:58.771Z",
  "result" : [ {
    "count" : 33,
    "page" : "Wikipedia:Vandalismusmeldung"
  }, {
    "count" : 28,
    "page" : "User:Cyde/List of candidates for speedy deletion/Subpage"
  }, {
    "count" : 27,
    "page" : "Jeremy Corbyn"
  }, {
    "count" : 21,
    "page" : "Wikipedia:Administrators' noticeboard/Incidents"
  }, {
    "count" : 20,
    "page" : "Flavia Pennetta"
  }, {
    "count" : 18,
    "page" : "Total Drama Presents: The Ridonculous Race"
  }, {
    "count" : 18,
    "page" : "User talk:Dudeperson176123"
  }, {
    "count" : 18,
    "page" : "Wikipédia:Le Bistro/12 septembre 2015"
  }, {
    "count" : 17,
    "page" : "Wikipedia:In the news/Candidates"
  }, {
    "count" : 17,
    "page" : "Wikipedia:Requests for page protection"
  } ]
} ]
```

### Further reading

The [Queries documentation](../querying/makeNativeQueries.md) has more information on Druid's native JSON queries.

The [Druid SQL documentation](../querying/druidsql.md) has more information on Druid SQL queries.

@@ -1,60 +0,0 @@

<!-- toc -->

### What is Druid

Apache Druid is a real-time analytics database designed for fast query analytics ("[OLAP](https://en.wikipedia.org/wiki/Online_analytical_processing)" queries) on large data sets. Druid is most often used as a database for use cases where real-time ingest, fast query performance, and high uptime matter. It is also commonly used to power the graphical interfaces of analytical applications, or as a backend for highly concurrent APIs that need fast aggregations. Druid works best with event-oriented data.

Common application areas for Druid include:

* Clickstream analytics (web and mobile analytics)
* Network telemetry analytics (network performance monitoring)
* Server metrics storage
* Supply chain analytics (manufacturing metrics)
* Application performance metrics
* Digital advertising analytics
* Business intelligence / OLAP

Druid's core architecture combines ideas from [data warehouses](https://en.wikipedia.org/wiki/Data_warehouse), [timeseries databases](https://en.wikipedia.org/wiki/Time_series_database), and [search systems](https://en.wikipedia.org/wiki/Search_engine_(computing)). Its key features include:

1. **Columnar storage format.** Druid uses column-oriented storage, meaning a given query only needs to load the exact columns it touches, which greatly improves performance for queries that read just a few columns. In addition, each column is stored in a layout optimized for its particular data type, supporting fast scans and aggregations.
2. **Scalable distributed system.** Druid is typically deployed in clusters of tens to hundreds of servers, and can offer ingest rates of millions of records per second, retention of trillions of records, and query latencies from sub-second to a few seconds.
3. **Massively parallel processing.** Druid can process a query in parallel across the entire cluster.
4. **Realtime or batch ingestion.** Druid can ingest data either in real time (ingested data is immediately available for querying) or in batches.
5. **Self-healing, self-balancing, easy to operate.** As an operator, you scale the cluster by simply adding or removing services; the cluster rebalances itself automatically in the background without any downtime. If a Druid server fails, the system automatically routes around the damage. Druid is designed to run continuously 24/7 without planned downtime for any reason, including configuration changes and software updates.
6. **Cloud-native, fault-tolerant architecture that won't lose data.** Once Druid has ingested your data, a copy is stored safely in [deep storage](Design/../chapter-1.md) (typically cloud storage, HDFS, or a shared filesystem). Your data can be recovered from deep storage even if a Druid service fails. For more limited failures affecting just a few Druid services, replication ensures that queries remain possible while the system recovers.
7. **Indexes for quick filtering.** Druid uses [CONCISE](https://arxiv.org/pdf/1004.0403.pdf) or [Roaring](https://roaringbitmap.org/) compressed bitmap indexes to create indexes that power fast filtering and searching across multiple columns.
8. **Time-based partitioning.** Druid first partitions data by time, and can additionally partition by other fields. This means time-based queries only touch the partitions matching the query's time range, leading to large performance gains for time-oriented data.
9. **Approximate algorithms.** Druid includes algorithms for approximate count-distinct, approximate ranking, and computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often substantially faster than exact computations. For situations where accuracy matters more than speed, Druid also offers exact count-distinct and exact ranking.
10. **Automatic summarization at ingest time.** Druid optionally supports data summarization at ingestion time. This summarization partially pre-aggregates your data, which can lead to big cost savings and performance boosts.

### When should I use Druid

Many companies have deployed Druid for a wide variety of use cases; see the [Powered by Apache Druid](https://druid.apache.org/druid-powered) page for details.

Druid is likely a good choice if your use case fits a few of the following descriptors:

* Insert rates are very high, but updates are less common
* Most of your queries are aggregation and grouping (GroupBy) queries, with some searching and scanning queries as well
* You are targeting query latencies of 100 milliseconds to a few seconds
* Your data has a time component (Druid includes optimizations and design choices specifically related to time)
* You may have more than one table, but each query hits just one big distributed table; queries may additionally hit several smaller lookup tables
* You have high-cardinality data columns (e.g. URLs, user IDs) and need fast counting and ranking over them
* You want to load data from Kafka, HDFS, or object storage such as Amazon S3

Situations where you likely do not want to use Druid include:

* You need low-latency updates of existing records using a primary key. Druid supports streaming inserts, but not streaming updates (updates are done via background batch jobs)
* You are building an offline reporting system where query latency is not important
* You want to do "big" joins (joining one large fact table to another large fact table), and you are okay with these queries taking a long time to complete

@@ -1,163 +0,0 @@

<!-- toc -->

### Quickstart

In this quickstart tutorial, we will download Druid, install it on a single server, and load data into the cluster after completing the initial setup.

Before starting the quickstart, it is helpful to read the [Druid overview](chapter-1.md) and the [ingestion overview](../DataIngestion/index.md), since this tutorial refers to concepts discussed on those pages.

#### Prerequisites

##### Software

* **Java 8 (8u92+)**
* Linux, Mac OS X, or other Unix-like OS (Windows is not supported)

> [!WARNING]
> Druid relies on Java 8 to run. You can use the environment variable `DRUID_JAVA_HOME` or `JAVA_HOME` to specify where to find Java; run the `verify-java` script for more details.

##### Hardware

The Druid package includes several example [single-server configurations](chapter-3.md), along with scripts to start Druid processes using those configurations.

If you're running the services on a small machine such as a laptop, the `micro-quickstart` configuration, sized for a 4CPU/16GB RAM environment, is a good choice.

If you plan to use the single-machine deployment for further evaluation beyond this tutorial, a larger configuration than `micro-quickstart` is recommended.

#### Getting started

[Download](https://www.apache.org/dyn/closer.cgi?path=/druid/0.17.0/apache-druid-0.17.0-bin.tar.gz) the latest 0.17.0 release of the Druid package.

Extract Druid by running the following commands in your terminal:

```bash
tar -xzf apache-druid-0.17.0-bin.tar.gz
cd apache-druid-0.17.0
```

The package contains the following files:

* `LICENSE` and `NOTICE` files
* `bin/*` - scripts, such as start/stop scripts
* `conf/*` - example configurations for single-server and clustered setups
* `extensions/*` - core Druid extensions
* `hadoop-dependencies/*` - Druid Hadoop dependencies
* `lib/*` - core Druid libraries and dependencies
* `quickstart/*` - configuration files, sample data, and other files for the quickstart tutorials

#### Start up Druid services

The following commands assume that you are using the `micro-quickstart` single-machine configuration. If you are using a different configuration, the `bin` directory has a script for each, such as `bin/start-single-server-small`.

From the root of the `apache-druid-0.17.0` package, run:

```bash
./bin/start-micro-quickstart
```

This starts instances of ZooKeeper and the Druid services on the local machine, for example:

```
$ ./bin/start-micro-quickstart
[Fri May  3 11:40:50 2019] Running command[zk], logging to[/apache-druid-0.17.0/var/sv/zk.log]: bin/run-zk conf
[Fri May  3 11:40:50 2019] Running command[coordinator-overlord], logging to[/apache-druid-0.17.0/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart
[Fri May  3 11:40:50 2019] Running command[broker], logging to[/apache-druid-0.17.0/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart
[Fri May  3 11:40:50 2019] Running command[router], logging to[/apache-druid-0.17.0/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart
[Fri May  3 11:40:50 2019] Running command[historical], logging to[/apache-druid-0.17.0/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart
[Fri May  3 11:40:50 2019] Running command[middleManager], logging to[/apache-druid-0.17.0/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart
```

All persistent state, such as the cluster metadata store and segment files for the services, is kept in the `var` directory under the apache-druid-0.17.0 package root. Service logs are located at `var/sv`.

Later on, if you'd like to stop the services, press `CTRL-C` to exit the `bin/start-micro-quickstart` script, which will terminate the Druid processes.

Once the cluster has started, you can navigate to [http://localhost:8888](http://localhost:8888) to access the Druid console. The console is served by the Druid Router process.

![tutorial-quickstart](tutorial-quickstart-01.png)

It takes a few seconds for all the Druid processes to fully start up. If you open the console immediately after starting the services, you may see some errors that can be safely ignored.
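
If you prefer checking readiness from a terminal rather than the console, a small sketch (assuming the Router exposes Druid's usual `/status/health` endpoint on port 8888):

```bash
# Assumed endpoint: returns true once the Router is up
curl http://localhost:8888/status/health
```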

#### Loading data

##### Tutorial dataset

For the following data loading tutorials, we have included a sample data file containing Wikipedia page edit events that occurred on September 12, 2015.

This sample data is located at `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` in the Druid package root. The page edit events are stored as JSON objects in a text file.

The sample data has the following columns, and an example event is shown below:

* added
* channel
* cityName
* comment
* countryIsoCode
* countryName
* deleted
* delta
* isAnonymous
* isMinor
* isNew
* isRobot
* isUnpatrolled
* metroCode
* namespace
* page
* regionIsoCode
* regionName
* user

```json
{
  "timestamp":"2015-09-12T20:03:45.018Z",
  "channel":"#en.wikipedia",
  "namespace":"Main",
  "page":"Spider-Man's powers and equipment",
  "user":"foobar",
  "comment":"/* Artificial web-shooters */",
  "cityName":"New York",
  "regionName":"New York",
  "regionIsoCode":"NY",
  "countryName":"United States",
  "countryIsoCode":"US",
  "isAnonymous":false,
  "isNew":false,
  "isMinor":false,
  "isRobot":false,
  "isUnpatrolled":false,
  "added":99,
  "delta":99,
  "deleted":0
}
```
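
To peek at the raw events yourself (an optional step, not in the original text), you can decompress and pretty-print the first line of the sample file:

```bash
# Optional: inspect the first event in the sample data
gunzip -c quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz | head -n 1 | python -m json.tool
```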

##### Data loading tutorials

The following tutorials demonstrate various methods of loading data into Druid, including both batch and streaming use cases. All tutorials assume that you are using the `micro-quickstart` single-machine configuration mentioned above.

* [Loading a file](../tutorial-batch.md) - this tutorial demonstrates how to perform a batch file load using Druid's native batch ingestion
* [Loading stream data from Kafka](../chapter-2.md) - this tutorial demonstrates how to load streaming data from a Kafka topic
* [Loading data from Hadoop](../chapter-3.md) - this tutorial demonstrates how to perform a batch file load using a remote Hadoop cluster
* [Writing your own ingestion spec](../chapter-10.md) - this tutorial demonstrates how to write a new ingestion spec and use it to load data

##### Resetting cluster state

If you want a clean start after stopping the services, delete the `var` directory and run the `bin/start-micro-quickstart` script again.

Once every service has started, you are ready to load data.

##### Resetting Kafka

If you completed [Tutorial: Loading stream data from Kafka](../chapter-2.md) and wish to reset the cluster state, you should additionally clear out any Kafka state.

Shut down the Kafka broker with `CTRL-C` before stopping ZooKeeper and the Druid services, and then delete the Kafka log directory at `/tmp/kafka-logs`:

```bash
rm -rf /tmp/kafka-logs
```