Merge pull request #16 from cwiki-us-docs/getting_started

Getting started
This commit is contained in:
YuCheng Hu 2021-07-26 19:10:51 -04:00 committed by GitHub
commit a948fe99d2
6 changed files with 57 additions and 68 deletions

View File

@ -34,6 +34,5 @@
</script>
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
</body>
</html>

View File

@ -148,14 +148,14 @@ First, download and unpack the release archive. It's best to do this on a single
since you will be editing the configurations and then copying the modified distribution out to all
of your servers.
[Download](https://www.apache.org/dyn/closer.cgi?path=/druid/0.21.1/apache-druid-0.21.1-bin.tar.gz)
the apache-druid-0.21.1 release.
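Apache publishes a SHA-512 checksum alongside each release artifact. If you want to verify the archive before unpacking it, something like the following should work (a sketch: the URL follows the usual Apache download layout, and older releases move to `archive.apache.org` over time):
```bash
# Fetch the published SHA-512 checksum, then compute the archive's digest
# and compare the two values.
curl -O https://downloads.apache.org/druid/0.21.1/apache-druid-0.21.1-bin.tar.gz.sha512
cat apache-druid-0.21.1-bin.tar.gz.sha512
sha512sum apache-druid-0.21.1-bin.tar.gz
```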
Extract Druid by running the following commands in your terminal:
```bash
tar -xzf apache-druid-0.21.1-bin.tar.gz
cd apache-druid-0.21.1
```
In the package, you should find:
@ -419,7 +419,7 @@ Copy the Druid distribution and your edited configurations to your Master server
If you have been editing the configurations on your local machine, you can use *rsync* to copy them:
```bash
rsync -az apache-druid-0.21.1/ MASTER_SERVER:apache-druid-0.21.1/
```
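If you are copying the distribution to several servers (the Data and Query servers as well as the Master), a small loop saves retyping. A sketch; the hostnames are placeholders for your own machines:
```bash
# Push the prepared distribution to each server in turn.
for server in MASTER_SERVER DATA_SERVER QUERY_SERVER; do
  rsync -az apache-druid-0.21.1/ "$server":apache-druid-0.21.1/
done
```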
### No Zookeeper on Master

View File

@ -3,6 +3,10 @@
This quickstart introduces you to Apache Druid and some of its basic features.
After completing the steps given here, you will be able to install and run Druid, and load data into the running Druid instance using its bundled batch ingestion feature.
!> This translation is based on Druid apache-druid-0.21.1, so the download links on this page may break as new versions are released.
Please search for and download updated releases according to the official release schedule.
Before starting on the steps below, please read the [Druid overview](../design/index.md) and the [ingestion overview](../ingestion/index.md),
since the steps that follow refer to concepts and definitions introduced in those two pages.
@ -33,17 +37,16 @@ Druid configuration properties range from the _Nano-Quickstart_ configuration (1 CPU, 4GB RAM)
For example, if you browse files through the Druid console, the operating system only displays the files that this user can access, in other words the files the user has permission to view.
Generally speaking, we do not want Druid to run with root privileges. For your installation, therefore, consider creating a dedicated operating-system user that is used only to run the Druid instance.
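On most Linux distributions this can be done with `useradd`. A minimal sketch, assuming the distribution is unpacked under `/opt` (the user name and path are placeholders):
```bash
# Create a login-less system user and hand the Druid installation to it.
sudo useradd --system --shell /sbin/nologin druid
sudo chown -R druid:druid /opt/apache-druid-0.21.1
```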
## Step 1: Install Druid
Once you have confirmed that your system satisfies everything in the [requirements](#安装要求), follow these steps:
1. Download
Download the [apache-druid-0.21.1 release](https://www.apache.org/dyn/closer.cgi?path=/druid/0.21.1/apache-druid-0.21.1-bin.tar.gz).
2. In your terminal, extract the downloaded archive into the current directory and change into it, or move the extracted directory to wherever you want to deploy Druid:
```bash
tar -xzf apache-druid-0.21.1-bin.tar.gz
cd apache-druid-0.21.1
```
In the extracted directory you will see the `LICENSE` and `NOTICE` files, along with subdirectories containing executables, configuration files, sample data, and other content.
@ -61,7 +64,7 @@ Druid configuration properties range from the _Nano-Quickstart_ configuration (1 CPU, 4GB RAM)
On a single machine, you can use the `micro-quickstart` configuration to start all of Druid's services.
From the root of the apache-druid-0.21.1 package, run the following command:
```bash
./bin/start-micro-quickstart
@ -71,16 +74,16 @@ Druid configuration properties range from the _Nano-Quickstart_ configuration (1 CPU, 4GB RAM)
```bash
$ ./bin/start-micro-quickstart
[Fri May 3 11:40:50 2019] Running command[zk], logging to[/apache-druid-0.21.1/var/sv/zk.log]: bin/run-zk conf
[Fri May 3 11:40:50 2019] Running command[coordinator-overlord], logging to[/apache-druid-0.21.1/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart
[Fri May 3 11:40:50 2019] Running command[broker], logging to[/apache-druid-0.21.1/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart
[Fri May 3 11:40:50 2019] Running command[router], logging to[/apache-druid-0.21.1/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart
[Fri May 3 11:40:50 2019] Running command[historical], logging to[/apache-druid-0.21.1/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart
[Fri May 3 11:40:50 2019] Running command[middleManager], logging to[/apache-druid-0.21.1/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart
```
As the output above indicates, the cluster metadata store and the segments for the services are both kept in the `var` directory under the Druid root directory.
This Druid root directory is apache-druid-0.21.1, in other words the directory you originally extracted and changed into.
All services write their logs into the `var/sv` directory, and the script also relays each service's console output in the format shown above.
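For example, to watch a single service come up, you can follow its log from the Druid root directory; any of the files under `var/sv` works the same way:
```bash
# Follow the broker's log during startup; press Ctrl+C to stop watching.
tail -f var/sv/broker.log
```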
@ -95,7 +98,7 @@ $ ./bin/start-micro-quickstart
Once the Druid processes are fully started, open the [Druid console](../operations/druid-console.md) at [http://localhost:8888](http://localhost:8888); 8888 is the default port.
![Druid console](../assets/tutorial-quickstart-01.png ':size=690')
It may take a few more seconds for all of the Druid services to finish starting up, including the [Druid router](../design/router.md).
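If you would rather check readiness from the command line than refresh the browser, Druid services expose a `/status/health` endpoint; polling the router is a reasonable proxy for the quickstart cluster as a whole:
```bash
# Returns "true" once the router is up and answering requests.
curl http://localhost:8888/status/health
```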
@ -105,70 +108,57 @@ $ ./bin/start-micro-quickstart
## Step 4: Load data
Ingestion specs define the schema of the data Druid reads and stores. You can write ingestion specs by hand or use the _data loader_,
as we'll do here to perform batch file loading with Druid's native batch ingestion.

The Druid distribution bundles sample data we can use. The sample data located at `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz`
in the Druid root directory represents Wikipedia page edits for a given day (September 12, 2015).

1. Click **Load data** in the Druid console header (![Load data](../assets/tutorial-batch-data-loader-00.png)).

![Data loader init](../assets/tutorial-batch-data-loader-01.png ':size=690')

2. Select the **Local disk** tile and then click **Connect data**.

3. Enter the following values:

- **Base directory**: `quickstart/tutorial/`

- **File filter**: `wikiticker-2015-09-12-sampled.json.gz`

![Data location](../assets/tutorial-batch-data-loader-015.png ':size=690')

Entering the base directory and [wildcard file filter](https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/filefilter/WildcardFileFilter.html) separately, as afforded by the UI, allows you to specify multiple files for ingestion at once.

4. Click **Apply**.

The data loader displays the raw data, giving you a chance to verify that the data appears as expected.

![Data loader sample](../assets/tutorial-batch-data-loader-02.png ':size=690')

Notice that your position in the sequence of steps to load data, **Connect** in our case, appears at the top of the console, as shown below.
You can click other steps to move forward or backward in the sequence at any time.

![Load data](../assets/tutorial-batch-data-loader-12.png ':size=690')

5. Click **Next: Parse data**.

The data loader tries to determine the parser appropriate for the data format automatically. In this case
it identifies the data format as `json`, as shown in the **Input format** field at the bottom right.

![Data loader parse data](../assets/tutorial-batch-data-loader-03.png ':size=690')

Feel free to select other **Input format** options to get a sense of their configuration settings
and how Druid parses other types of data.

6. With the JSON parser selected, click **Next: Parse time**. The **Parse time** settings are where you view and adjust the
primary timestamp column for the data.

![Data loader parse time](../assets/tutorial-batch-data-loader-04.png ':size=690')

Druid requires data to have a primary timestamp column (internally stored in a column called `__time`).
If you do not have a timestamp in your data, select `Constant value`. In our example, the data loader
determines that the `time` column is the only candidate that can be used as the primary time column.

7. Click **Next: Transform**, **Next: Filter**, and then **Next: Configure schema**, skipping a few steps.
You do not need to adjust transformation or filtering settings, as applying ingestion-time transforms and
filters is out of scope for this tutorial.

8. The Configure schema settings are where you configure what [dimensions](../ingestion/index.md#dimensions)
and [metrics](../ingestion/index.md#metrics) are ingested. The outcome of this configuration represents exactly how the
data will appear in Druid after ingestion.

Since our dataset is very small, you can turn off [rollup](../ingestion/index.md#rollup)
by unsetting the **Rollup** switch and confirming the change when prompted.
@ -176,7 +166,7 @@ in the Druid root directory represents Wikipedia page edits for a given day.
![Data loader schema](../assets/tutorial-batch-data-loader-05.png "Data loader schema")
10. Click **Next: Partition** to configure how the data will be split into segments. In this case, choose `DAY` as the **Segment granularity**.
![Data loader partition](../assets/tutorial-batch-data-loader-06.png "Data loader partition")
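The console submits the finished spec for you at the end of this sequence, but the same spec can also be posted directly to the overlord's task API. A minimal sketch, assuming the spec JSON from the loader's final review step has been saved to a hypothetical file `my-ingestion-spec.json`:
```bash
# Submit a native batch ingestion spec to the overlord
# (the coordinator-overlord listens on port 8081 in the micro-quickstart).
curl -X POST -H 'Content-Type: application/json' \
  -d @my-ingestion-spec.json \
  http://localhost:8081/druid/indexer/v1/task
```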

View File

@ -42,7 +42,7 @@ For this tutorial, we've provided a Dockerfile for a Hadoop 2.8.5 cluster, which
This Dockerfile and related files are located at `quickstart/tutorial/hadoop/docker`.
From the apache-druid-0.21.1 package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.5":
```bash
cd quickstart/tutorial/hadoop/docker
@ -110,7 +110,7 @@ docker exec -it druid-hadoop-demo bash
### Copy input data to the Hadoop container
From the apache-druid-0.21.1 package root on the host, copy the `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:
```bash
cp quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz /tmp/shared/wikiticker-2015-09-12-sampled.json.gz
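# A quick listing confirms the sample file is visible inside the container.
# This assumes the /tmp/shared:/shared volume mapping used when the
# druid-hadoop-demo container was started earlier in this tutorial.
docker exec -it druid-hadoop-demo ls /shared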

View File

@ -575,7 +575,7 @@ We've finished defining the ingestion spec; it should now look like the following:
## Submit the task and query the data
From the apache-druid-0.21.1 package root, run the following command:
```bash
bin/post-index-task --file quickstart/ingestion-tutorial-index.json --url http://localhost:8081
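# Once the task succeeds, the data can be spot-checked through the router's
# SQL endpoint. A sketch: "ingestion-tutorial" is assumed to be the
# datasource name defined in the spec above.
curl -X POST -H 'Content-Type: application/json' \
  -d '{"query":"SELECT COUNT(*) AS cnt FROM \"ingestion-tutorial\""}' \
  http://localhost:8888/druid/v2/sql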

View File

@ -111,7 +111,7 @@ We will see how these definitions are used after we load this data.
## Load the example data
From the apache-druid-0.21.1 package root, run the following command:
```bash
bin/post-index-task --file quickstart/tutorial/rollup-index.json --url http://localhost:8081
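# Once the task succeeds, a quick SQL query shows the effect of rollup.
# A sketch: "rollup-tutorial" is assumed to be the datasource name defined
# in quickstart/tutorial/rollup-index.json.
curl -X POST -H 'Content-Type: application/json' \
  -d '{"query":"SELECT * FROM \"rollup-tutorial\""}' \
  http://localhost:8888/druid/v2/sql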