Merge pull request #13 from cwiki-us-docs/getting_started
Getting started merge completed
This commit is contained in:
commit
d77d57c482
@ -6,12 +6,14 @@

| | |
|---|---|
| Email | [service@ossez.com](mailto:service@ossez.com) |
| QQ or WeChat | 103899765 |
| QQ group | 15186112 |
| Community forum | [https://www.ossez.com/](https://www.ossez.com/) |
| WIKI | [https://www.cwiki.us/](https://www.cwiki.us/) |
| CN blog | [https://www.cwikius.cn/](https://www.cwikius.cn/) |
## Official accounts

We recommend communicating with us through the community forum, and following our accounts on the public platforms below.

### WeChat official account

![](https://cdn.ossez.com/img/cwikius/cwikius-qr-wechat-search-w400.png)
@ -59,7 +59,7 @@ jvm.config runtime.properties

There are four JVM parameters that we set in all of our processes:

1. `-Duser.timezone=UTC` This sets the default timezone of the JVM to UTC. We always set this and do not test with other default timezones, so local timezones might work, but they also might uncover weird and interesting bugs. To issue queries in a non-UTC timezone, see [query granularities](../querying/granularity.md).
2. `-Dfile.encoding=UTF-8` This is similar to the timezone: we test assuming UTF-8. Local encodings might work, but they also might result in weird and interesting bugs.
3. `-Djava.io.tmpdir=<a path>` Various parts of the system that interact with the file system do so via temporary files, and these files can get somewhat large. Many production systems are set up with small (but fast) `/tmp` directories, which can be problematic for Druid, so we recommend pointing the JVM's tmp directory to a location with more headroom. This directory should not be volatile tmpfs. It should also have good read and write speed, so NFS mounts should be strongly avoided.
4. `-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager` This allows log4j2 to handle logs from non-log4j2 components (like Jetty) that use standard Java logging.
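Put together in a `jvm.config` file, these four parameters would look like the following (a sketch; `/path/to/druid/tmp` is a placeholder you would adjust to your environment):

```
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/path/to/druid/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
```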
@ -101,7 +101,7 @@ foo_2015-01-03/2015-01-04_v1_2

Unless all input segments have the same metadata, the output segment can have metadata different from the input segments:

* Dimensions: since Apache Druid supports schema changes, dimensions can differ across segments even if they are part of the same datasource. If the input segments have different dimensions, the output segment essentially includes all dimensions of the input segments. However, even if the input segments have the same set of dimensions, the dimension order or the data types of dimensions can differ. For example, the data type of some dimensions can change from `string` to primitive types, or the order of dimensions can change for better locality. In such cases, the dimensions of more recent segments precede those of older segments in terms of data types and ordering, because more recent segments are more likely to have the desired order and data types. If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
* Roll-up: the output segment is rolled up only when `rollup` is set for all input segments. See [rollup](ingestion.md#rollup) for details. You can check whether a segment has been rolled up or not with a [segment metadata query](../querying/segmentMetadata.md).

#### Compaction IOConfig

The compaction IOConfig requires specifying `inputSpec`, as shown below.
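As a reference, a minimal compaction ioConfig using the `interval` type of `inputSpec` might look like this (a sketch; the interval value is a placeholder, and the exact shape may vary by Druid version):

```json
{
  "type": "compact",
  "inputSpec": {
    "type": "interval",
    "interval": "2020-01-01/2021-01-01"
  }
}
```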
@ -139,7 +139,7 @@ Druid does not support updating individual records by primary key.

#### Using lookups

If you have a dimension whose values need to be updated frequently, try first using [lookups](../querying/lookups.md). A classic use case of lookups is when you have an ID dimension stored in a Druid segment, and want to map the ID dimension to a human-readable string value that may need to be updated periodically.

#### Reingesting data
@ -62,7 +62,7 @@ Druid rejects events outside of a window of time. To check whether events were rejected,

### Queries return empty results

You can use a [segment metadata query](../querying/segmentMetadata.md) to see the dimensions and metrics that have been created for your datasource. Make sure that the name of the aggregator you use in your query matches one of these metrics, and that the query interval you specify matches a valid time range where data exists.

### How can I reindex existing data in Druid with schema changes
@ -203,7 +203,7 @@ s3n://billy-bucket/the/data/is/here/y=2012/m=06/d=01/H=23

| `dataSource` | String | The Druid datasource name from which data is read | yes |
| `intervals` | List | A list of strings representing ISO-8601 intervals | yes |
| `segments` | List | The list of segments from which to read the data. It is determined automatically by default. You can obtain the list of segments to put here by making a POST query to the Coordinator at `/druid/coordinator/v1/metadata/datasources/segments?full`, for example `["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000", "2012-01-05T00:00:00.000/2012-01-07T00:00:00.000"]`. You may want to provide this list manually to ensure that the segments read are exactly the same as they were when the task was submitted; the task will fail if the list provided by the user does not match the state of the database when the task actually runs | no |
| `filter` | JSON | See [Filter](../querying/filters.md) | no |
| `dimensions` | Array of String | The names of the dimension columns to load. By default, the list is constructed from the `parseSpec`. If the `parseSpec` does not have an explicit list of dimensions, all the dimension columns present in the stored data will be read. | no |
| `metrics` | Array of String | The names of the metric columns to load. By default, the list is constructed from the "name"s of all the configured aggregators. | no |
| `ignoreWhenNoSegments` | boolean | Whether to ignore this `ingestionSpec` if no segments were found. The default behavior is to throw an error when no segments were found. | no |
@ -102,7 +102,7 @@ Rollup is controlled by the `rollup` setting in the `granularitySpec`. By default,

For an example of how to configure rollup, and of how the feature will modify your data, check out the [rollup tutorial](../tutorials/chapter-5.md).

#### Maximizing rollup ratio

You can measure the rollup ratio of a datasource by comparing the number of rows in Druid with the number of ingested events. The higher this number, the more benefit you are gaining from rollup. One way to do this is with a [Druid SQL](../querying/druidsql.md) query like:

```sql
SELECT SUM("cnt") / COUNT(*) * 1.0 FROM datasource
```
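For this ratio query to work, ingestion needs to have included a count aggregator, here assumed to be named `cnt`, so that each Druid row records how many raw events it rolled up. Its entry in the `metricsSpec` would look like:

```json
{ "type": "count", "name": "cnt" }
```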
@ -162,7 +162,7 @@ Druid datasources are always partitioned by time into *time chunks*, with each time chunk containing

>
> Note that one way to partition data is, of course, to load it into separate datasources. This is a perfectly viable approach that works very well when the number of datasources does not lead to excessive per-datasource overhead. If you go with this approach, you can ignore this section, since it describes how to set up partitioning within a single datasource.
>
> See [multitenancy considerations](../querying/multitenancy.md) for details on splitting data into separate datasources, and for potential operational considerations.

### Ingestion specs
@ -347,7 +347,7 @@ Druid datasources are always partitioned by time into *time chunks*, with each time chunk containing

|-|-|-|
| dimensions | A list of dimension names or objects. The same column cannot appear in both `dimensions` and `dimensionExclusions`. <br><br> If this is an empty array, Druid treats all non-timestamp, non-metric columns that do not appear in `dimensionExclusions` as string-typed dimension columns; see [Inclusions and exclusions](#Inclusions-and-exclusions). | `[]` |
| dimensionExclusions | The names of the columns to exclude from ingestion. Only names are supported here, not objects. The same column cannot appear in both `dimensions` and `dimensionExclusions`. | `[]` |
| spatialDimensions | An array of [spatial dimensions](../querying/spatialfilter.md) | `[]` |

###### `Dimension objects`

Each dimension in the `dimensions` list can be either a name or an object. Providing a name is equivalent to providing a `string`-typed dimension object with that name; for example, `page` is equivalent to `{"name": "page", "type": "string"}`.
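As an illustration, a `dimensions` list mixing both forms might look like this (column names are hypothetical):

```json
"dimensions": [
  "page",
  { "name": "userId", "type": "long" }
]
```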
@ -378,7 +378,7 @@ Druid interprets the `dimensionsSpec` in two possible ways: *normal* and *schemale

##### `metricsSpec`

The `metricsSpec`, located in `dataSchema` -> `metricsSpec`, is a list of [aggregators](../querying/Aggregations.md) to apply at ingestion time. This is most useful when [rollup](#rollup) is enabled, since it is what configures ingestion-time aggregation.

An example `metricsSpec` is:

```json
@ -389,7 +389,7 @@ Druid interprets the `dimensionsSpec` in two possible ways: *normal* and *schemale

]
```

> [!WARNING]
> In general, when [rollup](#rollup) is disabled, you should have an empty `metricsSpec` (since without rollup, Druid does not do any ingestion-time aggregation, so there is little reason to include an ingestion-time aggregator). However, in some cases it can still make sense to define metrics: for example, if you want to create a complex column as a way of pre-computing part of an [approximate aggregation](../querying/Aggregations.md#近似聚合), this can only be done by defining a metric in a `metricsSpec`.

##### `granularitySpec`
@ -419,7 +419,7 @@ Druid interprets the `dimensionsSpec` in two possible ways: *normal* and *schemale

|-|-|-|
| type | Either `uniform` or `arbitrary`. In most cases you want to use `uniform`. | `uniform` |
| segmentGranularity | The [time chunking](../design/Design.md#数据源和段) granularity of this datasource. Multiple segments can be created per time chunk; for example, when set to `day`, the events of the same day fall into the same time chunk, which can be further partitioned into multiple segments based on other configurations and input size. Any granularity can be provided here. Note that all segments in the same time chunk should have the same segment granularity. <br><br> Ignored if the `type` field is set to `arbitrary`. | `day` |
| queryGranularity | The resolution of timestamp storage within each segment. This must be equal to, or finer than, `segmentGranularity`. This will be the finest granularity that you can query at while still receiving sensible results; but note that you can still query at granularities coarser than this. For example, a value of `minute` means records are stored at minute granularity and can be sensibly queried at any multiple of minutes (including minutes, 5-minutes, hours, etc.). <br><br> Any [granularity](../querying/AggregationGranularity.md) can be provided here. Use `none` to store timestamps as-is, without any truncation. Note that `rollup` is applied even when `queryGranularity` is set to `none`. | `none` |
| rollup | Whether to use ingestion-time [rollup](#rollup). Note that rollup is still effective even when `queryGranularity` is set to `none`: data is rolled up when it has exactly the same timestamp. | `true` |
| intervals | A list of intervals describing which time chunks of segments should be created. If `type` is set to `uniform`, this list is broken up and rounded off based on the `segmentGranularity`. If `type` is set to `arbitrary`, this list is used as-is. <br><br> If null or not provided, batch ingestion tasks generally determine which time chunks to output based on the timestamps found in the input data. <br><br> If specified, batch ingestion tasks may be able to skip a determine-partitions phase, which can result in faster ingestion. Batch ingestion tasks may also request all of their locks up front instead of one by one. Batch ingestion tasks throw away any records with timestamps outside of the specified intervals. <br><br> Ignored by any form of streaming ingestion. | `null` |
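Putting these fields together, a `uniform` granularitySpec might look like this (a sketch; the interval value is a placeholder):

```json
"granularitySpec": {
  "type": "uniform",
  "segmentGranularity": "day",
  "queryGranularity": "none",
  "rollup": true,
  "intervals": ["2020-01-01/2020-02-01"]
}
```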
@ -1095,7 +1095,7 @@ The Druid input source supports reading data directly from existing Druid segments, potentially using a new

| `interval` | A string representing an ISO-8601 interval, which defines the time range to fetch the data over | yes |
| `dimensions` | A list of strings containing the names of dimension columns to select from the Druid datasource. If the list is empty, no dimensions are returned. If null, all dimensions are returned. | no |
| `metrics` | A list of strings containing the names of metric columns to select. If the list is empty, no metrics are returned. If null, all metrics are returned. | no |
| `filter` | See [filters](../querying/filters.html). If specified, only rows that match the filter are returned. | no |

A minimal example of the DruidInputSource spec is shown below:

```json
@ -22,23 +22,23 @@

* Apart from the timestamp column, every column in a Druid datasource is either a dimension or a metric. This follows the [standard naming convention of OLAP data](https://en.wikipedia.org/wiki/Online_analytical_processing#Overview_of_OLAP_systems).
* Typical production datasources have tens to hundreds of columns.
* [Dimension columns](ingestion.md#维度) are stored as-is, so they can be filtered on, grouped by, or aggregated at query time. They are always single strings, arrays of strings, single longs, single doubles, or single floats.
* [Metric columns](ingestion.md#指标) are stored [pre-aggregated](../querying/Aggregations.md), so they can only be aggregated at query time (not filtered or grouped by). They are often stored as numbers (integers or floats), but can also be stored as complex objects like [HyperLogLog sketches or approximate quantile sketches](../querying/Aggregations.md). Metrics can be configured at ingestion time even when rollup is disabled, but are most useful when rollup is enabled.

### Analogies with other design patterns

#### Relational model

(like Hive or PostgreSQL)

Druid datasources are generally equivalent to tables in a relational database. Druid's [lookups feature](../querying/lookups.md) can act similarly to data-warehouse-style dimension tables, but as you'll see below, denormalization is often recommended if you can get away with it.

Common practice in relational data modeling involves [normalization](https://en.wikipedia.org/wiki/Database_normalization): the idea of splitting up data into multiple tables so as to reduce or eliminate data redundancy. For example, in a "sales" table, best-practice relational modeling calls for a "product id" column that is a foreign key into a separate "products" table, which in turn has "product id", "product name", and "product category" columns. This prevents the product name and category from needing to be repeated on different rows of the "sales" table that refer to the same product.

In Druid, on the other hand, it is common to use totally flat datasources that do not require joins at query time. In the "sales" table example, in Druid it would be typical to store "product_id", "product_name", and "product_category" directly as dimensions in a Druid "sales" datasource, without using a separate "products" table. Totally flat schemas substantially increase performance, since the need for joins is eliminated at query time. As an added speed boost, this also allows Druid's query layer to operate directly on compressed dictionary-encoded data. Perhaps counter-intuitively, because Druid uses dictionary encoding to effectively store just a single integer per row for string columns, this does *not* substantially increase storage footprint relative to normalized schemas.

If necessary, Druid datasources can be partially normalized through the use of [lookups](../querying/lookups.md), which are the rough equivalent of dimension tables in a relational database. At query time, you would use Druid's SQL `LOOKUP` function, or native lookup extraction functions, instead of the JOIN keyword as you would in a relational database. Since lookup tables impose an increase in memory footprint and incur more computational overhead at query time, it is only recommended to do this if you need the ability to update a lookup table and have the changes reflected immediately for already-ingested rows in your main table.

Tips for modeling relational data in Druid:

* Druid datasources do not have primary or unique keys, so skip those.
* Denormalize if possible. If you need to update dimension/lookup tables periodically and have those changes reflected in already-ingested data, consider partial normalization with [lookups](../querying/lookups.md).
* If you need to join two large distributed tables with each other, you must do this before loading the data into Druid. Druid does not support query-time joins of two datasources. Lookups do not help here, since a full copy of each lookup table is stored on every Druid server, so they are not a good choice for large tables.
* Consider whether you want to enable [rollup](ingestion.md#rollup) for pre-aggregation, or whether you want to disable rollup and load your existing data as-is. Rollup in Druid is similar to creating a summary table in a relational model.
@ -53,7 +53,7 @@ Druid datasources are generally equivalent to tables in a relational database. Druid's [lookups feature]

* Druid does not think of data points as being part of a "time series". Instead, Druid ingests and aggregates each point separately.
* Create a dimension that indicates the name of the series that a data point belongs to. This dimension is often called "metric" or "name". Do not confuse a dimension named "metric" with the concept of Druid metrics. Place it first in the list of dimensions in your "dimensionsSpec" for best performance (this helps because it improves locality; see [partitioning and sorting](ingestion.md#分区) below for details).
* Create other dimensions for attributes attached to your data points. These are often called "tags" in timeseries database systems.
* Create [Druid metrics](ingestion.md#指标) corresponding to the types of aggregations that you want to be able to query. Typically this includes "sum", "min", and "max" (in one of the long, float, or double flavors). If you want to compute percentiles or quantiles, use Druid's [approximate aggregators](../querying/Aggregations.md).
* Consider enabling [rollup](ingestion.md#rollup), which will allow Druid to potentially combine multiple points into one row in your Druid datasource. This can be useful if you want to store data at a different time granularity than it was originally emitted at. It is also useful if you want to combine timeseries and non-timeseries data in the same datasource.
* If you don't know ahead of time which columns you'll want to ingest, use an empty dimensions list to trigger [automatic detection of dimension columns](#无schema的维度列).
@ -86,7 +86,7 @@ Druid can roll up data as it is ingested to minimize the amount of raw data that needs to be

Sketches reduce memory footprint at query time because they limit the amount of data that needs to be shuffled between servers. For example, in a quantile computation, instead of needing to send all data points to a central location to be sorted and have the quantile computed, Druid only needs to send a sketch of the points. This can reduce data transfer needs to mere kilobytes.

For details about the sketches available in Druid, see the [approximate aggregators page](../querying/Aggregations.md).

If you prefer [video](https://www.youtube.com/watch?v=Hpd3f_MLdXo), take a look at this conference talk discussing Druid sketches!
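For instance, once a quantile sketch is ingested, Druid SQL can query it with functions such as `APPROX_QUANTILE` (a sketch of a query; the column name is hypothetical, and the function requires the corresponding sketch extension to be loaded):

```sql
SELECT APPROX_QUANTILE("latency", 0.99) AS p99_latency
FROM datasource
```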
@ -104,7 +104,7 @@ Druid schemas must always include a primary timestamp. The primary timestamp is used for partitioning

If you have more than one timestamp in your data, you can ingest the others as secondary timestamps. The best way to do this is to ingest them as [long-typed dimensions in milliseconds format](ingestion.md#dimensionsspec). If necessary, you can get them into this format using a [`transformSpec`](ingestion.md#transformspec) and [expressions](../misc/expression.md) like `timestamp_parse`, which returns a millisecond timestamp.

At query time, you can query secondary timestamps with [SQL time functions](../querying/druidsql.md) such as `MILLIS_TO_TIMESTAMP` and `TIME_FLOOR`. If you are using native Druid queries, you can use [expressions](../misc/expression.md) instead.

#### Nested dimensions
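As an illustration, a `transformSpec` that parses a secondary timestamp string into a millisecond long dimension might look like this (a sketch; the field names are hypothetical):

```json
"transformSpec": {
  "transforms": [
    {
      "type": "expression",
      "name": "updateTimeMillis",
      "expression": "timestamp_parse(\"updated_at\")"
    }
  ]
}
```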
@ -22,7 +22,7 @@

Task APIs are available in two main places:

* The [Overlord](../design/Overlord.md) process offers HTTP APIs to submit tasks, cancel tasks, check task status, review task logs and reports, and more. See the [Tasks API reference](../Operations/api.md) for a full list.
* Druid SQL includes a [`sys.tasks`](../querying/druidsql.md#系统Schema) table that holds information about currently running tasks. This table is read-only, and offers a limited (but useful) subset of the full information available through the Overlord APIs.
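For instance, `sys.tasks` can be queried with Druid SQL like any other table (a sketch; `task_id`, `datasource`, and `status` are among its columns):

```sql
SELECT "task_id", "datasource", "status"
FROM sys.tasks
WHERE "status" = 'RUNNING'
```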

### Task reports
@ -0,0 +1,63 @@
|
|||
<!-- toc -->
## Docker

In this section, we'll download the Apache Druid image from [Docker Hub](https://hub.docker.com/r/apache/druid) and set it up on a single machine using [Docker](https://www.docker.com/get-started) and [Docker Compose](https://docs.docker.com/compose/). The cluster will be ready to load data after completing this initial setup.

Before beginning the quickstart, it is helpful to read the [general Druid overview](chapter-1.md) and the [ingestion overview](../DataIngestion/ingestion.md), as the tutorials refer to concepts discussed on those pages. Additionally, familiarity with Docker is recommended.

### Prerequisites

* Docker

### Quickstart

The Druid source code contains an [example docker-compose.yml](https://github.com/apache/druid/blob/master/distribution/docker/docker-compose.yml) which can pull an image from Docker Hub, and is suited to be used as an example environment and to experiment with Docker-based Druid configuration and deployment.

#### Compose file

The example `docker-compose.yml` will create a container for each Druid service, as well as Zookeeper and a PostgreSQL container as the metadata store. Deep storage will be a local directory; by default it is configured as `./storage` relative to the `docker-compose.yml` file, and it will be mounted as `/opt/data` and shared between the Druid containers that require access to deep storage. The Druid containers are configured via an [environment file](https://github.com/apache/druid/blob/master/distribution/docker/environment).

#### Configuration

Configuration of the Druid Docker containers is done via environment variables, which may additionally specify paths to the [standard Druid configuration files](../Configuration/configuration.md).

Special environment variables:

* `JAVA_OPTS` -- set Java options
* `DRUID_LOG4J` -- set the entire `log4j.xml` contents
* `DRUID_LOG_LEVEL` -- override the default log level in the log4j config
* `DRUID_XMX` -- set Java `Xmx`
* `DRUID_XMS` -- set Java `Xms`
* `DRUID_MAXNEWSIZE` -- set Java max new size
* `DRUID_NEWSIZE` -- set Java new size
* `DRUID_MAXDIRECTMEMORYSIZE` -- set Java max direct memory size
* `DRUID_CONFIG_COMMON` -- full path to the Druid "common" properties file
* `DRUID_CONFIG_${service}` -- full path to the Druid service's properties file

In addition to the special environment variables, the script that launches Druid in the container will also use any environment variables starting with the `druid_` prefix as command-line configuration. For example, an environment variable `druid_metadata_storage_type=postgresql` in the Druid container process is translated into `-Ddruid.metadata.storage.type=postgresql`.
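The `druid_` prefix convention above can be sketched as a small function (an illustration of the mapping only, not the actual container startup script, which cannot be reproduced exactly here):

```python
def env_to_jvm_flag(name, value):
    """Translate a druid_-prefixed environment variable into a -D JVM flag,
    mirroring the mapping performed by the container startup script."""
    prefix = "druid_"
    if not name.startswith(prefix):
        raise ValueError(f"{name} is not a druid_ environment variable")
    # Strip the prefix and turn underscores into dots to form the property name.
    prop = "druid." + name[len(prefix):].replace("_", ".")
    return f"-D{prop}={value}"

print(env_to_jvm_flag("druid_metadata_storage_type", "postgresql"))
# -Ddruid.metadata.storage.type=postgresql
```

Note that this simple mapping cannot express property names that themselves contain underscores, which is an inherent limitation of the convention.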
The example Druid `docker-compose.yml` uses a single environment file to specify the complete Druid configuration; however, in production use cases we suggest using either `DRUID_COMMON_CONFIG` and `DRUID_CONFIG_${service}`, or specially tailored, service-specific environment files.

### Launching the cluster

Run `docker-compose up` to launch the cluster with a shell attached, or `docker-compose up -d` to run the cluster in the background. If you are using the example files directly, this command should be run from `distribution/docker/` in your Druid installation directory.

Once the cluster has started, you can navigate to [http://localhost:8888](http://localhost:8888). The [Druid router process](../Design/Router.md), which serves the [Druid console](../Operations/druid-console.md), resides at this address.

![](img/tutorial-quickstart-01.png)

It takes a few seconds for all the Druid processes to fully start up. If you open the console immediately after starting the services, you may see some errors that you can safely ignore.

From here you can follow along with the [standard tutorials](chapter-2.md), or elaborate on your `docker-compose.yml` to add any additional external service dependencies as necessary.
@ -0,0 +1,60 @@
|
|||
<!-- toc -->
### What is Druid

Apache Druid is a real-time analytics database designed for fast analytics ("[OLAP](https://en.wikipedia.org/wiki/Online_analytical_processing)" queries) on large data sets. Druid is most often used as a database to power use cases where real-time ingestion, fast query performance, and high uptime matter. It is commonly used to power the GUIs of analytical applications, or as a backend for highly concurrent APIs that need fast aggregations. Druid works best with event-oriented data.

Common application areas for Druid include:

* Clickstream analytics (web and mobile)
* Network telemetry analytics (network performance monitoring)
* Server metrics storage
* Supply chain analytics (manufacturing metrics)
* Application performance metrics
* Digital advertising analytics
* Business intelligence / OLAP

Druid's core architecture combines ideas from [data warehouses](https://en.wikipedia.org/wiki/Data_warehouse), [timeseries databases](https://en.wikipedia.org/wiki/Time_series_database), and [search systems](https://en.wikipedia.org/wiki/Search_engine_(computing)). Its key features include:

1. **Columnar storage.** Druid uses column-oriented storage, meaning a given query only needs to load the specific columns it touches, which greatly improves performance for queries that hit only a few columns. In addition, each column's storage is optimized for its particular data type, which supports fast scans and aggregations.
2. **Scalable distributed system.** Druid is typically deployed in clusters of tens to hundreds of servers, and can offer ingest rates of millions of records per second, retention of trillions of records, and query latencies of sub-second to a few seconds.
3. **Massively parallel processing.** Druid can process each query in parallel across the entire cluster.
4. **Realtime or batch ingestion.** Druid can ingest data either in real time (ingested data is immediately available for querying) or in batches.
5. **Self-healing, self-balancing, easy to operate.** As an operator, to scale the cluster you only need to add or remove services, and the cluster rebalances itself automatically in the background without any downtime. If any Druid server fails, the system automatically routes around the damage. Druid is designed to run 24/7 with no need for planned downtime for any reason, including configuration changes and software updates.
6. **Cloud-native, fault-tolerant architecture that won't lose data.** Once Druid has ingested your data, a copy is stored safely in [deep storage](Design/../chapter-1.md) (typically cloud storage, HDFS, or a shared filesystem). Your data can be recovered from deep storage even if a Druid service fails. For more limited failures that affect only a few Druid services, replicas ensure that queries are still possible while the system recovers.
7. **Indexes for quick filtering.** Druid uses [CONCISE](https://arxiv.org/pdf/1004.0403.pdf) or [Roaring](https://roaringbitmap.org/) compressed bitmap indexes to create indexes that power fast filtering and searching across multiple columns.
8. **Time-based partitioning.** Druid first partitions data by time, and can additionally partition by other fields. This means time-based queries only access the partitions that match the time range of the query, which leads to significant performance improvements for time-based data.
9. **Approximate algorithms.** Druid includes algorithms for approximate count-distinct, approximate ranking, and computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often substantially faster than exact computations. For situations where accuracy matters more than speed, Druid also offers exact count-distinct and exact ranking.
10. **Automatic summarization at ingest time.** Druid optionally supports data summarization at ingestion time. This summarization partially pre-aggregates your data, which can save significant cost and improve performance.

### When should I use Druid

Many companies have deployed Druid for a wide variety of use cases; see the [Powered by Apache Druid](https://druid.apache.org/druid-powered) page for details.

Druid is likely a good choice if your use case matches a few of the following descriptors:

* Insert rates are very high, but updates are less common
* Most of your queries are aggregation and reporting queries (GroupBy), along with some search and scan queries
* You are targeting query latencies of 100 ms to a few seconds
* Your data has a time component (Druid includes optimizations and design choices specifically related to time)
* You may have more than one table, but each query hits just one big distributed table; queries may additionally hit several smaller lookup tables
* You have high-cardinality data columns (e.g. URLs, user IDs) and need fast counting and ranking over them
* You want to load data from Kafka, HDFS, or object storage such as Amazon S3

Druid is likely a poor choice if your use case matches the following:

* You need low-latency updates of existing records using a primary key. Druid supports streaming inserts, but not streaming updates (updates are done via background batch jobs)
* You are building an offline reporting system where query latency is not very important
* You want to do "big" joins (joining one big fact table to another big fact table), and you are okay with these queries taking a long time to complete
@ -0,0 +1,163 @@
|
|||
<!-- toc -->
### Quickstart

In this quickstart, we will download Druid, set it up on a single server, and load data into the cluster after completing the initial setup.

Before starting, it is helpful to read the [general Druid overview](./chapter-1.md) and the [ingestion overview](../DataIngestion/index.md), as this tutorial refers to concepts discussed on those pages.

#### Prerequisites

##### Software

* **Java 8 (8u92+)**
* Linux, Mac OS X, or other Unix-like OS (Windows is not supported)

> [!WARNING]
> Druid services require Java 8. You can use the environment variable `DRUID_JAVA_HOME` or `JAVA_HOME` to specify where to find Java; for more details, run the `verify-java` script.

##### Hardware

The Druid package includes several example [single-server configurations](./chapter-3.md), along with scripts to start the Druid processes using these configurations.

If you're running on a small machine such as a laptop, the `micro-quickstart` configuration, sized for a 4 CPU / 16GB RAM environment, is a good choice.

If you plan to use the single-machine deployment for further evaluation beyond this tutorial, we recommend a larger configuration than `micro-quickstart`.

#### Getting started

[Download](https://www.apache.org/dyn/closer.cgi?path=/druid/0.17.0/apache-druid-0.17.0-bin.tar.gz) the latest 0.17.0 release of Druid.

Extract Druid by running the following commands in your terminal:

```bash
tar -xzf apache-druid-0.17.0-bin.tar.gz
cd apache-druid-0.17.0
```

In the package, you should find:

* `LICENSE` and `NOTICE` files
* `bin/*` - scripts for starting, stopping, and other tasks
* `conf/*` - example configurations for single-server and clustered setups
* `extensions/*` - core Druid extensions
* `hadoop-dependencies/*` - Druid Hadoop dependencies
* `lib/*` - core Druid libraries and dependencies
* `quickstart/*` - configuration files, sample data, and other files for the quickstart tutorials

#### Start up Druid services

The following commands assume that you are using the `micro-quickstart` single-machine configuration. If you are using a different configuration, the `bin` directory has an equivalent script for each one, such as `bin/start-single-server-small`.

From the root of the `apache-druid-0.17.0` package, run the following command:

```bash
./bin/start-micro-quickstart
```

This brings up instances of Zookeeper and the Druid services, all running on the local machine, for example:

```
$ ./bin/start-micro-quickstart
[Fri May  3 11:40:50 2019] Running command[zk], logging to[/apache-druid-0.17.0/var/sv/zk.log]: bin/run-zk conf
[Fri May  3 11:40:50 2019] Running command[coordinator-overlord], logging to[/apache-druid-0.17.0/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart
[Fri May  3 11:40:50 2019] Running command[broker], logging to[/apache-druid-0.17.0/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart
[Fri May  3 11:40:50 2019] Running command[router], logging to[/apache-druid-0.17.0/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart
[Fri May  3 11:40:50 2019] Running command[historical], logging to[/apache-druid-0.17.0/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart
[Fri May  3 11:40:50 2019] Running command[middleManager], logging to[/apache-druid-0.17.0/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart
```

All persistent state, such as the cluster metadata store and segments for the services, is kept in the `var` directory under the `apache-druid-0.17.0` package root. Logs for the services are located at `var/sv`.

Later on, if you'd like to stop the services, press `CTRL-C` to exit the `bin/start-micro-quickstart` script, which will terminate the Druid processes.

Once the cluster has started, you can navigate to [http://localhost:8888](http://localhost:8888) for the Druid console, which is served by the Druid Router process.

![tutorial-quickstart](img/tutorial-quickstart-01.png)

It takes a few seconds for all the Druid processes to fully start up. If you open the console immediately after starting the services, you may see some errors that you can safely ignore.

#### Loading data

##### Tutorial dataset

For the following data loading tutorials, we have included a sample data file containing Wikipedia page edit events that took place on 2015-09-12.

This sample data is located at `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` in the Druid package root. The page edit events are stored as JSON objects in a text file.

The sample data has the following columns, and an example event is shown below:

* added
* channel
* cityName
* comment
* countryIsoCode
* countryName
* deleted
* delta
* isAnonymous
* isMinor
* isNew
* isRobot
* isUnpatrolled
* metroCode
* namespace
* page
* regionIsoCode
* regionName
* user

```json
{
  "timestamp": "2015-09-12T20:03:45.018Z",
  "channel": "#en.wikipedia",
  "namespace": "Main",
  "page": "Spider-Man's powers and equipment",
  "user": "foobar",
  "comment": "/* Artificial web-shooters */",
  "cityName": "New York",
  "regionName": "New York",
  "regionIsoCode": "NY",
  "countryName": "United States",
  "countryIsoCode": "US",
  "isAnonymous": false,
  "isNew": false,
  "isMinor": false,
  "isRobot": false,
  "isUnpatrolled": false,
  "added": 99,
  "delta": 99,
  "deleted": 0
}
```

##### Data loading tutorials

The following tutorials demonstrate various methods of loading data into Druid, including both batch and streaming use cases. All tutorials assume that you are using the `micro-quickstart` single-machine configuration mentioned above.

* [Loading a file](../Tutorials/chapter-1.md) - this tutorial demonstrates how to perform a batch file load using Druid's native batch ingestion
* [Loading stream data from Kafka](../Tutorials/chapter-2.md) - this tutorial demonstrates how to load streaming data from a Kafka topic
* [Loading a file using Hadoop](../Tutorials/chapter-3.md) - this tutorial demonstrates how to perform a batch file load using a remote Hadoop cluster
* [Writing your own ingestion spec](../Tutorials/chapter-10.md) - this tutorial demonstrates how to write a new ingestion spec and use it to load data

##### Resetting cluster state

If you wish to restart after cleaning out the services, delete the `var` directory and run the `bin/start-micro-quickstart` script again.

Once every service has started, you are ready to load data.

##### Resetting Kafka

If you completed [Tutorial: Loading stream data from Kafka](../Tutorials/chapter-2.md) and wish to reset the cluster state, you should additionally clear out any Kafka state.

Before stopping ZooKeeper and the Druid services, shut down the Kafka broker with `CTRL-C`, then delete the Kafka log directory at `/tmp/kafka-logs`:

```
rm -rf /tmp/kafka-logs
```
@ -0,0 +1,69 @@
|
|||
<!-- toc -->
### Single server deployment

Druid includes a set of reference configurations and launch scripts for single-machine deployments:

* `nano-quickstart`
* `micro-quickstart`
* `small`
* `medium`
* `large`
* `xlarge`

The `micro-quickstart` is sized for small machines such as laptops, and is intended for quick evaluation use cases.

The `nano-quickstart` is an even smaller configuration, targeting a machine with 1 CPU and 4GB of memory. It is meant for limited evaluation in resource-constrained environments, such as small Docker containers.

The other configurations are intended for general-purpose single-machine deployments. They are sized for hardware roughly based on Amazon's i3 series of EC2 instances.

The startup scripts for these example configurations run a single ZK instance along with the Druid services; you can also choose to deploy ZK separately.

The Druid Coordinator and Overlord can be run as a single process by setting the optional configuration `druid.coordinator.asOverlord.enabled=true`, described in the [Coordinator configuration documentation](../Configuration/configuration.md#Coordinator).

While example configurations are provided for very large single machines, at higher scales we recommend running Druid in a clustered deployment, for fault tolerance and reduced resource contention.

#### Single server reference configurations

##### Nano-Quickstart: 1 CPU, 4GB RAM

* Launch command: `bin/start-nano-quickstart`
* Configuration directory: `conf/druid/single-server/nano-quickstart`

##### Micro-Quickstart: 4 CPU, 16GB RAM

* Launch command: `bin/start-micro-quickstart`
* Configuration directory: `conf/druid/single-server/micro-quickstart`

##### Small: 8 CPU, 64GB RAM (~i3.2xlarge)

* Launch command: `bin/start-small`
* Configuration directory: `conf/druid/single-server/small`

##### Medium: 16 CPU, 128GB RAM (~i3.4xlarge)

* Launch command: `bin/start-medium`
* Configuration directory: `conf/druid/single-server/medium`

##### Large: 32 CPU, 256GB RAM (~i3.8xlarge)

* Launch command: `bin/start-large`
* Configuration directory: `conf/druid/single-server/large`

##### X-Large: 64 CPU, 512GB RAM (~i3.16xlarge)

* Launch command: `bin/start-xlarge`
* Configuration directory: `conf/druid/single-server/xlarge`

---
@ -0,0 +1,421 @@
|
|||
<!-- toc -->
## 集群部署
|
||||
|
||||
Apache Druid旨在作为可伸缩的容错集群进行部署。
|
||||
|
||||
在本文档中,我们将安装一个简单的集群,并讨论如何对其进行进一步配置以满足您的需求。
|
||||
|
||||
这个简单的集群将具有以下特点:
|
||||
* 一个Master服务同时起Coordinator和Overlord进程
|
||||
* 两个可伸缩、容错的Data服务来运行Historical和MiddleManager进程
|
||||
* 一个Query服务,运行Druid Broker和Router进程
|
||||
|
||||
在生产中,我们建议根据您的特定容错需求部署多个Master服务器和多个Query服务器,但是您可以使用一台Master服务器和一台Query服务器将服务快速运行起来,然后再添加更多服务器。
|
||||
### 选择硬件
|
||||
#### 首次部署
|
||||
|
||||
如果您现在没有Druid集群,并打算首次以集群模式部署运行Druid,则本指南提供了一个包含预先配置的集群部署示例。
|
||||
|
||||
##### Master服务
|
||||
|
||||
Coordinator进程和Overlord进程负责处理集群的元数据和协调需求,它们可以运行在同一台服务器上。
|
||||
|
||||
在本示例中,我们将在等效于AWS[m5.2xlarge](https://aws.amazon.com/ec2/instance-types/m5/)实例的硬件环境上部署。
|
||||
|
||||
硬件规格为:
|
||||
|
||||
* 8核CPU
|
||||
* 31GB内存
|
||||
|
||||
可以在`conf/druid/cluster/master`下找到适用于此硬件规格的Master示例服务配置。
|
||||
|
||||
##### Data服务
|
||||
|
||||
Historical和MiddleManager可以分配在同一台服务器上运行,以处理集群中的实际数据,这两个服务受益于CPU、内存和固态硬盘。
|
||||
|
||||
在本示例中,我们将在等效于AWS[i3.4xlarge](https://aws.amazon.com/cn/ec2/instance-types/i3/)实例的硬件环境上部署。
|
||||
|
||||
硬件规格为:
|
||||
* 16核CPU
|
||||
* 122GB内存
|
||||
* 2 * 1.9TB 固态硬盘
|
||||
|
||||
可以在`conf/druid/cluster/data`下找到适用于此硬件规格的Data示例服务配置。
|
||||
|
||||
##### Query服务
|
||||
|
||||
Druid Broker服务接收查询请求,并将其转发到集群中的其他部分,同时其可以可选的配置内存缓存。 Broker服务受益于CPU和内存。
|
||||
|
||||
在本示例中,我们将在等效于AWS[m5.2xlarge](https://aws.amazon.com/ec2/instance-types/m5/)实例的硬件环境上部署。
|
||||
|
||||
硬件规格为:
|
||||
|
||||
* 8核CPU
|
||||
* 31GB内存
|
||||
|
||||
您可以考虑将所有的其他开源UI工具或者查询依赖等与Broker服务部署在同一台服务器上。
|
||||
|
||||
可以在`conf/druid/cluster/query`下找到适用于此硬件规格的Query示例服务配置。
|
||||
|
||||
##### 其他硬件配置
|
||||
|
||||
上面的示例集群是从多种确定Druid集群大小的可能方式中选择的一个示例。
|
||||
|
||||
您可以根据自己的特定需求和限制选择较小/较大的硬件或较少/更多的服务器。
|
||||
|
||||
如果您的使用场景具有复杂的扩展要求,则还可以选择不将Druid服务混合部署(例如,独立的Historical Server)。
|
||||
|
||||
[基本集群调整指南](../Operations/basicClusterTuning.md)中的信息可以帮助您进行决策,并可以调整配置大小。
|
||||
|
||||
#### 从单服务器环境迁移部署
|
||||
|
||||
如果您现在已有单服务器部署的环境,例如[单服务器部署示例](./chapter-3.md)中的部署,并且希望迁移到类似规模的集群部署,则以下部分包含一些选择Master/Data/Query服务等效硬件的准则。
|
||||
|
||||
##### Master服务
|
||||
|
||||
Master服务的主要考虑点是可用CPU以及用于Coordinator和Overlord进程的堆内存。
|
||||
|
||||
首先计算出来在单服务器环境下Coordinator和Overlord已分配堆内存之和,然后选择具有足够内存的Master服务硬件,同时还需要考虑到为服务器上其他进程预留一些额外的内存。
|
||||
|
||||
对于CPU,可以选择接近于单服务器环境核数1/4的硬件。
|
||||
|
||||
##### Data服务
|
||||
|
||||
在为集群Data服务选择硬件时,主要考虑可用的CPU和内存,可行时使用SSD存储。
|
||||
|
||||
在集群化部署时,出于容错的考虑,最好是部署多个Data服务。
|
||||
|
||||
在选择Data服务的硬件时,可以假定一个分裂因子`N`,将原来的单服务器环境的CPU和内存除以`N`,然后在新集群中部署`N`个硬件规格缩小的Data服务。
|
||||
|
||||
##### Query服务
|
||||
|
||||
Query服务的硬件选择主要考虑可用的CPU、Broker服务的堆内和堆外内存、Router服务的堆内存。
|
||||
|
||||
首先计算出来在单服务器环境下Broker和Router已分配堆内存之和,然后选择可以覆盖Broker和Router内存的Query服务硬件,同时还需要考虑到为服务器上其他进程预留一些额外的内存。
|
||||
|
||||
对于CPU,可以选择接近于单服务器环境核数1/4的硬件。
|
||||
|
||||
[基本集群调优指南](../Operations/basicClusterTuning.md)包含有关如何计算Broker和Router服务内存使用量的信息。
|
||||
|
||||
### 选择操作系统
|
||||
|
||||
我们建议运行您喜欢的Linux发行版,同时还需要:
|
||||
|
||||
* **Java 8**
|
||||
|
||||
> [!WARNING]
|
||||
> Druid服务运行依赖Java 8,可以使用环境变量`DRUID_JAVA_HOME`或`JAVA_HOME`指定在何处查找Java,有关更多详细信息,请运行`verify-java`脚本。
|
||||
|
||||
### 下载发行版
|
||||
|
||||
首先,下载并解压缩发布安装包。最好首先在单台计算机上执行此操作,因为您将编辑配置,然后将修改后的配置分发到所有服务器上。

[下载](https://www.apache.org/dyn/closer.cgi?path=/druid/0.17.0/apache-druid-0.17.0-bin.tar.gz)Druid最新的0.17.0 release安装包。

在终端中运行以下命令来解压Druid:

```
tar -xzf apache-druid-0.17.0-bin.tar.gz
cd apache-druid-0.17.0
```

安装包中包含以下文件:

* `LICENSE`和`NOTICE`文件
* `bin/*` - 启停等脚本
* `conf/druid/cluster/*` - 用于集群部署的模板配置
* `extensions/*` - Druid核心扩展
* `hadoop-dependencies/*` - Druid Hadoop依赖
* `lib/*` - Druid核心库和依赖
* `quickstart/*` - 与[快速入门](./chapter-2.md)相关的文件

我们主要编辑`conf/druid/cluster/`中的文件。

#### 从单服务器环境迁移部署

在以下各节中,我们将编辑`conf/druid/cluster`下的配置。

如果您已经有一个单服务器部署,请将您的现有配置复制到`conf/druid/cluster`以保留您所做的所有配置更改。

### 配置元数据存储和深度存储

#### 从单服务器环境迁移部署

如果您已经有一个单服务器部署,并且希望在整个迁移过程中保留数据,请在更新元数据/深度存储配置之前,按照[元数据迁移](../Operations/metadataMigration.md)和[深度存储迁移](../Operations/DeepstorageMigration.md)中的说明进行操作。

这些指南针对使用Derby元数据存储和本地深度存储的单服务器部署。如果您已经在单服务器集群中使用了非Derby元数据存储,则可以在新集群中继续使用当前的元数据存储。

这些指南还提供了有关从本地深度存储迁移段的信息。集群部署需要分布式深度存储,例如S3或HDFS。如果单服务器部署已在使用分布式深度存储,则可以在新集群中继续使用当前的深度存储。

#### 元数据存储

在`conf/druid/cluster/_common/common.runtime.properties`中,使用您将用作元数据存储的服务器地址来替换"metadata.storage.*"配置:

* `druid.metadata.storage.connector.connectURI`
* `druid.metadata.storage.connector.host`

在生产部署中,我们建议运行专用的元数据存储,例如具有复制功能的MySQL或PostgreSQL,并与Druid服务器分开部署。

[MySQL扩展](../Configuration/core-ext/mysql.md)和[PostgreSQL扩展](../Configuration/core-ext/postgresql.md)的文档中包含有关扩展配置和初始数据库安装的说明。

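作为参考,下面给出一个假设使用MySQL作为元数据存储时的配置示意(其中主机名`metadata.example.com`、用户名和密码均为示例值,请替换为您自己的实际值):

```
druid.extensions.loadList=["mysql-metadata-storage"]
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://metadata.example.com:3306/druid
druid.metadata.storage.connector.host=metadata.example.com
druid.metadata.storage.connector.port=3306
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
```

注意使用MySQL时,`mysql-metadata-storage`扩展也需要加入到`druid.extensions.loadList`中(与您已有的其他扩展一并列出)。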
#### 深度存储

Druid依赖于分布式文件系统或大对象(blob)存储来存储数据。最常用的深度存储实现是S3(适合于AWS环境)和HDFS(适合于已有Hadoop集群的环境)。

##### S3

在`conf/druid/cluster/_common/common.runtime.properties`中:

* 在`druid.extensions.loadList`配置项中增加"druid-s3-extensions"扩展
* 注释掉配置文件中用于本地存储的"Deep Storage"和"Indexing service logs"配置
* 取消注释(打开)配置文件中"For S3"部分的"Deep Storage"和"Indexing service logs"配置

上述操作之后,您将看到以下的变化:

```
druid.extensions.loadList=["druid-s3-extensions"]

#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

druid.storage.type=s3
druid.storage.bucket=your-bucket
druid.storage.baseKey=druid/segments
druid.s3.accessKey=...
druid.s3.secretKey=...

#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=your-bucket
druid.indexer.logs.s3Prefix=druid/indexing-logs
```

更多信息请参阅[S3扩展](../Configuration/core-ext/s3.md)部分的文档。

##### HDFS

在`conf/druid/cluster/_common/common.runtime.properties`中:

* 在`druid.extensions.loadList`配置项中增加"druid-hdfs-storage"扩展
* 注释掉配置文件中用于本地存储的"Deep Storage"和"Indexing service logs"配置
* 取消注释(打开)配置文件中"For HDFS"部分的"Deep Storage"和"Indexing service logs"配置

上述操作之后,您将看到以下的变化:

```
druid.extensions.loadList=["druid-hdfs-storage"]

#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments

#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
```

同时:

* 需要将Hadoop的配置文件(core-site.xml、hdfs-site.xml、yarn-site.xml、mapred-site.xml)放置在Druid进程的classpath中,可以将它们拷贝到`conf/druid/cluster/_common`目录中

更多信息请参阅[HDFS扩展](../Configuration/core-ext/hdfs.md)部分的文档。

### Hadoop连接配置

如果要从Hadoop集群加载数据,那么此时应对Druid做如下配置:

* 在`conf/druid/cluster/_common/common.runtime.properties`文件中更新`druid.indexer.task.hadoopWorkingPath`配置项,将其更新为您期望用于临时文件存储的HDFS路径,通常配置为`druid.indexer.task.hadoopWorkingPath=/tmp/druid-indexing`
* 需要将Hadoop的配置文件(core-site.xml、hdfs-site.xml、yarn-site.xml、mapred-site.xml)放置在Druid进程的classpath中,可以将它们拷贝到`conf/druid/cluster/_common`目录中

请注意,您无需为了从Hadoop加载数据而使用HDFS深度存储。例如,如果您的集群运行在Amazon Web Services上,即使您使用Hadoop或Elastic MapReduce加载数据,我们也建议使用S3进行深度存储。

更多信息请参阅[基于Hadoop的数据摄取](../DataIngestion/hadoopbased.md)部分的文档。

### Zookeeper连接配置

在生产集群中,我们建议使用专用的ZK集群,并与Druid服务器分开部署。

在`conf/druid/cluster/_common/common.runtime.properties`中,将`druid.zk.service.host`设置为一个连接字符串,其中包含用逗号分隔的host:port对列表,每一对对应一个ZooKeeper服务器(例如"127.0.0.1:4545"或"127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002")。

您也可以选择在Master服务上运行ZK,而不使用专用的ZK集群。如果这样做,我们建议部署3个Master服务,以便构成ZK仲裁(quorum)。

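例如,假设有一个由三台专用节点组成的ZK集群(主机名仅为示意,请替换为实际地址),则该配置项大致如下:

```
druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
druid.zk.paths.base=/druid
```

其中`druid.zk.paths.base`为Druid在ZK中使用的根路径,通常保持默认值`/druid`即可。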
### 配置调整

#### 从单服务器环境迁移部署

##### Master服务

如果您使用的是[单服务器部署示例](./chapter-3.md)中的示例配置,则这些示例已将Coordinator和Overlord进程合并为一个进程。

`conf/druid/cluster/master/coordinator-overlord`下的示例配置同样合并了Coordinator和Overlord进程。

您可以将现有的`coordinator-overlord`配置从单服务器部署复制到`conf/druid/cluster/master/coordinator-overlord`。

##### Data服务

假设我们正在从一个32CPU和256GB内存的单服务器部署环境进行迁移。在老的环境中,Historical和MiddleManager使用了如下的配置:

Historical(单服务器)

```
druid.processing.buffer.sizeBytes=500000000
druid.processing.numMergeBuffers=8
druid.processing.numThreads=31
```

MiddleManager(单服务器)

```
druid.worker.capacity=8
druid.indexer.fork.property.druid.processing.numMergeBuffers=2
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000
druid.indexer.fork.property.druid.processing.numThreads=1
```

在集群部署中,我们选择一个分裂因子(假设为2),则部署2个16CPU和128GB内存的Data服务。各项配置的调整如下:

Historical

* `druid.processing.numThreads`:设置为新硬件的(`CPU核数 - 1`)
* `druid.processing.numMergeBuffers`:将单服务器部署环境的值除以分裂因子
* `druid.processing.buffer.sizeBytes`:该值保持不变

MiddleManager:

* `druid.worker.capacity`:将单服务器部署环境的值除以分裂因子
* `druid.indexer.fork.property.druid.processing.numMergeBuffers`:该值保持不变
* `druid.indexer.fork.property.druid.processing.buffer.sizeBytes`:该值保持不变
* `druid.indexer.fork.property.druid.processing.numThreads`:该值保持不变

调整后的结果配置如下:

新的Historical(2台Data服务器,每台16CPU)

```
druid.processing.buffer.sizeBytes=500000000
druid.processing.numMergeBuffers=4
druid.processing.numThreads=15
```

新的MiddleManager(2台Data服务器)

```
druid.worker.capacity=4
druid.indexer.fork.property.druid.processing.numMergeBuffers=2
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000
druid.indexer.fork.property.druid.processing.numThreads=1
```

##### Query服务

您可以将现有的Broker和Router配置复制到`conf/druid/cluster/query`下的目录中,无需进行任何修改。

#### 首次部署

如果您正在使用如下所述的示例集群规格:

* 1台Master服务器(m5.2xlarge)
* 2台Data服务器(i3.4xlarge)
* 1台Query服务器(m5.2xlarge)

则`conf/druid/cluster`下的配置已经针对此硬件确定,一般情况下您无需做进一步的修改。

如果您选择了其他硬件,则[基本集群调优指南](../Operations/basicClusterTuning.md)可以帮助您调整配置大小。

### 开启端口(如果使用了防火墙)

如果您正在使用防火墙或其他仅允许特定端口流量准入的系统,请在以下端口上允许入站连接:

#### Master服务

* 1527(Derby元数据存储;如果您使用的是MySQL或PostgreSQL等独立的元数据存储则不需要)
* 2181(ZooKeeper;如果使用了独立的ZK集群则不需要)
* 8081(Coordinator)
* 8090(Overlord)

#### Data服务

* 8083(Historical)
* 8091、8100-8199(Druid MiddleManager;如果`druid.worker.capacity`参数设置较大,则需要更多高于8199的端口)

#### Query服务

* 8082(Broker)
* 8088(Router,如果使用了的话)

> [!WARNING]
> 在生产中,我们建议将ZooKeeper和元数据存储部署在其专用硬件上,而不是在Master服务器上。

### 启动Master服务

将Druid发行版和您编辑的配置文件复制到Master服务器上。

如果您一直在本地计算机上编辑配置,则可以使用rsync复制它们:

```
rsync -az apache-druid-0.17.0/ MASTER_SERVER:apache-druid-0.17.0/
```

#### 不带Zookeeper启动

在发行版根目录中,运行以下命令以启动Master服务:

```
bin/start-cluster-master-no-zk-server
```

#### 带Zookeeper启动

如果计划在Master服务器上运行ZK,请首先更新`conf/zoo.cfg`以说明您计划如何运行ZK,然后使用以下命令将ZK与Master服务进程一起启动:

```
bin/start-cluster-master-with-zk-server
```

> [!WARNING]
> 在生产中,我们建议将ZooKeeper运行在其专用硬件上。

### 启动Data服务

将Druid发行版和您编辑的配置文件复制到您的Data服务器。

在发行版根目录中,运行以下命令以启动Data服务:

```
bin/start-cluster-data-server
```

您可以在需要的时候增加更多的Data服务器。

> [!WARNING]
> 对于具有复杂资源分配需求的集群,您可以将Historical和MiddleManager分开部署,并分别对其进行扩容。这也使您能够利用Druid内置的MiddleManager自动伸缩功能。

### 启动Query服务

将Druid发行版和您编辑的配置文件复制到您的Query服务器。

在发行版根目录中,运行以下命令以启动Query服务:

```
bin/start-cluster-query-server
```

您可以根据查询负载添加更多查询服务器。如果增加了查询服务器的数量,请确保按照[基本集群调优指南](../Operations/basicClusterTuning.md)中的说明调整Historical和Task上的连接池。

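作为参考,连接池的调整大体上涉及下面这些配置项(数值仅为示意,实际取值应按照调优指南、根据您的Broker数量计算):

```
# Broker:单个Broker到下游Historical/Task的最大连接数(示意值)
druid.broker.http.numConnections=50
# Historical与Task:HTTP服务线程数,
# 应大于(Broker数量 x druid.broker.http.numConnections)(示意值)
druid.server.http.numThreads=60
```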
### 加载数据

恭喜,您现在拥有了一个Druid集群!下一步是根据使用场景来了解将数据加载到Druid的推荐方法。

了解有关[加载数据](../DataIngestion/index.md)的更多信息。

SUMMARY.md

* [问题FAQ](DataIngestion/faq.md)

* [数据查询]()
* [Druid SQL](querying/druidsql.md)
* [原生查询](querying/makeNativeQueries.md)
* [查询执行](querying/queryexecution.md)
* [一些概念](querying/datasource.md)
* [数据源](querying/datasource.md)
* [Joins](querying/joins.md)
* [Lookups](querying/lookups.md)
* [多值维度](querying/multi-value-dimensions.md)
* [多租户](querying/multitenancy.md)
* [查询缓存](querying/querycached.md)
* [上下文参数](querying/query-context.md)
* [原生查询类型](querying/timeseriesquery.md)
* [Timeseries](querying/timeseriesquery.md)
* [TopN](querying/topn.md)
* [GroupBy](querying/groupby.md)
* [Scan](querying/scan.md)
* [Search](querying/searchquery.md)
* [TimeBoundary](querying/timeboundaryquery.md)
* [SegmentMetadata](querying/segmentMetadata.md)
* [DatasourceMetadata](querying/datasourcemetadataquery.md)
* [原生查询组件](querying/filters.md)
* [过滤](querying/filters.md)
* [粒度](querying/granularity.md)
* [维度](querying/dimensionspec.md)
* [聚合](querying/Aggregations.md)
* [后聚合](querying/postaggregation.md)
* [表达式](querying/expression.md)
* [Having(GroupBy)](querying/having.md)
* [排序和Limit(GroupBy)](querying/limitspec.md)
* [排序(TopN)](querying/topnsorting.md)
* [字符串比较器(String Comparators)](querying/sorting-orders.md)
* [虚拟列(Virtual Columns)](querying/virtual-columns.md)
* [空间过滤器(Spatial Filter)](querying/spatialfilter.md)

* [配置列表]()
* [配置列表](Configuration/configuration.md)

### 查询段

有关查询Historical的详细信息,请参阅 [数据查询](../querying/makeNativeQueries.md)。

可以将Historical配置为记录并报告其所服务的每个查询的指标。

# Druid 系统架构

Druid has a multi-process, distributed architecture that is designed to be cloud-friendly and easy to operate. Each Druid process type can be configured and scaled independently, giving you maximum flexibility over your cluster. This design also provides enhanced fault tolerance: an outage of one component will not immediately affect other components.

## Processes and Servers

Druid has several process types, briefly described below:

* [**Coordinator**](../design/coordinator.md) processes manage data availability on the cluster.
* [**Overlord**](../design/overlord.md) processes control the assignment of data ingestion workloads.
* [**Broker**](../design/broker.md) processes handle queries from external clients.
* [**Router**](../design/router.md) processes are optional processes that can route requests to Brokers, Coordinators, and Overlords.
* [**Historical**](../design/historical.md) processes store queryable data.
* [**MiddleManager**](../design/middlemanager.md) processes are responsible for ingesting data.

Druid processes can be deployed any way you like, but for ease of deployment we suggest organizing them into three server types: Master, Query, and Data.

* **Master**: Runs Coordinator and Overlord processes, manages data availability and ingestion.
* **Query**: Runs Broker and optional Router processes, handles queries from external clients.
* **Data**: Runs Historical and MiddleManager processes, executes ingestion workloads and stores all queryable data.

For more details on process and server organization, please see [Druid Processes and Servers](../design/processes.md).

## External dependencies

In addition to its built-in process types, Druid also has three external dependencies. These are intended to be able to leverage existing infrastructure, where present.

### Deep storage

Shared file storage accessible by every Druid server. In a clustered deployment, this is typically going to be a distributed object store like S3 or HDFS, or a network mounted filesystem. In a single-server deployment, this is typically going to be local disk. Druid uses deep storage to store any data that has been ingested into the system.

Druid uses deep storage only as a backup of your data and as a way to transfer data in the background between Druid processes. To respond to queries, Historical processes do not read from deep storage, but instead read prefetched segments from their local disks before any queries are served. This means that Druid never needs to access deep storage during a query, helping it offer the best query latencies possible. It also means that you must have enough disk space both in deep storage and across your Historical processes for the data you plan to load.

Deep storage is an important part of Druid's elastic, fault-tolerant design. Druid can bootstrap from deep storage even if every single data server is lost and re-provisioned.

For more details, please see the [Deep storage](../dependencies/deep-storage.md) page.

### Metadata storage

The metadata storage holds various shared system metadata such as segment usage information and task information. In a clustered deployment, this is typically going to be a traditional RDBMS like PostgreSQL or MySQL. In a single-server deployment, it is typically going to be a locally-stored Apache Derby database.

For more details, please see the [Metadata storage](../dependencies/metadata-storage.md) page.

### ZooKeeper

Used for internal service discovery, coordination, and leader election.

For more details, please see the [ZooKeeper](../dependencies/zookeeper.md) page.

## Architecture diagram

The following diagram shows how queries and data flow through this architecture, using the suggested Master/Query/Data server organization:

<img src="../assets/druid-architecture.png" width="800"/>

## Storage design

### Datasources and segments

Druid data is stored in "datasources", which are similar to tables in a traditional RDBMS. Each datasource is partitioned by time and, optionally, further partitioned by other attributes. Each time range is called a "chunk" (for example, a single day, if your datasource is partitioned by day). Within a chunk, data is partitioned into one or more ["segments"](../design/segments.md). Each segment is a single file, typically comprising up to a few million rows of data. Since segments are organized into time chunks, it's sometimes helpful to think of segments as living on a timeline like the following:

<img src="../assets/druid-timeline.png" width="800" />

A datasource may have anywhere from just a few segments, up to hundreds of thousands and even millions of segments. Each segment starts life off being created on a MiddleManager, and at that point, is mutable and uncommitted. The segment building process includes the following steps, designed to produce a data file that is compact and supports fast queries:

- Conversion to columnar format
- Indexing with bitmap indexes
- Compression using various algorithms
  - Dictionary encoding with id storage minimization for String columns
  - Bitmap compression for bitmap indexes
  - Type-aware compression for all columns

Periodically, segments are committed and published. At this point, they are written to [deep storage](#deep-storage), become immutable, and move from MiddleManagers to the Historical processes. An entry about the segment is also written to the [metadata store](#metadata-storage). This entry is a self-describing bit of metadata about the segment, including things like the schema of the segment, its size, and its location on deep storage. These entries are what the Coordinator uses to know what data *should* be available on the cluster.

For details on the segment file format, please see [segment files](segments.md).

For details on modeling your data in Druid, see [schema design](../ingestion/schema-design.md).

### Indexing and handoff

_Indexing_ is the mechanism by which new segments are created, and _handoff_ is the mechanism by which they are published and begin being served by Historical processes. The mechanism works like this on the indexing side:

1. An _indexing task_ starts running and building a new segment. It must determine the identifier of the segment before it starts building it. For a task that is appending (like a Kafka task, or an index task in append mode) this will be done by calling an "allocate" API on the Overlord to potentially add a new partition to an existing set of segments. For a task that is overwriting (like a Hadoop task, or an index task _not_ in append mode) this is done by locking an interval and creating a new version number and new set of segments.
2. If the indexing task is a realtime task (like a Kafka task) then the segment is immediately queryable at this point. It's available, but unpublished.
3. When the indexing task has finished reading data for the segment, it pushes it to deep storage and then publishes it by writing a record into the metadata store.
4. If the indexing task is a realtime task, at this point it waits for a Historical process to load the segment. If the indexing task is not a realtime task, it exits immediately.

And like this on the Coordinator / Historical side:

1. The Coordinator polls the metadata store periodically (by default, every 1 minute) for newly published segments.
2. When the Coordinator finds a segment that is published and used, but unavailable, it chooses a Historical process to load that segment and instructs that Historical to do so.
3. The Historical loads the segment and begins serving it.
4. At this point, if the indexing task was waiting for handoff, it will exit.

### Segment identifiers

Segments all have a four-part identifier with the following components:

- Datasource name.
- Time interval (for the time chunk containing the segment; this corresponds to the `segmentGranularity` specified at ingestion time).
- Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started).
- Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous).

For example, this is the identifier for a segment in datasource `clarity-cloud0`, time chunk `2018-05-21T16:00:00.000Z/2018-05-21T17:00:00.000Z`, version `2018-05-21T15:56:09.909Z`, and partition number 1:

```
clarity-cloud0_2018-05-21T16:00:00.000Z_2018-05-21T17:00:00.000Z_2018-05-21T15:56:09.909Z_1
```

Segments with partition number 0 (the first partition in a chunk) omit the partition number, like the following example, which is a segment in the same time chunk as the previous one, but with partition number 0 instead of 1:

```
clarity-cloud0_2018-05-21T16:00:00.000Z_2018-05-21T17:00:00.000Z_2018-05-21T15:56:09.909Z
```

### Segment versioning

You may be wondering what the "version number" described in the previous section is for. Or, you might not be, in which case good for you and you can skip this section!

It's there to support batch-mode overwriting. In Druid, if all you ever do is append data, then there will be just a single version for each time chunk. But when you overwrite data, what happens behind the scenes is that a new set of segments is created with the same datasource, same time interval, but a higher version number. This is a signal to the rest of the Druid system that the older version should be removed from the cluster, and the new version should replace it.

The switch appears to happen instantaneously to a user, because Druid handles this by first loading the new data (but not allowing it to be queried), and then, as soon as the new data is all loaded, switching all new queries to use those new segments. Then it drops the old segments a few minutes later.

### Segment lifecycle

Each segment has a lifecycle that involves the following three major areas:

1. **Metadata store:** Segment metadata (a small JSON payload generally no more than a few KB) is stored in the [metadata store](../dependencies/metadata-storage.md) once a segment is done being constructed. The act of inserting a record for a segment into the metadata store is called _publishing_. These metadata records have a boolean flag named `used`, which controls whether the segment is intended to be queryable or not. Segments created by realtime tasks will be available before they are published, since they are only published when the segment is complete and will not accept any additional rows of data.
2. **Deep storage:** Segment data files are pushed to deep storage once a segment is done being constructed. This happens immediately before publishing metadata to the metadata store.
3. **Availability for querying:** Segments are available for querying on some Druid data server, like a realtime task or a Historical process.

You can inspect the state of currently active segments using the Druid SQL [`sys.segments` table](../querying/sql.md#segments-table). It includes the following flags:

- `is_published`: True if segment metadata has been published to the metadata store and `used` is true.
- `is_available`: True if the segment is currently available for querying, either on a realtime task or Historical process.
- `is_realtime`: True if the segment is _only_ available on realtime tasks. For datasources that use realtime ingestion, this will generally start off `true` and then become `false` as the segment is published and handed off.
- `is_overshadowed`: True if the segment is published (with `used` set to true) and is fully overshadowed by some other published segments. Generally this is a transient state, and segments in this state will soon have their `used` flag automatically set to false.

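As a sketch, a Druid SQL query against `sys.segments` for one datasource (reusing the example datasource name from the earlier section) might look like this:

```sql
SELECT "segment_id", "is_published", "is_available", "is_realtime", "is_overshadowed"
FROM sys.segments
WHERE "datasource" = 'clarity-cloud0'
LIMIT 10
```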
### Availability and consistency

Druid has an architectural separation between ingestion and querying, as described above in [Indexing and handoff](#indexing-and-handoff). This means that when understanding Druid's availability and consistency properties, we must look at each function separately.

On the **ingestion side**, Druid's primary [ingestion methods](../ingestion/index.md#ingestion-methods) are all pull-based and offer transactional guarantees. This means that you are guaranteed that ingestion using these methods will publish in an all-or-nothing manner:

- Supervised "seekable-stream" ingestion methods like [Kafka](../development/extensions-core/kafka-ingestion.md) and [Kinesis](../development/extensions-core/kinesis-ingestion.md). With these methods, Druid commits stream offsets to its [metadata store](#metadata-storage) alongside segment metadata, in the same transaction. Note that ingestion of data that has not yet been published can be rolled back if ingestion tasks fail. In this case, partially-ingested data is discarded, and Druid will resume ingestion from the last committed set of stream offsets. This ensures exactly-once publishing behavior.
- [Hadoop-based batch ingestion](../ingestion/hadoop.md). Each task publishes all segment metadata in a single transaction.
- [Native batch ingestion](../ingestion/native-batch.md). In parallel mode, the supervisor task publishes all segment metadata in a single transaction after the subtasks are finished. In simple (single-task) mode, the single task publishes all segment metadata in a single transaction after it is complete.

Additionally, some ingestion methods offer an _idempotency_ guarantee. This means that repeated executions of the same ingestion will not cause duplicate data to be ingested:

- Supervised "seekable-stream" ingestion methods like [Kafka](../development/extensions-core/kafka-ingestion.md) and [Kinesis](../development/extensions-core/kinesis-ingestion.md) are idempotent due to the fact that stream offsets and segment metadata are stored together and updated in lock-step.
- [Hadoop-based batch ingestion](../ingestion/hadoop.md) is idempotent unless one of your input sources is the same Druid datasource that you are ingesting into. In this case, running the same task twice is non-idempotent, because you are adding to existing data instead of overwriting it.
- [Native batch ingestion](../ingestion/native-batch.md) is idempotent unless [`appendToExisting`](../ingestion/native-batch.md) is true, or one of your input sources is the same Druid datasource that you are ingesting into. In either of these two cases, running the same task twice is non-idempotent, because you are adding to existing data instead of overwriting it.

On the **query side**, the Druid Broker is responsible for ensuring that a consistent set of segments is involved in a given query. It selects the appropriate set of segments to use when the query starts based on what is currently available. This is supported by _atomic replacement_, a feature that ensures that from a user's perspective, queries flip instantaneously from an older set of data to a newer set of data, with no consistency or performance impact. This is used for Hadoop-based batch ingestion, native batch ingestion when `appendToExisting` is false, and compaction.

Note that atomic replacement happens for each time chunk individually. If a batch ingestion task or compaction involves multiple time chunks, then each time chunk will undergo atomic replacement soon after the task finishes, but the replacements will not all happen simultaneously.

Typically, atomic replacement in Druid is based on a _core set_ concept that works in conjunction with segment versions. When a time chunk is overwritten, a new core set of segments is created with a higher version number. The core set must _all_ be available before the Broker will use them instead of the older set. There can also only be one core set per version per time chunk. Druid will also only use a single version at a time per time chunk. Together, these properties provide Druid's atomic replacement guarantees.

Druid also supports an experimental _segment locking_ mode that is activated by setting [`forceTimeChunkLock`](../ingestion/tasks.md#context) to false in the context of an ingestion task. In this case, Druid creates an _atomic update group_ using the existing version for the time chunk, instead of creating a new core set with a new version number. There can be multiple atomic update groups with the same version number per time chunk. Each one replaces a specific set of earlier segments in the same time chunk and with the same version number. Druid will query the latest one that is fully available. This is a more powerful version of the core set concept, because it enables atomically replacing a subset of data for a time chunk, as well as doing atomic replacement and appending simultaneously.

If segments become unavailable due to multiple Historicals going offline simultaneously (beyond your replication factor), then Druid queries will include only the segments that are still available. In the background, Druid will reload these unavailable segments on other Historicals as quickly as possible, at which point they will be included in queries again.

## Query processing

Queries first enter the [Broker](../design/broker.md), which identifies which segments have data that may pertain to the query. The list of segments is always pruned by time, and may also be pruned by other attributes, depending on how your datasource is partitioned. The Broker then identifies which [Historicals](../design/historical.md) and [MiddleManagers](../design/middlemanager.md) are serving those segments and sends a rewritten subquery to each of those processes. The Historical/MiddleManager processes take in the queries, process them, and return results. The Broker receives the results and merges them together to get the final answer, which it returns to the original caller.

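To make this concrete, a native Druid query arrives at the Broker as a JSON document along these lines (the datasource and field names here are illustrative):

```json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "intervals": ["2015-09-12/2015-09-13"],
  "granularity": "hour",
  "aggregations": [
    { "type": "longSum", "name": "edits", "fieldName": "count" }
  ]
}
```

The Broker uses the `intervals` field to prune the segment list by time before fanning the subqueries out.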
Broker pruning is an important way that Druid limits the amount of data that must be scanned for each query, but it is not the only way. For filters at a more granular level than what the Broker can use for pruning, indexing structures inside each segment allow Druid to figure out which (if any) rows match the filter set before looking at any row of data. Once Druid knows which rows match a particular query, it only accesses the specific columns it needs for that query. Within those columns, Druid can skip from row to row, avoiding reading data that doesn't match the query filter.

So Druid uses three different techniques to maximize query performance:

- Pruning which segments are accessed for each query.
- Within each segment, using indexes to identify which rows must be accessed.
- Within each segment, only reading the specific rows and columns that are relevant to a particular query.

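The first technique, time pruning, can be sketched with a toy model. The segment names follow the `datasource_interval` convention used earlier in this document; the code is an illustration of the idea, not Druid's actual implementation:

```python
from datetime import datetime

# Toy model of Broker-side pruning: each segment covers a [start, end) time chunk.
segments = [
    ("foo_2015-01-01/2015-01-02", datetime(2015, 1, 1), datetime(2015, 1, 2)),
    ("foo_2015-01-02/2015-01-03", datetime(2015, 1, 2), datetime(2015, 1, 3)),
    ("foo_2015-01-03/2015-01-04", datetime(2015, 1, 3), datetime(2015, 1, 4)),
]

def prune_by_time(segments, query_start, query_end):
    """Keep only segments whose time chunk overlaps the query interval."""
    return [name for name, start, end in segments
            if start < query_end and end > query_start]

# A query over Jan 2 - Jan 3 only touches the middle segment.
hits = prune_by_time(segments, datetime(2015, 1, 2), datetime(2015, 1, 3))
```

Only the segments returned by the pruning step are ever consulted; the per-segment indexes then narrow things further.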
For more details about how Druid executes queries, refer to the [Query execution](../querying/query-execution.md) documentation.

167
design/index.md
@@ -1,100 +1,93 @@

---
id: index
title: "Introduction to Apache Druid"
---
# Introduction to Druid

This page gives a brief introduction to and overview of Druid.

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->

## What is Druid?

Apache Druid is a real-time analytics database designed for fast slice-and-dice analytics ("[OLAP](https://en.wikipedia.org/wiki/Online_analytical_processing)" queries) on large data sets. Druid is most often used as a database for powering use cases where real-time ingest, fast query performance, and high uptime are important. As such, Druid is commonly used for powering the GUIs of analytical applications, or as a backend for highly concurrent APIs that need fast aggregations. Druid works best with event-oriented data.

Common application areas for Druid include:

- Clickstream analytics (web and mobile analytics)
- Network telemetry analytics (network performance monitoring)
- Server metrics storage
- Supply chain analytics (manufacturing metrics)
- Application performance metrics
- Digital marketing/advertising analytics
- Business intelligence / OLAP

Druid's core architecture combines ideas from data warehouses, timeseries databases, and logsearch systems. If you are not familiar with these kinds of systems, it is worth searching for their definitions and the capabilities they provide. Some of Druid's key features are:

1. **Columnar storage format.** Druid uses column-oriented storage, meaning it only needs to load the exact columns needed for a particular query. This gives a huge speed boost to queries that only hit a few columns. In addition, each column is stored optimized for its particular data type, which supports fast scans and aggregations.
2. **Scalable distributed system.** Druid is typically deployed in clusters of tens to hundreds of servers, and can offer ingest rates of millions of records/sec, retention of trillions of records, and query latencies of sub-second to a few seconds.
3. **Massively parallel processing.** Druid can process a query in parallel across the entire cluster.
4. **Realtime or batch ingestion.** Druid can ingest data either in real time (ingested data is immediately available for querying) or in batches.
5. **Self-healing, self-balancing, easy to operate.** As an operator, to scale the cluster out or in, simply add or remove servers and the cluster will rebalance itself automatically, in the background, without any downtime. If any Druid servers fail, the system will automatically route around the damage until those servers can be replaced. Druid is designed to run 24/7 with no need for planned downtimes for any reason, including configuration changes and software updates.
6. **Cloud-native, fault-tolerant architecture that won't lose data.** Once Druid has ingested your data, a copy is stored safely in [deep storage](architecture.html#deep-storage) (typically cloud storage, HDFS, or a shared filesystem). Your data can be recovered from deep storage even if every single Druid server fails. For more limited failures affecting just a few Druid servers, replication ensures that queries are still possible while the system recovers.
7. **Indexes for quick filtering.** Druid uses [Roaring](https://roaringbitmap.org/) or [CONCISE](https://arxiv.org/pdf/1004.0403) compressed bitmap indexes to create indexes that power fast filtering and searching across multiple columns.
8. **Time-based partitioning.** Druid first partitions data by time, and can additionally partition based on other fields. This means time-based queries will only access the partitions that match the time range of the query. This leads to significant performance improvements for time-based data.
9. **Approximate algorithms.** Druid includes algorithms for approximate count-distinct, approximate ranking, and computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often substantially faster than exact computations. For situations where accuracy is more important than speed, Druid also offers exact count-distinct and exact ranking.
10. **Automatic summarization at ingest time.** Druid optionally supports data summarization at ingestion time. This summarization partially pre-aggregates your data, and can lead to big cost savings and performance boosts.

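Feature 9 can be seen directly in Druid SQL, which offers both approximate and exact forms of distinct counting (the `wikipedia` datasource and `user` column below are illustrative):

```sql
-- Approximate distinct count: bounded memory, usually much faster.
SELECT APPROX_COUNT_DISTINCT("user") AS approx_users FROM wikipedia;

-- Exact distinct count: heavier, for when accuracy matters more than speed.
SELECT COUNT(DISTINCT "user") AS exact_users FROM wikipedia;
```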
## When should I use Druid?

Druid is used by many companies of various sizes for many different use cases. Check out the [Powered by Apache Druid](/druid-powered) page to see who is using Druid.

Druid is likely a good choice if your use case fits a few of the following descriptors:

- Insert rates are very high, but updates are less common.
- Most of your queries are aggregation and reporting queries ("group by" queries). You may also have searching and scanning queries.
- You are targeting query latencies of 100ms to a few seconds.
- Your data has a time component (Druid includes optimizations and design choices specifically related to time).
- You may have more than one table, but each query hits just one big distributed table. Queries may potentially hit more than one smaller "lookup" table.
- You have high cardinality data columns (e.g. URLs, user IDs) and need fast counting and ranking over them.
- You want to load data from Kafka, HDFS, flat files, or object storage like Amazon S3.

Situations where you would likely _not_ want to use Druid include:

- You need low-latency updates of _existing_ records using a primary key. Druid supports streaming inserts, but not streaming updates (updates are done using background batch jobs).
- You are building an offline reporting system where query latency is not very important.
- You want to do "big" joins (joining one big fact table to another big fact table) and you are okay with these queries taking a long time to complete.

### High cardinality

In SQL, the cardinality of a column is the number of distinct values it contains.

High cardinality means that the values in a column are essentially unique, or repeat very rarely.

Identification numbers, email addresses, and usernames are common examples of high-cardinality data. For instance, the USER_ID column of a USERS table typically holds values from 1 to n: each time a new user is inserted into USERS, a new record is created and USER_ID is assigned a fresh value to identify it. Because every value inserted into USER_ID is unique, this column is considered high-cardinality data.

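The USER_ID example can be sketched directly (the rows and column names are hypothetical, and `cardinality` here is just a distinct count, not a Druid API):

```python
# Toy illustration of column cardinality: the number of distinct values.
rows = [
    {"user_id": 1, "country": "US"},
    {"user_id": 2, "country": "US"},
    {"user_id": 3, "country": "CN"},
    {"user_id": 4, "country": "US"},
]

def cardinality(rows, column):
    """Count the distinct values appearing in a column."""
    return len({row[column] for row in rows})

high = cardinality(rows, "user_id")   # every value unique -> high cardinality
low = cardinality(rows, "country")    # few distinct values -> low cardinality
```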
### Fact Table

The counterpart of a fact table is a dimension table.

Both are data-warehouse concepts describing two kinds of tables. In terms of how data is stored there is essentially no difference: both are just tables. The difference lies in the data and its purpose. A fact table stores fact data, i.e. measurable and usually additive values such as quantities and amounts. A dimension table stores descriptive data that qualifies the rows of the fact table, such as region, sales representative, or product.

@@ -132,7 +132,7 @@ Druid's architecture requires a primary time column (internally stored as a column named __time

![](img-2/tutorial-kafka-data-loader-12.png)

- See the [query tutorial](../Querying/makeNativeQueries.md) to run some example queries against the newly loaded data.
+ See the [query tutorial](../querying/makeNativeQueries.md) to run some example queries against the newly loaded data.

#### Submitting a supervisor via the console

@@ -284,5 +284,5 @@ curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipe

### Further reading

- The [query documentation](../Querying/makeNativeQueries.md) has more information on Druid's native JSON queries
- The [Druid SQL documentation](../Querying/druidsql.md) has more information on Druid SQL queries
+ The [query documentation](../querying/makeNativeQueries.md) has more information on Druid's native JSON queries
+ The [Druid SQL documentation](../querying/druidsql.md) has more information on Druid SQL queries

@@ -91,7 +91,7 @@ Druid's architecture requires a primary time column (internally stored as a column named __time

Run the query `SELECT * FROM wikipedia` to see the full results.

- See the [query tutorial](../Querying/makeNativeQueries.md) to run some example queries against the newly loaded data.
+ See the [query tutorial](../querying/makeNativeQueries.md) to run some example queries against the newly loaded data.

### Loading data with a spec (via the console)
