---
id: segment-optimization
title: "Segment size optimization"
---

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~   http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->

In Apache Druid, it's important to optimize the segment size because:

1. Druid stores data in segments. If you're using the [best-effort roll-up](../ingestion/rollup.md) mode,
   increasing the segment size might allow further aggregation, which reduces the dataSource size.
2. When a query is submitted, it is distributed to all Historicals and realtime tasks
   that hold input segments for the query. Each process and task picks a thread from its own processing thread pool
   to process a single segment. If segments are too large, data might not be well distributed between data
   servers, decreasing the degree of parallelism possible during query processing.
   At the other extreme, if segments are too small, the scheduling
   overhead of processing a larger number of segments per query can reduce
   performance, as the threads that process each segment compete for the fixed
   slots of the processing pool.

It is best to optimize the segment size at ingestion time, but this isn't always easy,
especially for stream ingestion, because the amount of data ingested can vary over time. In this case,
you can create segments with a sub-optimal size first and optimize them later using [compaction](../ingestion/compaction.md).

Consider the following factors when optimizing your segments:

- Number of rows per segment: it's generally recommended that each segment have around 5 million rows.
  This setting is usually _more_ important than the segment byte size below.
  Because Druid uses a single thread to process each segment,
  this setting directly controls how many rows each thread processes,
  which in turn determines how well query execution is parallelized.
- Segment byte size: it's recommended to aim for 300 MB to 700 MB per segment. If this target
  conflicts with the "number of rows per segment" target, prefer optimizing
  the number of rows per segment over the byte size.

> The above recommendations work in general, but the optimal settings can
> vary based on your workload. For example, if most of your queries
> are heavy and take a long time to process each row, you may want to make
> segments smaller so that query processing can be more parallelized.
> If you still see performance issues after optimizing segment size,
> you may need to experiment to find the optimal settings for your workload.

There are several ways to check whether compaction is necessary. One way
is to use the [System Schema](../querying/sql-metadata-tables.md#system-schema). The
system schema provides several tables describing the current system status, including the `segments` table.
The query below returns the average number of rows and average size of published segments, grouped by time chunk and version.

```sql
SELECT
  "start",
  "end",
  version,
  COUNT(*) AS num_segments,
  AVG("num_rows") AS avg_num_rows,
  SUM("num_rows") AS total_num_rows,
  AVG("size") AS avg_size,
  SUM("size") AS total_size
FROM
  sys.segments
WHERE
  datasource = 'your_dataSource' AND
  is_published = 1
GROUP BY 1, 2, 3
ORDER BY 1, 2, 3 DESC;
```
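
If the averages hide outliers, a per-segment variant of the same query can list individual segments
that fall far from the row-count target. The following is a sketch only; the 1 million and 10 million
bounds are illustrative placeholders rather than recommendations, so adjust them for your workload:

```sql
-- List published segments whose row counts fall outside an
-- illustrative band around the 5 million row target.
SELECT
  segment_id,
  "start",
  "end",
  "num_rows",
  "size"
FROM
  sys.segments
WHERE
  datasource = 'your_dataSource' AND
  is_published = 1 AND
  ("num_rows" < 1000000 OR "num_rows" > 10000000)
ORDER BY "num_rows";
```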

Note that these query results may include overshadowed segments.
In that case, you may want to restrict the results to segments with the max version per interval (pair of `start` and `end`), as sketched below.
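
One way to do this is sketched below, assuming your Druid version supports joins against subqueries on the
`sys` tables and `MAX` over the string-typed `version` column (segment versions sort lexicographically by
creation time, so the lexicographic max is the latest version). Recent Druid versions also expose an
`is_overshadowed` column in `sys.segments`, which reduces this filter to a simple `is_overshadowed = 0` predicate.

```sql
-- Keep only segments whose version is the max version for their interval.
SELECT
  s."start",
  s."end",
  s.version,
  COUNT(*) AS num_segments,
  AVG(s."num_rows") AS avg_num_rows,
  AVG(s."size") AS avg_size
FROM
  sys.segments s
  INNER JOIN (
    SELECT "start", "end", MAX(version) AS max_version
    FROM sys.segments
    WHERE datasource = 'your_dataSource' AND is_published = 1
    GROUP BY 1, 2
  ) m
    ON s."start" = m."start" AND s."end" = m."end" AND s.version = m.max_version
WHERE
  s.datasource = 'your_dataSource' AND
  s.is_published = 1
GROUP BY 1, 2, 3
ORDER BY 1, 2;
```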

Once you determine that your segments need compaction, consider the following two options:

- Turning on the [automatic compaction of Coordinators](../design/coordinator.md#automatic-compaction).
  The Coordinator periodically submits [compaction tasks](../ingestion/tasks.md#compact) to re-index small segments.
  To enable automatic compaction, you need to configure it for each dataSource via the Coordinator's dynamic configuration.
  For more information, see [Automatic compaction](../ingestion/automatic-compaction.md).
  A sketch for verifying its effect appears after this list.
- Running periodic Hadoop batch ingestion jobs and using a `dataSource`
  inputSpec to read from the segments generated by the Kafka indexing tasks. This might be helpful if you want to compact many segments in parallel.
  Details on how to do this can be found in the [Updating existing data](../ingestion/data-management.md#update) section
  of the data management page.
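
After compaction has run, one way to check its effect, assuming your Druid version exposes the
`last_compaction_state` column in `sys.segments` (populated for segments written by automatic compaction),
is a sketch like the following:

```sql
-- Compare published segments that have and have not been
-- written by automatic compaction for a given dataSource.
SELECT
  CASE WHEN last_compaction_state IS NULL THEN 'not compacted' ELSE 'compacted' END AS compaction_status,
  COUNT(*) AS num_segments,
  AVG("num_rows") AS avg_num_rows,
  AVG("size") AS avg_size
FROM
  sys.segments
WHERE
  datasource = 'your_dataSource' AND
  is_published = 1
GROUP BY 1;
```

Alternatively, re-run the averages query above and confirm that `avg_num_rows` and `avg_size` have moved toward the recommended targets.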

## Learn more

* For an overview of compaction and how to submit a manual compaction task, see [Compaction](../ingestion/compaction.md).
* To learn how to enable and configure automatic compaction, see [Automatic compaction](../ingestion/automatic-compaction.md).