---
id: tutorial-compaction
title: "Tutorial: Compacting segments"
sidebar_label: "Compacting segments"
---

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
This tutorial demonstrates how to compact existing segments into fewer but larger segments.
Because there is some per-segment memory and processing overhead, it can sometimes be beneficial to reduce the total number of segments.
Please check [Segment size optimization](../operations/segment-optimization.md) for details.

For this tutorial, we'll assume you've already downloaded Apache Druid as described in
the [single-machine quickstart](index.html) and have it running on your local machine.

It will also be helpful to have finished [Tutorial: Loading a file](../tutorials/tutorial-batch.md) and [Tutorial: Querying data](../tutorials/tutorial-query.md).
## Load the initial data

For this tutorial, we'll be using the Wikipedia edits sample data, with an ingestion task spec that will create 1-3 segments per hour of input data.

The ingestion spec can be found at `quickstart/tutorial/compaction-init-index.json`. Let's submit that spec, which will create a datasource called `compaction-tutorial`:
```bash
bin/post-index-task --file quickstart/tutorial/compaction-init-index.json --url http://localhost:8081
```
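For reference, the relevant portion of that spec's `tuningConfig` looks roughly like the following (a sketch for illustration; the exact spec bundled with your Druid distribution may differ slightly):

```json
"tuningConfig" : {
  "type" : "index_parallel",
  "maxRowsPerSegment" : 1000,
  "maxRowsInMemory" : 25000
}
```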
> Please note that `maxRowsPerSegment` in this ingestion spec is set to 1000 in order to generate multiple segments per hour; this is _NOT_ recommended in production.
> The default value is 5000000, and you may need to adjust it to produce optimally sized segments.

After the ingestion completes, go to [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser to see the new datasource in the Druid Console.

![compaction-tutorial datasource](../assets/tutorial-compaction-01.png "compaction-tutorial datasource")

Click the `51 segments` link next to "Fully Available" for the `compaction-tutorial` datasource to view information about the datasource's segments:

There will be 51 segments for this datasource, with 1-3 segments per hour of input data:

![Original segments](../assets/tutorial-compaction-02.png "Original segments")

Running a COUNT(*) query on this datasource shows that there are 39,244 rows:
```bash
dsql> select count(*) from "compaction-tutorial";
┌────────┐
│ EXPR$0 │
├────────┤
│ 39244 │
└────────┘
Retrieved 1 row in 1.38s.
```
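If you prefer not to use `dsql`, the same query can be issued over Druid's SQL HTTP API (here going through the Router, assuming it is listening on port 8888 as in the quickstart):

```bash
curl -X POST http://localhost:8888/druid/v2/sql \
  -H 'Content-Type: application/json' \
  -d '{"query": "SELECT COUNT(*) FROM \"compaction-tutorial\""}'
```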
## Compact the data

Let's now compact these 51 small segments.

We have included a compaction task spec for this tutorial datasource at `quickstart/tutorial/compaction-keep-granularity.json`:
```json
{
"type": "compact",
"dataSource": "compaction-tutorial",
"interval": "2015-09-12/2015-09-13",
"tuningConfig" : {
"type" : "index_parallel",
"maxRowsPerSegment" : 5000000,
"maxRowsInMemory" : 25000
}
}
```
This will compact all segments for the interval `2015-09-12/2015-09-13` in the `compaction-tutorial` datasource.

The parameters in the `tuningConfig` control how many segments will be present in the compacted set of segments.

In this tutorial example, only one compacted segment will be created per hour, as each hour has far fewer rows than the 5000000 `maxRowsPerSegment` limit (the entire datasource contains only 39244 rows, roughly 1600 per hour on average).

Let's submit this task now:
```bash
bin/post-index-task --file quickstart/tutorial/compaction-keep-granularity.json --url http://localhost:8081
```
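`bin/post-index-task` is a convenience wrapper around the Overlord's task API; if you prefer, you can submit the spec to that API directly (a sketch, assuming the combined Coordinator-Overlord process from the quickstart is listening on port 8081):

```bash
curl -X POST -H 'Content-Type: application/json' \
  -d @quickstart/tutorial/compaction-keep-granularity.json \
  http://localhost:8081/druid/indexer/v1/task
```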
After the task finishes, refresh the [segments view](http://localhost:8888/unified-console.html#segments).

The original 51 segments will eventually be marked as "unused" by the Coordinator and removed, with the new compacted segments remaining.

By default, the Druid Coordinator will not mark segments as unused until the Coordinator process has been up for at least 15 minutes, so you may see the old segment set and the new compacted set at the same time in the Druid Console, with 75 total segments:

![Compacted segments intermediate state 1](../assets/tutorial-compaction-03.png "Compacted segments intermediate state 1")

![Compacted segments intermediate state 2](../assets/tutorial-compaction-04.png "Compacted segments intermediate state 2")

The new compacted segments have a more recent version than the original segments, so even when both sets of segments are shown in the Druid Console, queries will only read from the new compacted segments.
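You can also inspect the segments from SQL rather than the console: the `sys.segments` metadata table lists each segment's interval and version (a sketch; the available columns can vary by Druid version):

```bash
dsql> select "start", "end", version, num_rows from sys.segments where datasource = 'compaction-tutorial' order by "start";
```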
Let's try running a COUNT(*) on `compaction-tutorial` again, where the row count should still be 39,244:
```bash
dsql> select count(*) from "compaction-tutorial";
┌────────┐
│ EXPR$0 │
├────────┤
│ 39244 │
└────────┘
Retrieved 1 row in 1.30s.
```
After the Coordinator has been running for at least 15 minutes, the [segments view](http://localhost:8888/unified-console.html#segments) should show there are 24 segments, one per hour:

![Compacted segments hourly granularity 1](../assets/tutorial-compaction-05.png "Compacted segments hourly granularity 1")

![Compacted segments hourly granularity 2](../assets/tutorial-compaction-06.png "Compacted segments hourly granularity 2")
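You can confirm the count with a quick metadata query as well (again using the `sys.segments` table); once the old segments have been dropped, it should return 24:

```bash
dsql> select count(*) from sys.segments where datasource = 'compaction-tutorial';
```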
## Compact the data with new segment granularity

The compaction task can also produce compacted segments with a granularity different from the granularity of the input segments.

We have included a compaction task spec that will create DAY granularity segments at `quickstart/tutorial/compaction-day-granularity.json`:
```json
{
"type": "compact",
"dataSource": "compaction-tutorial",
"interval": "2015-09-12/2015-09-13",
"segmentGranularity": "DAY",
"tuningConfig" : {
"type" : "index_parallel",
"maxRowsPerSegment" : 5000000,
"maxRowsInMemory" : 25000,
"forceExtendableShardSpecs" : true
}
}
```
Note that `segmentGranularity` is set to `DAY` in this compaction task spec.

Let's submit this task now:
```bash
bin/post-index-task --file quickstart/tutorial/compaction-day-granularity.json --url http://localhost:8081
```
It will take a bit of time before the Coordinator marks the old input segments as unused, so you may see an intermediate state with 25 total segments. Eventually, there will only be one DAY granularity segment:

![Compacted segments day granularity 1](../assets/tutorial-compaction-07.png "Compacted segments day granularity 1")

![Compacted segments day granularity 2](../assets/tutorial-compaction-08.png "Compacted segments day granularity 2")
## Further reading

[Task documentation](../ingestion/tasks.md)

[Segment optimization](../operations/segment-optimization.md)