diff --git a/docs/api-reference/automatic-compaction-api.md b/docs/api-reference/automatic-compaction-api.md
index d917cee42eb..a986585ed84 100644
--- a/docs/api-reference/automatic-compaction-api.md
+++ b/docs/api-reference/automatic-compaction-api.md
@@ -3,8 +3,12 @@ id: automatic-compaction-api
title: Automatic compaction API
sidebar_label: Automatic compaction
---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
-This topic describes the status and configuration API endpoints for [automatic compaction](../data-management/automatic-compaction.md) in Apache Druid. You can configure automatic compaction in the Druid web console or API.
+This topic describes the status and configuration API endpoints for [automatic compaction](../data-management/automatic-compaction.md) in Apache Druid. You can configure automatic compaction in the Druid web console or API.
In this topic, `http://ROUTER_IP:ROUTER_PORT` is a placeholder for your Router service address and port. Replace it with the information for your deployment. For example, use `http://localhost:8888` for quickstart deployments.
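For instance, on a quickstart deployment you might retrieve all automatic compaction configurations (an endpoint described later in this topic) with the placeholder replaced like so:

```shell
# Quickstart: the Router listens on localhost:8888, so it replaces http://ROUTER_IP:ROUTER_PORT
curl "http://localhost:8888/druid/coordinator/v1/config/compaction"
```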
@@ -37,36 +41,39 @@ The automatic compaction configuration requires only the `dataSource` property.
Note that this endpoint returns an HTTP `200 OK` message code even if the datasource name does not exist.
-#### URL
+#### URL
POST
/druid/coordinator/v1/config/compaction
#### Responses
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
-<!--200 SUCCESS-->
+<TabItem value="200" label="200 SUCCESS">
-*Successfully submitted auto compaction configuration*
-<!--END_DOCUSAURUS_CODE_TABS-->
+*Successfully submitted auto compaction configuration*
+
+</TabItem>
+</Tabs>
---
#### Sample request
-The following example creates an automatic compaction configuration for the datasource `wikipedia_hour`, which was ingested with `HOUR` segment granularity. This automatic compaction configuration performs compaction on `wikipedia_hour`, resulting in compacted segments that represent a day interval of data.
+The following example creates an automatic compaction configuration for the datasource `wikipedia_hour`, which was ingested with `HOUR` segment granularity. This automatic compaction configuration performs compaction on `wikipedia_hour`, resulting in compacted segments that represent a day interval of data.
-In this example:
+In this example:
* `wikipedia_hour` is a datasource with `HOUR` segment granularity.
-* `skipOffsetFromLatest` is set to `PT0S`, meaning that no data is skipped.
+* `skipOffsetFromLatest` is set to `PT0S`, meaning that no data is skipped.
* `partitionsSpec` is set to the default `dynamic`, allowing Druid to dynamically determine the optimal partitioning strategy.
* `type` is set to `index_parallel`, meaning that parallel indexing is used.
* `segmentGranularity` is set to `DAY`, meaning that each compacted segment is a day of data.
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction"\
@@ -86,7 +93,9 @@ curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction"\
}'
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
POST /druid/coordinator/v1/config/compaction HTTP/1.1
@@ -109,7 +118,8 @@ Content-Length: 281
}
```
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response
@@ -118,7 +128,7 @@ A successful request returns an HTTP `200 OK` message code and an empty response
### Remove automatic compaction configuration
-Removes the automatic compaction configuration for a datasource. This updates the compaction status of the datasource to "Not enabled."
+Removes the automatic compaction configuration for a datasource. This updates the compaction status of the datasource to "Not enabled."
#### URL
@@ -126,39 +136,47 @@ Removes the automatic compaction configuration for a datasource. This updates th
#### Responses
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
-<!--200 SUCCESS-->
+<TabItem value="200" label="200 SUCCESS">
-*Successfully deleted automatic compaction configuration*
-<!--404 NOT FOUND-->
+*Successfully deleted automatic compaction configuration*
-*Datasource does not have automatic compaction or invalid datasource name*
+</TabItem>
+<TabItem value="404" label="404 NOT FOUND">
-<!--END_DOCUSAURUS_CODE_TABS-->
+
+*Datasource does not have automatic compaction or invalid datasource name*
+
+</TabItem>
+</Tabs>
---
#### Sample request
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl --request DELETE "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour"
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
DELETE /druid/coordinator/v1/config/compaction/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response
@@ -166,9 +184,9 @@ A successful request returns an HTTP `200 OK` message code and an empty response
### Update capacity for compaction tasks
-Updates the capacity for compaction tasks. The minimum number of compaction tasks is 1 and the maximum is 2147483647.
+Updates the capacity for compaction tasks. The minimum number of compaction tasks is 1 and the maximum is 2147483647.
-Note that while the max compaction tasks can theoretically be set to 2147483647, the practical limit is determined by the available cluster capacity and is capped at 10% of the cluster's total capacity.
+Note that while the max compaction tasks can theoretically be set to 2147483647, the practical limit is determined by the available cluster capacity and is capped at 10% of the cluster's total capacity.
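As a rough sketch of how the two limits interact, assuming the effective capacity is the smaller of the ratio-based value and the maximum: with the `ratio=0.2` and `max=250000` values used in the sample request below, a cluster with 1,000 total task slots would allow compaction to use at most min(0.2 × 1000, 250000) = 200 task slots.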
#### URL
@@ -189,38 +207,46 @@ To limit the maximum number of compaction tasks, use the optional query paramete
#### Responses
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
-<!--200 SUCCESS-->
+<TabItem value="200" label="200 SUCCESS">
-*Successfully updated compaction configuration*
-<!--404 NOT FOUND-->
+*Successfully updated compaction configuration*
-*Invalid `max` value*
+</TabItem>
+<TabItem value="404" label="404 NOT FOUND">
-<!--END_DOCUSAURUS_CODE_TABS-->
+
+*Invalid `max` value*
+
+</TabItem>
+</Tabs>
---
#### Sample request
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/taskslots?ratio=0.2&max=250000"
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
POST /druid/coordinator/v1/config/compaction/taskslots?ratio=0.2&max=250000 HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response
@@ -240,34 +266,40 @@ You can use this endpoint to retrieve `compactionTaskSlotRatio` and `maxCompacti
#### Responses
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
-<!--200 SUCCESS-->
+<TabItem value="200" label="200 SUCCESS">
-*Successfully retrieved automatic compaction configurations*
-<!--END_DOCUSAURUS_CODE_TABS-->
+*Successfully retrieved automatic compaction configurations*
+
+</TabItem>
+</Tabs>
---
#### Sample request
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction"
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
GET /druid/coordinator/v1/config/compaction HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response
@@ -383,17 +415,21 @@ Retrieves the automatic compaction configuration for a datasource.
#### Responses
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
-<!--200 SUCCESS-->
+<TabItem value="200" label="200 SUCCESS">
-*Successfully retrieved configuration for datasource*
-<!--404 NOT FOUND-->
+*Successfully retrieved configuration for datasource*
+
+</TabItem>
+<TabItem value="404" label="404 NOT FOUND">
+
*Invalid datasource or datasource does not have automatic compaction enabled*
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
---
@@ -401,22 +437,26 @@ Retrieves the automatic compaction configuration for a datasource.
The following example retrieves the automatic compaction configuration for datasource `wikipedia_hour`.
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour"
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
GET /druid/coordinator/v1/config/compaction/wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response
@@ -476,7 +516,7 @@ Host: http://ROUTER_IP:ROUTER_PORT
Retrieves the history of the automatic compaction configuration for a datasource. Returns an empty list if the datasource does not exist or there is no compaction history for the datasource.
The response contains a list of objects with the following keys:
-* `globalConfig`: A JSON object containing automatic compaction configuration that applies to the entire cluster.
+* `globalConfig`: A JSON object containing automatic compaction configuration that applies to the entire cluster.
* `compactionConfig`: A JSON object containing the automatic compaction configuration for the datasource.
* `auditInfo`: A JSON object containing information about the change made, such as `author`, `comment` or `ip`.
* `auditTime`: The date and time when the change was made.
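For example, a quick way to see who changed the configuration and when is to pull `auditTime` and `auditInfo.author` out of the history response. This sketch assumes you have `jq` installed and uses the `wikipedia_hour` datasource from the samples below:

```shell
# List the timestamp and author of each recorded configuration change
curl -s "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour/history" \
  | jq '.[] | {auditTime, author: .auditInfo.author}'
```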
@@ -488,45 +528,53 @@ The response contains a list of objects with the following keys:
#### Query parameters
* `interval` (optional)
* Type: ISO-8601
- * Limits the results within a specified interval. Use `/` as the delimiter for the interval string.
+ * Limits the results within a specified interval. Use `/` as the delimiter for the interval string.
* `count` (optional)
* Type: Int
* Limits the number of results.
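For example, to return at most the five most recent configuration changes made during 2023, you can combine both parameters (the interval and count values here are only illustrative):

```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour/history?interval=2023-01-01/2024-01-01&count=5"
```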
#### Responses
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
-<!--200 SUCCESS-->
+<TabItem value="200" label="200 SUCCESS">
-*Successfully retrieved configuration history*
-<!--400 BAD REQUEST-->
+*Successfully retrieved configuration history*
-*Invalid `count` value*
+</TabItem>
+<TabItem value="400" label="400 BAD REQUEST">
-<!--END_DOCUSAURUS_CODE_TABS-->
+
+*Invalid `count` value*
+
+</TabItem>
+</Tabs>
---
#### Sample request
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/config/compaction/wikipedia_hour/history"
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
GET /druid/coordinator/v1/config/compaction/wikipedia_hour/history HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response
@@ -644,17 +692,21 @@ Returns the total size of segments awaiting compaction for a given datasource. R
#### Responses
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
-<!--200 SUCCESS-->
+<TabItem value="200" label="200 SUCCESS">
-*Successfully retrieved segment size awaiting compaction*
-<!--404 NOT FOUND-->
+*Successfully retrieved segment size awaiting compaction*
-*Unknown datasource name or datasource does not have automatic compaction enabled*
+</TabItem>
+<TabItem value="404" label="404 NOT FOUND">
-<!--END_DOCUSAURUS_CODE_TABS-->
+
+*Unknown datasource name or datasource does not have automatic compaction enabled*
+
+</TabItem>
+</Tabs>
---
@@ -662,22 +714,26 @@ Returns the total size of segments awaiting compaction for a given datasource. R
The following example retrieves the remaining segments to be compacted for datasource `wikipedia_hour`.
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/compaction/progress?dataSource=wikipedia_hour"
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
GET /druid/coordinator/v1/compaction/progress?dataSource=wikipedia_hour HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response
@@ -720,33 +776,39 @@ The `latestStatus` object has the following properties:
#### Responses
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
-<!--200 SUCCESS-->
+<TabItem value="200" label="200 SUCCESS">
-*Successfully retrieved `latestStatus` object*
-<!--END_DOCUSAURUS_CODE_TABS-->
+*Successfully retrieved `latestStatus` object*
+
+</TabItem>
+</Tabs>
---
#### Sample request
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/compaction/status"
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
GET /druid/coordinator/v1/compaction/status HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT
```
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response
diff --git a/docs/api-reference/supervisor-api.md b/docs/api-reference/supervisor-api.md
index b315971ec20..c5f6c076270 100644
--- a/docs/api-reference/supervisor-api.md
+++ b/docs/api-reference/supervisor-api.md
@@ -6,9 +6,7 @@ sidebar_label: Supervisors
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="200" label="200 SUCCESS">
-<!--200 SUCCESS-->
*Successfully reset offsets*
-<!--404 NOT FOUND-->
+</TabItem>
+<TabItem value="404" label="404 NOT FOUND">
+
*Invalid supervisor ID*
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
---
#### Reset Offsets Metadata
@@ -3181,9 +3183,10 @@ The following table defines the fields within the `partitions` object in the res
The following example shows how to reset offsets for a Kafka supervisor with the name `social_media`. Let's say the supervisor is reading
from a Kafka topic `ads_media_stream` and has the stored offsets: `{"0": 0, "1": 10, "2": 20, "3": 40}`.
-<!--DOCUSAURUS_CODE_TABS-->
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/resetOffsets"
@@ -3191,7 +3194,9 @@ curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/so
--data-raw '{"type":"kafka","partitions":{"type":"end","stream":"ads_media_stream","partitionOffsetMap":{"0":100, "2": 650}}}'
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
+
```HTTP
POST /druid/indexer/v1/supervisor/social_media/resetOffsets HTTP/1.1
@@ -3214,7 +3219,8 @@ Content-Type: application/json
The above operation will reset offsets only for partitions 0 and 2 to 100 and 650 respectively. After a successful reset,
when the supervisor's tasks restart, they will resume reading from `{"0": 100, "1": 10, "2": 650, "3": 40}`.
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
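If you want to confirm the merged offsets once the supervisor's tasks restart, one option (not part of this sample) is to query the supervisor status endpoint, which reports the offsets the tasks are reading from:

```shell
curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/status"
```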
#### Sample response
diff --git a/docs/development/extensions-core/kafka-supervisor-operations.md b/docs/development/extensions-core/kafka-supervisor-operations.md
index 8504ced595b..b76a80f8cb9 100644
--- a/docs/development/extensions-core/kafka-supervisor-operations.md
+++ b/docs/development/extensions-core/kafka-supervisor-operations.md
@@ -5,6 +5,9 @@ sidebar_label: "Apache Kafka operations"
description: "Reference topic for running and maintaining Apache Kafka supervisors"
---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+<Tabs>
+
+<TabItem value="curl" label="cURL">
-<!--cURL-->
```shell
curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/social_media/resetOffsets"
@@ -159,7 +163,8 @@ curl --request POST "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/supervisor/so
--data-raw '{"type":"kafka","partitions":{"type":"end","stream":"ads_media_foo|ads_media_bar","partitionOffsetMap":{"ads_media_foo:0": 3, "ads_media_bar:1": 12}}}'
```
-<!--HTTP-->
+</TabItem>
+<TabItem value="http" label="HTTP">
```HTTP
POST /druid/indexer/v1/supervisor/social_media/resetOffsets HTTP/1.1
@@ -178,10 +183,12 @@ Content-Type: application/json
}
}
```
+
The above operation will reset offsets for `ads_media_foo` partition 0 and `ads_media_bar` partition 1 to offsets 3 and 12 respectively. After a successful reset,
when the supervisor's tasks restart, they will resume reading from `{"ads_media_foo:0": 3, "ads_media_foo:1": 10, "ads_media_bar:0": 20, "ads_media_bar:1": 12}`.
-<!--END_DOCUSAURUS_CODE_TABS-->
+</TabItem>
+</Tabs>
#### Sample response