Doc updates for metadata cleanup and storage (#12190)

* doc updates for metadata storage/cleanup

* Add comments for disabling cleanup

* Apply suggestions from code review

* updated for https://github.com/apache/druid/pull/12201

* Apply suggestions from code review

Co-authored-by: Maytas Monsereenusorn <maytasm@apache.org>

* move retention period line earlier; more concise text

* fix typo

Co-authored-by: Maytas Monsereenusorn <maytasm@apache.org>
Victoria Lim 2022-01-27 11:40:54 -08:00 committed by GitHub
parent fac6a48a8f
commit 24716bfedc
2 changed files with 138 additions and 50 deletions


@@ -23,50 +23,70 @@ title: "Metadata storage"
-->
Apache Druid relies on an external dependency for metadata storage.
Druid uses the metadata store to house various metadata about the system, but not to store the actual data.
The metadata store retains all metadata essential for a Druid cluster to work.
The metadata store includes the following:
- Segments records
- Rule records
- Configuration records
- Task-related tables
- Audit records
Derby is the default metadata store for Druid; however, it is not suitable for production.
[MySQL](../development/extensions-core/mysql.md) and [PostgreSQL](../development/extensions-core/postgresql.md) are metadata stores better suited for production.
See [Metadata storage configuration](../configuration/index.md#metadata-storage) for the default configuration settings.
> We also recommend you set up a high availability environment because there is no way to restore lost metadata.
## Available metadata stores
Druid supports Derby, MySQL, and PostgreSQL for storing metadata.
### Derby
Configure metadata storage with Derby by setting the following properties in your Druid configuration.
```properties
druid.metadata.storage.type=derby
druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527//opt/var/druid_state/derby;create=true
```
### MySQL
See [mysql-metadata-storage extension documentation](../development/extensions-core/mysql.md).
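A minimal sketch of a MySQL metadata store configuration, assuming the `mysql-metadata-storage` extension and a MySQL connector library are available to the services; the host, database, and credentials below are placeholders:
```properties
druid.extensions.loadList=["mysql-metadata-storage"]
druid.metadata.storage.type=mysql
# Placeholder connection details; point these at your own MySQL instance
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
```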
### PostgreSQL
See the [postgresql-metadata-storage extension documentation](../development/extensions-core/postgresql.md).
## Adding custom DBCP properties
You can add custom properties to customize the database connection pool (DBCP) for connecting to the metadata store.
Define these properties with a `druid.metadata.storage.connector.dbcp.` prefix.
For example:
```properties
druid.metadata.storage.connector.dbcp.maxConnLifetimeMillis=1200000
druid.metadata.storage.connector.dbcp.defaultQueryTimeout=30000
```
Certain properties cannot be set through `druid.metadata.storage.connector.dbcp.` and must be set with the prefix `druid.metadata.storage.connector.`:
* `username`
* `password`
* `connectURI`
* `validationQuery`
* `testOnBorrow`
See [BasicDataSource Configuration](https://commons.apache.org/proper/commons-dbcp/configuration) for a full list of configurable properties.
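For example, a sketch that combines the two prefixes; the credentials and pool values are illustrative only:
```properties
# Credentials use the connector prefix, not the dbcp prefix
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
# Pool tuning uses the dbcp prefix
druid.metadata.storage.connector.dbcp.maxConnLifetimeMillis=1200000
druid.metadata.storage.connector.dbcp.defaultQueryTimeout=30000
```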
## Metadata storage tables
This section describes the various tables in metadata storage.
### Segments table
This is dictated by the `druid.metadata.storage.tables.segments` property.
@@ -81,7 +101,9 @@ available for requests). Value 0 means that the segment should not be loaded int
unloading segments from the cluster without actually removing their metadata (which allows for simpler rolling back if
that is ever an issue).
The `payload` column stores a JSON blob that has all of the metadata for the segment.
Some of the data in the `payload` column intentionally duplicates data from other columns in the segments table.
As an example, the `payload` column may take the following form:
```json
{
@@ -102,37 +124,42 @@ The `payload` column stores a JSON blob that has all of the metadata for the seg
}
```
Note that the format of this blob can and will change from time to time.
### Rule table
The rule table stores the various rules about where segments should
land. These rules are used by the [Coordinator](../design/coordinator.md)
when making segment (re-)allocation decisions about the cluster.
### Config table
The config table stores runtime configuration objects. We do not have
many of these yet and we are not sure if we will keep this mechanism going
forward, but it is the beginnings of a method of changing some configuration
parameters across the cluster at runtime.
### Task-related tables
Task-related tables are created and used by the [Overlord](../design/overlord.md) and [MiddleManager](../design/middlemanager.md) when managing tasks.
### Audit table
The audit table stores the audit history for configuration changes
such as rule changes done by [Coordinator](../design/coordinator.md) and other
config changes.
## Metadata storage access
Only the following processes access the metadata storage:
1. Indexing service processes (if any)
2. Realtime processes (if any)
3. Coordinator processes
Thus you need to give permissions (e.g., in AWS security groups) for only these machines to access the metadata storage.
## Learn more
See the following topics for more information:
* [Metadata storage configuration](../configuration/index.md#metadata-storage)
* [Automated cleanup for metadata records](../operations/clean-metadata-store.md)


@@ -23,51 +23,71 @@ description: "Defines a strategy to maintain Druid metadata store performance by
~ specific language governing permissions and limitations
~ under the License.
-->
Apache Druid relies on [metadata storage](../dependencies/metadata-storage.md) to track information on data storage, operations, and system configuration.
The metadata store includes the following:
- Segment records
- Audit records
- Supervisor records
- Rule records
- Compaction configuration records
- Datasource records created by supervisors
- Indexer task logs
When you delete some entities from Apache Druid, records related to the entity may remain in the metadata store.
If you have a high datasource churn rate, meaning you frequently create and delete many short-lived datasources or other related entities like compaction configuration or rules, the leftover records can fill your metadata store and cause performance issues.
To maintain metadata store performance, you can configure Apache Druid to automatically remove records associated with deleted entities from the metadata store.
By default, Druid automatically cleans up metadata older than 90 days.
This applies to all metadata entities in this topic except compaction configuration records and indexer task logs, for which cleanup is disabled by default.
You can configure the retention period for each metadata type, when available, through the record's `durationToRetain` property.
Certain records may require additional conditions to be satisfied before cleanup occurs.
See the [example](#example) for how you can customize the automated metadata cleanup for a specific use case.
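For example, a sketch of overriding the retention period for one metadata type, here audit records, with illustrative values:
```
# Illustrative override: clean up audit records older than 30 days, checking daily
druid.coordinator.kill.audit.on=true
druid.coordinator.kill.audit.period=P1D
druid.coordinator.kill.audit.durationToRetain=P30D
```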
## Automated cleanup strategies
There are several cases when you should consider automated cleanup of the metadata related to deleted datasources:
- If you know you have many high-churn datasources, for example, you have scripts that create and delete supervisors regularly.
- If you have issues with the hard disk for your metadata database filling up.
- If you run into performance issues with the metadata database, for example, API calls are very slow or fail to execute.
If you have compliance requirements to keep audit records and you enable automated cleanup for audit records, use alternative methods to preserve audit metadata, for example, by periodically exporting audit metadata records to external storage.
## Configure automated metadata cleanup
You can configure cleanup for each entity separately, as described in this section.
Define the properties in the `coordinator/runtime.properties` file.
The cleanup of one entity may depend on the cleanup of another entity as follows:
- You have to configure a [kill task for segment records](#kill-task) before you can configure automated cleanup for [rules](#rules-records) or [compaction configuration](#compaction-configuration-records).
- You have to schedule the metadata management tasks to run at the same or higher frequency as your most frequent cleanup job. For example, if your most frequent cleanup job is every hour, set the metadata store management period to one hour or less: `druid.coordinator.period.metadataStoreManagementPeriod=PT1H`.
For details on configuration properties, see [Metadata management](../configuration/index.md#metadata-management).
If you want to skip the details, check out the [example](#example) for configuring automated metadata cleanup.
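As a sketch of the scheduling constraint above, if the most frequent cleanup job runs hourly, the metadata store management period must be one hour or less; the values here are illustrative:
```
# Metadata management scheduler runs at least as often as the most frequent cleanup job
druid.coordinator.period.metadataStoreManagementPeriod=PT1H
# Most frequent cleanup job in this example: supervisor records, checked hourly
druid.coordinator.kill.supervisor.on=true
druid.coordinator.kill.supervisor.period=PT1H
```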
<a name="kill-task"></a>
### Segment records and segments in deep storage (kill task)
> The kill task is the only configuration in this topic that affects actual data in deep storage and not simply metadata or logs.
Segment records and segments in deep storage become eligible for deletion when both of the following conditions hold:
- When they meet the eligibility requirement of the kill task datasource configuration, according to `killDataSourceWhitelist` and `killAllDataSources` set in the Coordinator dynamic configuration. See [Dynamic configuration](../configuration/index.md#dynamic-configuration).
- When the `durationToRetain` time has passed since their creation.
Kill tasks use the following configuration:
- `druid.coordinator.kill.on`: When `true`, enables the Coordinator to submit a kill task for unused segments, which deletes them completely from metadata store and from deep storage.
Only applies to the specified datasources in the dynamic configuration parameter `killDataSourceWhitelist`.
If `killDataSourceWhitelist` is not set or empty, `killAllDataSources` defaults to `true` so that kill tasks can be submitted for all datasources.
- `druid.coordinator.kill.period`: Defines the frequency in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Durations) for the cleanup job to check for and delete eligible segments. Defaults to `P1D`. Must be greater than `druid.coordinator.period.indexingPeriod`.
- `druid.coordinator.kill.durationToRetain`: Defines the retention period in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Durations) after creation that segments become eligible for deletion.
- `druid.coordinator.kill.maxSegments`: Defines the maximum number of segments to delete per kill task.
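Putting these properties together, a sketch of a segment kill configuration with illustrative values:
```
# Enable kill tasks for unused segments
druid.coordinator.kill.on=true
# Check for eligible segments daily; must be greater than druid.coordinator.period.indexingPeriod
druid.coordinator.kill.period=P1D
# Unused segments become eligible for deletion 30 days after creation
druid.coordinator.kill.durationToRetain=P30D
# Delete at most 1000 segments per kill task
druid.coordinator.kill.maxSegments=1000
```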
### Audit records
All audit records become eligible for deletion when the `durationToRetain` time has passed since their creation.
Audit cleanup uses the following configuration:
@@ -76,6 +96,7 @@ Audit cleanup uses the following configuration:
- `druid.coordinator.kill.audit.durationToRetain`: Defines the retention period in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Durations) after creation that audit records become eligible for deletion.
### Supervisor records
Supervisor records become eligible for deletion when the supervisor is terminated and the `durationToRetain` time has passed since their creation.
Supervisor cleanup uses the following configuration:
@@ -84,6 +105,7 @@ Supervisor cleanup uses the following configuration:
- `druid.coordinator.kill.supervisor.durationToRetain`: Defines the retention period in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Durations) after creation that supervisor records become eligible for deletion.
### Rules records
Rule records become eligible for deletion when all segments for the datasource have been killed by the kill task and the `durationToRetain` time has passed since their creation. Automated cleanup for rules requires a [kill task](#kill-task).
Rule cleanup uses the following configuration:
@@ -92,15 +114,26 @@ Rule cleanup uses the following configuration:
- `druid.coordinator.kill.rule.durationToRetain`: Defines the retention period in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Durations) after creation that rules records become eligible for deletion.
### Compaction configuration records
Druid retains all compaction configuration records by default, which should be suitable for most use cases.
If you create and delete short-lived datasources with high frequency, and you set auto compaction configuration on those datasources, then consider turning on automated cleanup of compaction configuration records.
> With automated cleanup of compaction configuration records, if you create a compaction configuration for some datasource before the datasource exists, for example if initial ingestion is still ongoing, Druid may remove the compaction configuration.
To prevent the configuration from being prematurely removed, wait for the datasource to be created before applying the compaction configuration to the datasource.
Unlike other metadata records, compaction configuration records do not have a retention period set by `durationToRetain`. Druid deletes compaction configuration records at every cleanup cycle for inactive datasources, meaning datasources with no used or unused segments.
Compaction configuration records in the `druid_config` table become eligible for deletion after all segments for the datasource have been killed by the kill task. Automated cleanup for compaction configuration requires a [kill task](#kill-task).
Compaction configuration cleanup uses the following configuration:
- `druid.coordinator.kill.compaction.on`: When `true`, enables cleanup for compaction configuration records.
- `druid.coordinator.kill.compaction.period`: Defines the frequency in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Durations) for the cleanup job to check for and delete eligible compaction configuration records. Defaults to `P1D`.
> If you already have an extremely large compaction configuration, you may not be able to delete the compaction configuration due to size limits with the audit log. In this case, you can set `druid.audit.manager.maxPayloadSizeBytes` and `druid.audit.manager.skipNullField` to avoid the auditing issue. See [Audit logging](../configuration/index.md#audit-logging).
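A sketch of that workaround; the payload size limit shown is illustrative:
```
# Illustrative audit settings to avoid oversized audit payloads blocking the deletion
druid.audit.manager.maxPayloadSizeBytes=10485760
druid.audit.manager.skipNullField=true
```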
### Datasource records created by supervisors
Datasource records created by supervisors become eligible for deletion when the supervisor is terminated or does not exist in the `druid_supervisors` table and the `durationToRetain` time has passed since their creation.
Datasource cleanup uses the following configuration:
@@ -109,7 +142,9 @@ Datasource cleanup uses the following configuration:
- `druid.coordinator.kill.datasource.durationToRetain`: Defines the retention period in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Durations) after creation that datasource records become eligible for deletion.
### Indexer task logs
You can configure the Overlord to delete indexer task log metadata and the indexer task logs from local disk or from cloud storage.
Set these properties in the `overlord/runtime.properties` file.
Indexer task log cleanup on the Overlord uses the following configuration:
- `druid.indexer.logs.kill.enabled`: When `true`, enables cleanup of task logs.
@@ -119,7 +154,32 @@ Indexer task log cleanup on the Overlord uses the following configuration:
For more detail, see [Task logging](../configuration/index.md#task-logging).
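A sketch of the Overlord-side settings with illustrative values; unlike the Coordinator cleanup properties, the task log retention and scheduling values here are expressed in milliseconds rather than ISO 8601 durations:
```
# In overlord/runtime.properties: enable task log cleanup
druid.indexer.logs.kill.enabled=true
# Keep task logs for 7 days (value in milliseconds)
druid.indexer.logs.kill.durationToRetain=604800000
# First cleanup 5 minutes after Overlord startup, then every 6 hours (milliseconds)
druid.indexer.logs.kill.initialDelay=300000
druid.indexer.logs.kill.delay=21600000
```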
## Disable automated metadata cleanup
Druid automatically cleans up metadata records, excluding compaction configuration records and indexer task logs.
To disable automated metadata cleanup, set the following properties in the `coordinator/runtime.properties` file:
```
# Keep unused segments
druid.coordinator.kill.on=false
# Keep audit records
druid.coordinator.kill.audit.on=false
# Keep supervisor records
druid.coordinator.kill.supervisor.on=false
# Keep rules records
druid.coordinator.kill.rule.on=false
# Keep datasource records created by supervisors
druid.coordinator.kill.datasource.on=false
```
<a name="example"></a>
## Example configuration for automated metadata cleanup
Consider a scenario where you have scripts to create and delete hundreds of datasources and related entities a day. You do not want to fill your metadata store with leftover records. The datasources and related entities tend to persist for only one or two days. Therefore, you want to run a cleanup job that identifies and removes leftover records that are at least four days old. The exception is for audit logs, which you need to retain for 30 days:
```
@@ -167,4 +227,5 @@ druid.coordinator.kill.datasource.durationToRetain=P4D
## Learn more
See the following topics for more information:
- [Metadata management](../configuration/index.md#metadata-management) for metadata store configuration reference.
- [Metadata storage](../dependencies/metadata-storage.md) for an overview of the metadata storage database.