Update file names

commit 4935fad402
parent 0552ddf989
operations/deep-storage-migration.md (new file)
@@ -0,0 +1,43 @@

# Deep storage migration

If you have been running an evaluation Druid cluster using local deep storage and wish to migrate to a more production-capable deep storage system such as S3 or HDFS, this document describes the necessary steps.

At a high level, migrating deep storage involves the following steps:

- Copying segments from local deep storage to the new deep storage
- Exporting Druid's segments table from the metadata store
- Rewriting the load specs in the exported segment data to reflect the new deep storage location
- Reimporting the edited segments into the metadata store

## Shut down cluster services

To ensure a clean migration, shut down the non-coordinator services so that metadata state does not change while you perform the migration.

When migrating from Derby, the coordinator processes will still need to be up initially, as they host the Derby database.

## Copy segments from old deep storage to new deep storage

Before migrating, you will need to copy your old segments to the new deep storage.

For information on the path structure to use in the new deep storage, please see [deep storage migration options](../operations/export-metadata.md#deep-storage-migration).
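As an illustration only, a minimal sketch of such a copy to S3 using the AWS CLI is shown below. The local path, bucket name, and base key are assumptions; substitute the values from your own configuration and the path structure linked above.

```bash
# Hypothetical locations; adjust to your own deep storage configuration.
LOCAL_DEEP_STORAGE=/opt/druid/var/druid/segments
S3_BUCKET=my-druid-bucket
S3_BASE_KEY=druid/segments

# Recursively copy all segment files from local deep storage to S3.
aws s3 cp --recursive "$LOCAL_DEEP_STORAGE" "s3://$S3_BUCKET/$S3_BASE_KEY"
```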
## Export segments with rewritten load specs

Druid provides an [Export Metadata Tool](../operations/export-metadata.md) for exporting metadata from Derby into CSV files, which can then be reimported.

By setting [deep storage migration options](../operations/export-metadata.md#deep-storage-migration), the `export-metadata` tool will export CSV files in which the segment load specs have been rewritten to load from your new deep storage location.

Run the `export-metadata` tool on your existing cluster, using the migration options appropriate for your new deep storage location, and save the CSV files it generates. After a successful export, you can shut down the coordinator.
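As a sketch only, an export that rewrites load specs for an S3 destination might look like the following. The Derby URI, output directory, bucket, and base key are placeholder assumptions, and the migration option names should be verified against the [Export Metadata Tool](../operations/export-metadata.md) docs for your Druid version.

```bash
cd ${DRUID_ROOT}
# Placeholder Derby URI, output directory, bucket, and base key; adjust to your cluster.
java -classpath "lib/*" \
  -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml \
  -Ddruid.extensions.directory="extensions" \
  org.apache.druid.cli.Main tools export-metadata \
  --connectURI "jdbc:derby://localhost:1527/var/druid/metadata.db;" \
  -o /tmp/csv \
  --s3bucket my-druid-bucket \
  --s3baseKey druid/segments
```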
### Import metadata

After generating the CSV exports with the modified segment data, you can reimport the contents of the Druid segments table from the generated CSVs.

Please refer to [import commands](../operations/export-metadata.md#importing-metadata) for examples. Only the `druid_segments` table needs to be imported.
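For example, with a PostgreSQL metadata store, a client-side import of just the segments table could look like the sketch below; the database name, user, and CSV path are assumptions, and the import commands linked above are authoritative.

```bash
# Hypothetical database, user, and file path; see the linked import commands for specifics.
psql -U druid -d druid \
  -c "\copy druid_segments FROM '/tmp/csv/druid_segments.csv' WITH CSV"
```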
### Restart cluster

After importing the segment table successfully, you can restart your cluster.
operations/metadata-migration.md (new file)
@@ -0,0 +1,68 @@

# Metadata migration

If you have been running an evaluation Druid cluster using the built-in Derby metadata storage and wish to migrate to a more production-capable metadata store such as MySQL or PostgreSQL, this document describes the necessary steps.

## Shut down cluster services

To ensure a clean migration, shut down the non-coordinator services so that metadata state does not change while you perform the migration.

When migrating from Derby, the coordinator processes will still need to be up initially, as they host the Derby database.

## Exporting metadata

Druid provides an [Export Metadata Tool](../operations/export-metadata.md) for exporting metadata from Derby into CSV files, which can then be imported into your new metadata store.

The tool also provides options for rewriting the deep storage locations of segments; this is useful for [deep storage migration](../operations/deep-storage-migration.md).

Run the `export-metadata` tool on your existing cluster, and save the CSV files it generates. After a successful export, you can shut down the coordinator.
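A minimal sketch of running the export from the Druid package root might look like this, assuming Derby is reachable at `localhost:1527` and CSVs are written to `/tmp/csv` (both assumptions); see the [Export Metadata Tool](../operations/export-metadata.md) docs for the authoritative invocation.

```bash
cd ${DRUID_ROOT}
# Assumed Derby location and output directory; adjust to your deployment.
java -classpath "lib/*" \
  -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml \
  -Ddruid.extensions.directory="extensions" \
  org.apache.druid.cli.Main tools export-metadata \
  --connectURI "jdbc:derby://localhost:1527/var/druid/metadata.db;" \
  -o /tmp/csv
```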
## Initializing the new metadata store

### Create database

Before importing the existing cluster metadata, you will need to set up the new metadata store.

The [MySQL extension](../development/extensions-core/mysql.md) and [PostgreSQL extension](../development/extensions-core/postgresql.md) docs have instructions for initial database setup.
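For instance, a MySQL setup along the lines of the extension docs might look like the following sketch; the database name, user, and password are placeholders you should replace.

```bash
# Placeholder database, user, and password; follow the MySQL extension docs for your setup.
mysql -u root -p -e "
  CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4;
  CREATE USER 'druid'@'localhost' IDENTIFIED BY 'diurd';
  GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'localhost';
"
```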
### Update configuration

Update your Druid runtime properties with the new metadata configuration.
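For example, a MySQL-backed configuration might include properties like the following in `common.runtime.properties`; the host, port, and credentials are placeholders, and the metadata storage extension must also appear in `druid.extensions.loadList`.

```properties
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
```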
### Create Druid tables

Druid provides a `metadata-init` tool for creating Druid's metadata tables. After initializing the Druid database, you can run the commands shown below from the root of the Druid package to initialize the tables.

In the example commands below:

- `lib` is the Druid lib directory
- `extensions` is the Druid extensions directory
- `base` corresponds to the value of `druid.metadata.storage.tables.base` in the configuration, `druid` by default
- The `--connectURI` parameter corresponds to the value of `druid.metadata.storage.connector.connectURI`
- The `--user` parameter corresponds to the value of `druid.metadata.storage.connector.user`
- The `--password` parameter corresponds to the value of `druid.metadata.storage.connector.password`

#### MySQL

```bash
cd ${DRUID_ROOT}
java -classpath "lib/*" \
  -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml \
  -Ddruid.extensions.directory="extensions" \
  -Ddruid.extensions.loadList=[\"mysql-metadata-storage\"] \
  -Ddruid.metadata.storage.type=mysql \
  org.apache.druid.cli.Main tools metadata-init \
  --connectURI="<mysql-uri>" --user <user> --password <pass> --base druid
```

#### PostgreSQL

```bash
cd ${DRUID_ROOT}
java -classpath "lib/*" \
  -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml \
  -Ddruid.extensions.directory="extensions" \
  -Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"] \
  -Ddruid.metadata.storage.type=postgresql \
  org.apache.druid.cli.Main tools metadata-init \
  --connectURI="<postgresql-uri>" --user <user> --password <pass> --base druid
```

### Import metadata

After initializing the tables, please refer to the [import commands](../operations/export-metadata.md#importing-metadata) for your target database.
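As one illustration for MySQL, each exported CSV can be loaded with `LOAD DATA LOCAL INFILE`; the file path, credentials, and database name below are assumptions, and the import commands linked above are authoritative.

```bash
# Hypothetical path, credentials, and database; repeat for each exported table
# (druid_segments, druid_rules, and so on).
mysql -u druid -p druid --local-infile=1 -e "
  LOAD DATA LOCAL INFILE '/tmp/csv/druid_segments.csv'
  INTO TABLE druid_segments
  FIELDS TERMINATED BY ',' ENCLOSED BY '\"';
"
```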
### Restart cluster

After importing the metadata successfully, you can restart your cluster.