[[repository-gcs]]
=== Google Cloud Storage Repository Plugin
The GCS repository plugin adds support for using the https://cloud.google.com/storage/[Google Cloud Storage]
service as a repository for {ref}/modules-snapshots.html[Snapshot/Restore].
:plugin_name: repository-gcs
include::install_remove.asciidoc[]
[[repository-gcs-usage]]
==== Getting started
The plugin uses the https://cloud.google.com/storage/docs/json_api/[Google Cloud Storage JSON API] (v1)
to connect to the Storage service. If this is the first time you are using Google Cloud Storage, you first
need to connect to the https://console.cloud.google.com/[Google Cloud Platform Console] and create a new
project. Once your project is created, you must enable the Cloud Storage Service for your project.
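The service can also be enabled from the command line with the Cloud SDK; the project id below is a
placeholder and the exact service name can be checked with `gcloud services list --available`:
[source,sh]
----
# Enable the Cloud Storage API for the project (the service name may vary by SDK version).
gcloud services enable storage.googleapis.com --project=your-project-id
----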
[[repository-gcs-creating-bucket]]
===== Creating a Bucket
The Google Cloud Storage service uses the concept of a https://cloud.google.com/storage/docs/key-terms[bucket]
as a container for all data. Buckets are usually created using the
https://console.cloud.google.com/[Google Cloud Platform Console]. The plugin will not automatically
create buckets.
To create a new bucket:
1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]
2. Select your project
3. Go to the https://console.cloud.google.com/storage/browser[Storage Browser]
4. Click the "Create Bucket" button
5. Enter the name of the new bucket
6. Select a storage class
7. Select a location
8. Click the "Create" button
The bucket should now be created.
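Alternatively, if the Cloud SDK is installed, a bucket can be created from the command line; the
bucket name, storage class, and location below are placeholders:
[source,sh]
----
# Create a bucket with an explicit storage class (-c) and location (-l).
gsutil mb -c standard -l us-central1 gs://my_bucket
----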
[[repository-gcs-service-authentication]]
===== Service Authentication
The plugin supports two authentication modes:
* The built-in <<repository-gcs-using-compute-engine, Compute Engine authentication>>. This mode is
recommended if your Elasticsearch node is running on a Compute Engine virtual machine.
* Specifying <<repository-gcs-using-service-account, Service Account>> credentials.
[[repository-gcs-using-compute-engine]]
===== Using Compute Engine
When running on Compute Engine, the plugin uses Google's built-in authentication mechanism to
authenticate to the Storage service. Compute Engine virtual machines are usually associated with a
default service account. This service account can be found in the VM instance details in the
https://console.cloud.google.com/compute/[Compute Engine console].
This is the default authentication mode and requires no configuration.
NOTE: The Compute Engine VM must be allowed to use the Storage service. This can only be done at VM
creation time, by setting the "Storage" access scope to "Read/Write". You can check your
instance details in the "Cloud API access scopes" section.
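To double-check the scopes of an existing VM from the command line, one option (assuming the Cloud
SDK is installed; the instance name and zone are placeholders) is:
[source,sh]
----
# The output contains a "serviceAccounts" section listing the
# Cloud API access scopes granted to the instance.
gcloud compute instances describe my-es-node --zone=us-central1-a
----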
[[repository-gcs-using-service-account]]
===== Using a Service Account
If your Elasticsearch node is not running on Compute Engine, or if you don't want to use Google's
built-in authentication mechanism, you can authenticate to the Storage service using a
https://cloud.google.com/iam/docs/overview#service_account[Service Account] file.
To create a service account file:
1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]
2. Select your project
3. Go to the https://console.cloud.google.com/permissions[Permission] tab
4. Select the https://console.cloud.google.com/permissions/serviceaccounts[Service Accounts] tab
5. Click on "Create service account"
6. Once created, select the new service account and download a JSON key file
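The same steps can be scripted with the gcloud CLI; the account id, project id, and key file path
below are placeholders:
[source,sh]
----
# Create the service account...
gcloud iam service-accounts create es-snapshot-repository \
    --display-name="Elasticsearch snapshot repository" \
    --project=your-project-id
# ...then generate and download a JSON key file for it.
gcloud iam service-accounts keys create /path/to/service-account.json \
    --iam-account=es-snapshot-repository@your-project-id.iam.gserviceaccount.com
----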
A service account file looks like this:
[source,js]
----
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com",
  "client_id": "..."
}
----
// NOTCONSOLE
This file must be stored in the {ref}/secure-settings.html[elasticsearch keystore], under a setting name
of the form `gcs.client.NAME.credentials_file`, where `NAME` is the name of the client configuration.
The default client name is `default`, but a different client name can be specified in repository
settings using `client`.
For example, if specifying the credentials file in the keystore under
`gcs.client.my_alternate_client.credentials_file`, you can configure a repository to use these
credentials like this:
[source,js]
----
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "client": "my_alternate_client"
  }
}
----
// CONSOLE
// TEST[skip:we don't have gcs setup while testing this]
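The matching keystore entry is added on every node with the `elasticsearch-keystore` tool, for
example (the key file path is a placeholder):
[source,sh]
----
bin/elasticsearch-keystore add-file gcs.client.my_alternate_client.credentials_file /path/to/service-account.json
----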
The `credentials_file` settings are {ref}/secure-settings.html#reloadable-secure-settings[reloadable].
After you reload the settings, the internal `gcs` clients, used to transfer the
snapshot contents, will utilize the latest settings from the keystore.
NOTE: In-progress snapshot/restore jobs will not be preempted by a *reload*
of the client's `credentials_file` settings. They will complete using the client
as it was built when the operation started.
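A reload is triggered with the nodes reload secure settings API, for example:
[source,js]
----
POST _nodes/reload_secure_settings
----
// CONSOLE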
[[repository-gcs-client]]
==== Client Settings
The client used to connect to Google Cloud Storage has a number of settings available.
Client setting names are of the form `gcs.client.CLIENT_NAME.SETTING_NAME` and specified
inside `elasticsearch.yml`. The default client name looked up by a `gcs` repository is
called `default`, but can be customized with the repository setting `client`.
For example:
[source,js]
----
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "client": "my_alternate_client"
  }
}
----
// CONSOLE
// TEST[skip:we don't have gcs setup while testing this]
Some settings are sensitive and must be stored in the
{ref}/secure-settings.html[elasticsearch keystore]. This is the case for the service account file:
[source,sh]
----
bin/elasticsearch-keystore add-file gcs.client.default.credentials_file /path/to/service-account.json
----
The following are the available client settings. Those that must be stored in the keystore
are marked as `Secure`.
`credentials_file`::
The service account file that is used to authenticate to the Google Cloud Storage service. (Secure)
`endpoint`::
The Google Cloud Storage service endpoint to connect to. This will be automatically
determined by the Google Cloud Storage client but can be specified explicitly.
`connect_timeout`::
The timeout to establish a connection to the Google Cloud Storage service. The value should
specify the unit. For example, a value of `5s` specifies a 5 second timeout. The value of `-1`
corresponds to an infinite timeout. The default value is 20 seconds.
`read_timeout`::
The timeout to read data from an established connection. The value should
specify the unit. For example, a value of `5s` specifies a 5 second timeout. The value of `-1`
corresponds to an infinite timeout. The default value is 20 seconds.
`application_name`::
Name used by the client when it uses the Google Cloud Storage service. Setting
a custom name can be useful to identify your cluster when request
statistics are logged in the Google Cloud Platform. Defaults to `repository-gcs`.
`project_id`::
The Google Cloud project id. This will be automatically inferred from the credentials file but
can be specified explicitly. For example, it can be used to switch between projects when the
same credentials are usable for both the production and the development projects.
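Putting the non-secure settings together, a client named `my_alternate_client` could be configured
in `elasticsearch.yml` along these lines (all values are illustrative):
[source,yaml]
----
gcs.client.my_alternate_client.endpoint: "https://storage.googleapis.com"
gcs.client.my_alternate_client.connect_timeout: "10s"
gcs.client.my_alternate_client.read_timeout: "30s"
gcs.client.my_alternate_client.application_name: "my-cluster-snapshots"
gcs.client.my_alternate_client.project_id: "your-project-id"
----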
[[repository-gcs-repository]]
==== Repository Settings
The `gcs` repository type supports a number of settings to customize how data
is stored in Google Cloud Storage.
These can be specified when creating the repository. For example:
[source,js]
----
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_other_bucket",
    "base_path": "dev"
  }
}
----
// CONSOLE
// TEST[skip:we don't have gcs set up while testing this]
The following settings are supported:
`bucket`::
The name of the bucket to be used for snapshots. (Mandatory)
`client`::
The name of the client to use to connect to Google Cloud Storage.
Defaults to `default`.
`base_path`::
Specifies the path within the bucket to the repository data. Defaults to
the root of the bucket.
`chunk_size`::
Big files can be broken down into chunks during snapshotting if needed.
The chunk size can be specified in bytes or by using size value notation,
e.g. `1g`, `10m`, `5k`. Defaults to `100m`.
`compress`::
When set to `true` metadata files are stored in compressed format. This
setting doesn't affect index files that are already compressed by default.
Defaults to `false`.
`application_name`::
deprecated[7.0.0, This setting is now defined in the <<repository-gcs-client, client settings>>]
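As an illustration, a repository combining several of these settings might be created like this
(the values are placeholders):
[source,js]
----
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "client": "my_alternate_client",
    "base_path": "snapshots/dev",
    "chunk_size": "250m",
    "compress": true
  }
}
----
// CONSOLE
// TEST[skip:we don't have gcs set up while testing this]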
[[repository-gcs-bucket-permission]]
===== Recommended Bucket Permission
The service account used to access the bucket must have "Writer" access to the bucket:
1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]
2. Select your project
3. Go to the https://console.cloud.google.com/storage/browser[Storage Browser]
4. Select the bucket and "Edit bucket permission"
5. Configure the service account as a "User" with "Writer" access
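If you prefer the command line, the same kind of grant can be sketched with `gsutil`; the bucket
name and service account email are placeholders, and `roles/storage.objectAdmin` is one role that
covers the read/write access the plugin needs (your organization may prefer a narrower one):
[source,sh]
----
# Grant the repository's service account read/write access on the bucket.
gsutil iam ch \
    serviceAccount:service-account-for-your-repository@your-project-id.iam.gserviceaccount.com:roles/storage.objectAdmin \
    gs://my_bucket
----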