[DOCS] - Updating chunk_size values to fix size value notation. Chunksize41591 (#45552) (#45579)

* changes to chunk_size #41591

* update to chunk size to include ` `

* Update docs/plugins/repository-azure.asciidoc

Co-Authored-By: James Rodewig <james.rodewig@elastic.co>

* Update docs/reference/modules/snapshots.asciidoc

Co-Authored-By: James Rodewig <james.rodewig@elastic.co>

* Update docs/plugins/repository-azure.asciidoc

Co-Authored-By: James Rodewig <james.rodewig@elastic.co>

* Update docs/plugins/repository-s3.asciidoc

Co-Authored-By: James Rodewig <james.rodewig@elastic.co>

* edits to fix passive voice
Chris Dean 2019-08-14 15:59:36 -05:00 committed by GitHub
parent 285f011bbb
commit deab736aad
4 changed files with 28 additions and 28 deletions

docs/plugins/repository-azure.asciidoc

@@ -121,8 +121,8 @@ The Azure repository supports following settings:
 `chunk_size`::
 Big files can be broken down into chunks during snapshotting if needed.
-The chunk size can be specified in bytes or by using size value notation,
-i.e. `1g`, `10m`, `5k`. Defaults to `64m` (64m max)
+Specify the chunk size as a value and unit, for example:
+`10MB`, `5KB`, `500B`. Defaults to `64MB` (64MB max).
 `compress`::
@@ -154,7 +154,7 @@ PUT _snapshot/my_backup2
   "settings": {
     "container": "backup-container",
     "base_path": "backups",
-    "chunk_size": "32m",
+    "chunk_size": "32MB",
     "compress": true
   }
 }
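The `chunk_size` values above use Elasticsearch's byte-size notation, in which units are 1024-based. As a rough sketch of how a string such as `64MB` maps to a byte count (the helper below is illustrative only, not part of any Elasticsearch API):

```python
# Sketch: parse Elasticsearch-style byte-size notation such as "64MB" or "500B".
# Assumes 1024-based units, matching Elasticsearch's byte size units; the
# function name and behaviour are hypothetical, not the plugin's own code.
UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3, "tb": 1024**4}

def parse_byte_size(value: str) -> int:
    value = value.strip().lower()
    # Try longer suffixes first so "64mb" is not misread as "...b".
    for suffix in sorted(UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * UNITS[suffix]
    raise ValueError(f"unrecognized byte-size value: {value!r}")

print(parse_byte_size("64MB"))   # 67108864
print(parse_byte_size("500B"))   # 500
```

With this reading, the Azure default of `64MB` is 67,108,864 bytes.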

docs/plugins/repository-gcs.asciidoc

@@ -11,8 +11,8 @@ include::install_remove.asciidoc[]
 ==== Getting started
 The plugin uses the https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-clients/google-cloud-storage[Google Cloud Java Client for Storage]
-to connect to the Storage service. If you are using
-https://cloud.google.com/storage/[Google Cloud Storage] for the first time, you
+to connect to the Storage service. If you are using
+https://cloud.google.com/storage/[Google Cloud Storage] for the first time, you
 must connect to the https://console.cloud.google.com/[Google Cloud Platform Console]
 and create a new project. After your project is created, you must enable the
 Cloud Storage Service for your project.
@@ -20,10 +20,10 @@ Cloud Storage Service for your project.
 [[repository-gcs-creating-bucket]]
 ===== Creating a Bucket
-The Google Cloud Storage service uses the concept of a
-https://cloud.google.com/storage/docs/key-terms[bucket] as a container for all
-the data. Buckets are usually created using the
-https://console.cloud.google.com/[Google Cloud Platform Console]. The plugin
+The Google Cloud Storage service uses the concept of a
+https://cloud.google.com/storage/docs/key-terms[bucket] as a container for all
+the data. Buckets are usually created using the
+https://console.cloud.google.com/[Google Cloud Platform Console]. The plugin
 does not automatically create buckets.
 To create a new bucket:
@@ -43,12 +43,12 @@ https://cloud.google.com/storage/docs/quickstart-console#create_a_bucket[Google
 [[repository-gcs-service-authentication]]
 ===== Service Authentication
-The plugin must authenticate the requests it makes to the Google Cloud Storage
+The plugin must authenticate the requests it makes to the Google Cloud Storage
 service. It is common for Google client libraries to employ a strategy named https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application[application default credentials].
-However, that strategy is **not** supported for use with Elasticsearch. The
-plugin operates under the Elasticsearch process, which runs with the security
+However, that strategy is **not** supported for use with Elasticsearch. The
+plugin operates under the Elasticsearch process, which runs with the security
 manager enabled. The security manager obstructs the "automatic" credential discovery.
-Therefore, you must configure <<repository-gcs-using-service-account,service account>>
+Therefore, you must configure <<repository-gcs-using-service-account,service account>>
 credentials even if you are using an environment that does not normally require
 this configuration (such as Compute Engine, Kubernetes Engine or App Engine).
@@ -67,7 +67,7 @@ Here is a summary of the steps:
 3. Go to the https://console.cloud.google.com/permissions[Permission] tab.
 4. Select the https://console.cloud.google.com/permissions/serviceaccounts[Service Accounts] tab.
 5. Click *Create service account*.
-6. After the account is created, select it and download a JSON key file.
+6. After the account is created, select it and download a JSON key file.
 A JSON service account file looks like this:
@@ -92,13 +92,13 @@ To provide this file to the plugin, it must be stored in the {ref}/secure-settin
 add a `file` setting with the name `gcs.client.NAME.credentials_file` using the `add-file` subcommand.
 `NAME` is the name of the client configuration for the repository. The implicit client
 name is `default`, but a different client name can be specified in the
-repository settings with the `client` key.
+repository settings with the `client` key.
-NOTE: Passing the file path via the GOOGLE_APPLICATION_CREDENTIALS environment
+NOTE: Passing the file path via the GOOGLE_APPLICATION_CREDENTIALS environment
 variable is **not** supported.
-For example, if you added a `gcs.client.my_alternate_client.credentials_file`
-setting in the keystore, you can configure a repository to use those credentials
+For example, if you added a `gcs.client.my_alternate_client.credentials_file`
+setting in the keystore, you can configure a repository to use those credentials
 like this:
 [source,js]
@@ -116,11 +116,11 @@ PUT _snapshot/my_gcs_repository
 // TEST[skip:we don't have gcs setup while testing this]
 The `credentials_file` settings are {ref}/secure-settings.html#reloadable-secure-settings[reloadable].
-After you reload the settings, the internal `gcs` clients, which are used to
+After you reload the settings, the internal `gcs` clients, which are used to
 transfer the snapshot contents, utilize the latest settings from the keystore.
 NOTE: Snapshot or restore jobs that are in progress are not preempted by a *reload*
-of the client's `credentials_file` settings. They complete using the client as
+of the client's `credentials_file` settings. They complete using the client as
 it was built when the operation started.
 [[repository-gcs-client]]
@@ -232,8 +232,8 @@ The following settings are supported:
 `chunk_size`::
 Big files can be broken down into chunks during snapshotting if needed.
-The chunk size can be specified in bytes or by using size value notation,
-e.g. , `10m` or `5k`. Defaults to `100m`, which is the maximum permitted.
+Specify the chunk size as a value and unit, for example:
+`10MB` or `5KB`. Defaults to `100MB`, which is the maximum permitted.
 `compress`::
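The chunking that `chunk_size` controls is simple arithmetic: a large file is written as a series of fixed-size blobs, with the final blob holding the remainder. A minimal illustration (a hypothetical helper, not the plugin's implementation):

```python
# Sketch: split a payload into chunk_size pieces, the way a snapshot
# repository breaks up a large file. Illustrative only; the real plugin
# streams data rather than materializing chunks in memory.
def split_into_chunks(data: bytes, chunk_size: int) -> list[bytes]:
    return [data[i : i + chunk_size] for i in range(0, len(data), chunk_size)]

blob = b"x" * (250 * 1024)                    # a 250KB payload
chunks = split_into_chunks(blob, 100 * 1024)  # 100KB chunks
print([len(c) for c in chunks])               # [102400, 102400, 51200]
```

Two full 100KB chunks plus a 50KB remainder; joining the chunks reproduces the original payload.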

docs/plugins/repository-s3.asciidoc

@@ -244,9 +244,9 @@ The following settings are supported:
 `chunk_size`::
-Big files can be broken down into chunks during snapshotting if needed. The
-chunk size can be specified in bytes or by using size value notation, i.e.
-`1gb`, `10mb`, `5kb`. Defaults to `1gb`.
+Big files can be broken down into chunks during snapshotting if needed.
+Specify the chunk size as a value and unit, for example:
+`1GB`, `10MB`, `5KB`, `500B`. Defaults to `1GB`.
 `compress`::

docs/reference/modules/snapshots.asciidoc

@@ -218,9 +218,9 @@ The following settings are supported:
 [horizontal]
 `location`:: Location of the snapshots. Mandatory.
-`compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. Defaults to `false`.
-`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified in bytes or by
-using size value notation, i.e. 1g, 10m, 5k. Defaults to `null` (unlimited chunk size).
+`compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. Defaults to `true`.
+`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. Specify the chunk size as a value and
+unit, for example: `1GB`, `10MB`, `5KB`, `500B`. Defaults to `null` (unlimited chunk size).
 `max_restore_bytes_per_sec`:: Throttles per node restore rate. Defaults to `40mb` per second.
 `max_snapshot_bytes_per_sec`:: Throttles per node snapshot rate. Defaults to `40mb` per second.
 `readonly`:: Makes repository read-only. Defaults to `false`.
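The two throttle settings cap per-node transfer rates, so they put a floor on how long a snapshot or restore can take. A back-of-the-envelope sketch, assuming `40mb` means 40 × 1024² bytes per second (the function name is illustrative, not an Elasticsearch API):

```python
# Sketch: lower bound on transfer time under a per-node throttle.
# Assumes 1024-based units, as in Elasticsearch byte-size notation.
def min_transfer_seconds(total_bytes: int, rate_bytes_per_sec: int) -> float:
    return total_bytes / rate_bytes_per_sec

total = 10 * 1024**3   # a 10GB snapshot on one node
rate = 40 * 1024**2    # the default 40mb/sec throttle
print(min_transfer_seconds(total, rate))  # 256.0
```

So with the defaults, a 10GB snapshot takes at least about four and a quarter minutes per node, ignoring all other overhead.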