Add Installation and Configuration for Benchmark section (#4124)

* Add Installation and Configuration for Benchmark section
* Fix dead links
* Fix spelling. Technical errors. Formatting
* More link fixes. More formatting
* Fix typo
* Add technical feedback
* Add one last note
* Update _benchmark/installing-benchmark.md
* Update _benchmark/index.md
* Update _benchmark/installing-benchmark.md
* Update _benchmark/installing-benchmark.md
* Add more technical feedback
* Add additional feedback
* Another pass
* Another typo
* Apply suggestions from code review
* Remove teams section
* Apply suggestions from code review
* Add editorial feedback
* Apply suggestions from code review
* Apply suggestions from code review
* Add installation benchmark

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Co-authored-by: Heather Halter <HDHALTER@AMAZON.COM>
Co-authored-by: Nathan Bower <nbower@amazon.com>
parent 3ff62a8a6c
commit eb88baf999
@ -0,0 +1,198 @@
---
layout: default
title: Configuring OpenSearch Benchmark
nav_order: 7
has_children: false
---

# Configuring OpenSearch Benchmark

OpenSearch Benchmark configuration data is stored in `~/.benchmark/benchmark.ini`, which is automatically created the first time OpenSearch Benchmark runs.

The file is separated into the following sections, which you can customize based on the needs of your cluster.
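
As a point of reference, the overall shape of `benchmark.ini` resembles the following sketch. The section names match those described below, but the paths are illustrative, and the file that OpenSearch Benchmark generates on your machine may contain additional parameters or different values.

```
[meta]
# config.version is managed by OpenSearch Benchmark and should not be changed

[system]
env.name = local

[node]
root.dir = $HOME/.benchmark/benchmarks
src.root.dir = $HOME/.benchmark/benchmarks/src

[source]
remote.repo.url = https://github.com/opensearch-project/OpenSearch.git
opensearch.src.subdir = OpenSearch

[benchmarks]
local.dataset.cache = $HOME/.benchmark/benchmarks/data

[results_publishing]
datastore.type = in-memory

[workloads]
default.url = https://github.com/opensearch-project/opensearch-benchmark-workloads

[defaults]
preserve_benchmark_candidate = false

[distributions]
release.cache = true
```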

## meta

This section contains meta information about the configuration file.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `config.version` | Integer | The version of the configuration file format. This property is managed by OpenSearch Benchmark and should not be changed. |

## system

This section contains global information for the current benchmark environment. This information should be identical on all machines on which OpenSearch Benchmark is installed.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `env.name` | String | The name of the benchmark environment used as metadata in metrics documents when an OpenSearch metrics store is configured. Only alphanumeric characters are allowed. Default is `local`. |
| `available.cores` | Integer | Determines the number of available CPU cores. OpenSearch Benchmark aims to create one asyncio event loop per core and distributes clients evenly across the event loops. Defaults to the number of logical CPU cores for your cluster. |
| `async.debug` | Boolean | Enables debug mode on OpenSearch Benchmark's asyncio event loop. Default is `false`. |
| `passenv` | String | A comma-separated list of environment variable names that should be passed to OpenSearch for processing. |
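
For example, if several hosts report metrics to a shared OpenSearch metrics store, you can give each benchmark environment its own name so that the results remain distinguishable. The following is a minimal sketch in which `perflab1` is an illustrative value (only alphanumeric characters are allowed):

```
[system]
env.name = perflab1
```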

## node

This section contains node-specific information that can be customized according to the needs of your cluster.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `root.dir` | String | The directory that stores all OpenSearch Benchmark data. OpenSearch Benchmark assumes control over this directory and all of its subdirectories. |
| `src.root.dir` | String | The directory into which the OpenSearch source code and any OpenSearch plugins are checked out. Only relevant for benchmarks from [sources](#source). |

## source

This section contains more details about the OpenSearch source tree.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `remote.repo.url` | URL | The URL from which to check out OpenSearch. Default is `https://github.com/opensearch-project/OpenSearch.git`. |
| `opensearch.src.subdir` | String | The local path, relative to `src.root.dir`, of the OpenSearch source tree. Default is `OpenSearch`. |
| `cache` | Boolean | Enables OpenSearch Benchmark's internal source artifact cache (`opensearch*.tar.gz` and any plugin zip files). Artifacts are cached based on their Git revision. Default is `true`. |
| `cache.days` | Integer | The number of days that an artifact should be kept in the source artifact cache. Default is `7`. |
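
For example, if you benchmark from sources and want cached source artifacts to be kept longer than the default of 7 days, you can raise `cache.days`. The value `14` in the following sketch is only an illustration:

```
[source]
cache = true
cache.days = 14
```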

## benchmarks

This section contains the settings that can be customized in the OpenSearch Benchmark data directory.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `local.dataset.cache` | String | The directory in which benchmark datasets are stored. Depending on the benchmarks that are run, this directory may contain hundreds of GB of data. Default path is `$HOME/.benchmark/benchmarks/data`. |
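
For example, to keep large benchmark datasets on a separate volume rather than in your home directory, you can point the dataset cache elsewhere. The path in the following sketch is a placeholder:

```
[benchmarks]
local.dataset.cache = /mnt/benchmark-data
```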

## results_publishing

This section defines how benchmark metrics are stored.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `datastore.type` | String | If set to `in-memory`, all metrics are kept in memory while running the benchmark. If set to `opensearch`, all metrics are instead written to a persistent metrics store, and the data is made available for further analysis. Default is `in-memory`. |
| `sample.queue.size` | Integer | The number of metrics samples that can be stored in OpenSearch Benchmark's in-memory queue. Default is `2^20`. |
| `metrics.request.downsample.factor` | Integer | Determines how many service time and latency samples are saved in the metrics store. By default, all values are saved. If you want to, for example, keep only every 100th sample, specify `100`. This is useful to avoid overwhelming the metrics store in benchmarks with many clients. Default is `1`. |
| `output.processingtime` | Boolean | If set to `true`, OpenSearch Benchmark shows the additional metric processing time in the command line report. Default is `false`. |

### `datastore.type` parameters

When `datastore.type` is set to `opensearch`, the following reporting settings can be customized.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `datastore.host` | IP address | The hostname of the metrics store, for example, `124.340.200.22`. |
| `datastore.port` | Port | The port number of the metrics store, for example, `9200`. |
| `datastore.secure` | Boolean | If set to `false`, OpenSearch assumes an HTTP connection. If set to `true`, it assumes an HTTPS connection. |
| `datastore.ssl.verification_mode` | String | When set to the default `full`, the metrics store's SSL certificate is checked. To disable certificate verification, set this value to `none`. |
| `datastore.ssl.certificate_authorities` | String | Determines the local file system path to the certificate authority's signing certificate. |
| `datastore.user` | Username | Sets the username for the metrics store. |
| `datastore.password` | String | Sets the password for the metrics store. Alternatively, this password can be configured using the `OSB_DATASTORE_PASSWORD` environment variable, which avoids storing credentials in a plain text file. The environment variable takes precedence over the config file if both define a password. |
| `datastore.probe.cluster_version` | Boolean | Enables automatic detection of the metrics store's version. Default is `true`. |
| `datastore.number_of_shards` | Integer | The number of primary shards that the `opensearch-*` indexes should have. Any updates to this setting after initial index creation will only be applied to new `opensearch-*` indexes. Default is the [OpenSearch static index value]({{site.url}}{{site.baseurl}}/api-reference/index-apis/create-index/#static-index-settings). |
| `datastore.number_of_replicas` | Integer | The number of replicas each primary shard in the datastore contains. Any updates to this setting after initial index creation will only be applied to new `opensearch-*` indexes. Default is the [OpenSearch static index value]({{site.url}}{{site.baseurl}}/api-reference/index-apis/create-index/#static-index-settings). |

### Examples

You can use the following examples to set reporting values in your cluster.

This example defines an unprotected metrics store in the local network:

```
[results_publishing]
datastore.type = opensearch
datastore.host = 192.168.10.17
datastore.port = 9200
datastore.secure = false
datastore.user =
datastore.password =
```

This example defines a secure connection to a metrics store in the local network with a self-signed certificate:

```
[results_publishing]
datastore.type = opensearch
datastore.host = 192.168.10.22
datastore.port = 9200
datastore.secure = true
datastore.ssl.verification_mode = none
datastore.user = user-name
datastore.password = the-password-to-your-cluster
```
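
If you would rather not store the metrics store password in plain text, you can leave `datastore.password` empty in `benchmark.ini` and instead export the `OSB_DATASTORE_PASSWORD` environment variable in the shell that runs OpenSearch Benchmark. The value shown is a placeholder:

```
export OSB_DATASTORE_PASSWORD=the-password-to-your-cluster
```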

## workloads

This section defines how workloads are retrieved. All keys are read by OpenSearch Benchmark using the syntax `<<workload-repository-name>>.url`, and you can select a repository using the OpenSearch Benchmark CLI `--workload-repository=workload-repository-name` option. By default, OpenSearch Benchmark uses the workload repository specified in `default.url`, `https://github.com/opensearch-project/opensearch-benchmark-workloads`.
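
For example, to make a second workload repository available alongside the default one, you can add another `<<workload-repository-name>>.url` key. The repository name `custom` and its URL in the following sketch are placeholders:

```
[workloads]
default.url = https://github.com/opensearch-project/opensearch-benchmark-workloads
custom.url = https://github.com/<your-org>/<your-workloads-repo>
```

You can then select that repository by passing `--workload-repository=custom` on the command line.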

## defaults

This section defines the default values of certain OpenSearch Benchmark CLI parameters.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `preserve_benchmark_candidate` | Boolean | Determines whether OpenSearch installations are preserved or wiped by default after a benchmark. To preserve an installation for a single benchmark, use the command line flag `--preserve-install`. Default is `false`. |

## distributions

This section defines how OpenSearch versions are distributed.

| Parameter | Type | Description |
| :---- | :---- | :---- |
| `release.cache` | Boolean | Determines whether newly released OpenSearch versions should be cached locally. |
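
As a sketch, a host used for repeated investigations might keep benchmarked installations and cache downloaded releases by default. Both values in the following example are illustrative; for a single run, the `--preserve-install` flag achieves the same result as `preserve_benchmark_candidate`:

```
[defaults]
preserve_benchmark_candidate = true

[distributions]
release.cache = true
```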

## Proxy configurations

OpenSearch Benchmark automatically downloads all the necessary data for you, including:

- OpenSearch distributions, when you specify `--distribution-version=<OPENSEARCH-VERSION>`.
- OpenSearch source code, when you specify a Git revision number, for example, `--revision=1e04b2w`.
- Any metadata tracked in the [OpenSearch GitHub repository](https://github.com/opensearch-project/OpenSearch).

As of OpenSearch Benchmark 0.5.0, only `http_proxy` is supported.
{: .warning}

You can use an `http_proxy` to connect OpenSearch Benchmark to a specific proxy and to connect the proxy to a benchmark workload. To add the proxy:

1. Add your proxy URL to your shell profile:

   ```
   export http_proxy=http://proxy.proxy.org:4444/
   ```

2. Source your shell profile and verify that the proxy URL is set correctly:

   ```
   source ~/.bash_profile ; echo $http_proxy
   ```

3. Configure Git to connect to your proxy by using the following command. For more information, see the [Git documentation](https://git-scm.com/docs/git-config).

   ```
   git config --global http_proxy $http_proxy
   ```

4. Use `git clone` to clone the workloads repository by running the following command. If the proxy is configured correctly, the clone succeeds.

   ```
   git clone http://github.com/opensearch-project/opensearch-benchmark-workloads.git
   ```

5. Lastly, verify that OpenSearch Benchmark can connect to the proxy server by checking the `~/.benchmark/logs/benchmark.log` log. When OpenSearch Benchmark starts, you should see the following at the top of the log:

   ```
   Connecting via proxy URL [http://proxy.proxy.org:4444/] to the Internet (picked up from the environment variable [http_proxy]).
   ```

## Logging

Logs from OpenSearch Benchmark can be configured in the `~/.benchmark/logging.json` file. For more information about how to format the log file, see the following Python documentation:

- For general tips and tricks, use the [Python Logging Cookbook](https://docs.python.org/3/howto/logging-cookbook.html).
- For the file format, see the Python [logging configuration schema](https://docs.python.org/3/library/logging.config.html#logging-config-dictschema).
- For instructions on how to customize where the log output is written, see the [logging handlers documentation](https://docs.python.org/3/library/logging.handlers.html).

By default, OpenSearch Benchmark logs all output to `~/.benchmark/logs/benchmark.log`.
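
For orientation only, a file that follows the Python `dictConfig` schema has the following general shape. The handler, formatter, and logger names that OpenSearch Benchmark generates in `logging.json` will differ, so treat this as a reading aid rather than a drop-in replacement:

```
{
  "version": 1,
  "formatters": {
    "normal": {
      "format": "%(asctime)s %(name)s %(levelname)s %(message)s"
    }
  },
  "handlers": {
    "benchmark_log_handler": {
      "class": "logging.handlers.WatchedFileHandler",
      "filename": "benchmark.log",
      "formatter": "normal"
    }
  },
  "root": {
    "handlers": ["benchmark_log_handler"],
    "level": "INFO"
  }
}
```
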
@ -0,0 +1,18 @@

---
layout: default
title: OpenSearch Benchmark
nav_order: 1
has_children: false
has_toc: false
---

# OpenSearch Benchmark

OpenSearch Benchmark is a macrobenchmark utility provided by the [OpenSearch Project](https://github.com/opensearch-project). You can use OpenSearch Benchmark to gather performance metrics from an OpenSearch cluster for a variety of purposes, including:

- Tracking the overall performance of an OpenSearch cluster.
- Informing decisions about when to upgrade your cluster to a new version.
- Determining how changes to your workflow, such as modifying mappings or queries, might impact your cluster.

OpenSearch Benchmark can be installed directly on a compatible host running Linux or macOS. You can also run OpenSearch Benchmark in a Docker container. See [Installing OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/installing-benchmark/) for more information.
@ -0,0 +1,155 @@

---
layout: default
title: Installing OpenSearch Benchmark
nav_order: 5
has_children: false
---

# Installing OpenSearch Benchmark

You can install OpenSearch Benchmark directly on a host running Linux or macOS, or you can run OpenSearch Benchmark in a Docker container on any compatible host. This page provides general considerations for your OpenSearch Benchmark host as well as instructions for installing OpenSearch Benchmark.

## Choosing appropriate hardware

OpenSearch Benchmark can be used to provision OpenSearch nodes for testing. If you intend to use OpenSearch Benchmark to provision nodes in your environment, then install OpenSearch Benchmark directly on each host in the cluster. Additionally, you must configure each host in the cluster for OpenSearch. See [Installing OpenSearch]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/index/) for guidance on important host settings.

Remember that OpenSearch Benchmark cannot be used to provision OpenSearch nodes when you run OpenSearch Benchmark in a Docker container. If you want to use OpenSearch Benchmark to provision nodes, or if you want to distribute the benchmark workload with the OpenSearch Benchmark daemon, then you must install OpenSearch Benchmark directly on each host using Python and pip.
{: .important}

When you select a host, you should also think about which workloads you want to run. To see a list of default benchmark workloads, visit the [opensearch-benchmark-workloads](https://github.com/opensearch-project/opensearch-benchmark-workloads) repository on GitHub. As a general rule, make sure that the OpenSearch Benchmark host has enough free storage space to store the compressed data and the fully decompressed data corpus once OpenSearch Benchmark is installed.

If you want to benchmark with a default workload, then use the following table to determine the approximate minimum amount of free space required by adding the compressed size to the uncompressed size. For example, running the `nyc_taxis` workload requires approximately 4.5 GB + 74.3 GB, or about 79 GB, of free space.

| Workload name | Document count | Compressed size | Uncompressed size |
| :----: | :----: | :----: | :----: |
| eventdata | 20,000,000 | 756.0 MB | 15.3 GB |
| geonames | 11,396,503 | 252.9 MB | 3.3 GB |
| geopoint | 60,844,404 | 482.1 MB | 2.3 GB |
| geopointshape | 60,844,404 | 470.8 MB | 2.6 GB |
| geoshape | 60,523,283 | 13.4 GB | 45.4 GB |
| http_logs | 247,249,096 | 1.2 GB | 31.1 GB |
| nested | 11,203,029 | 663.3 MB | 3.4 GB |
| noaa | 33,659,481 | 949.4 MB | 9.0 GB |
| nyc_taxis | 165,346,692 | 4.5 GB | 74.3 GB |
| percolator | 2,000,000 | 121.1 kB | 104.9 MB |
| pmc | 574,199 | 5.5 GB | 21.7 GB |
| so | 36,062,278 | 8.9 GB | 33.1 GB |

Your OpenSearch Benchmark host should use solid-state drives (SSDs) for storage because they perform read and write operations significantly faster than traditional spinning-disk hard drives. Spinning-disk hard drives can introduce performance bottlenecks, which can make benchmark results unreliable and inconsistent.
{: .tip}

## Installing on Linux and macOS

If you want to run OpenSearch Benchmark in a Docker container, see [Installing with Docker](#installing-with-docker). The OpenSearch Benchmark Docker image includes all of the required software, so there are no additional steps required.
{: .important}

To install OpenSearch Benchmark directly on a UNIX host, such as Linux or macOS, make sure you have **Python 3.8 or later** installed.

If you need help installing Python, refer to the official [Python Setup and Usage](https://docs.python.org/3/using/index.html) documentation.

### Checking software dependencies

Before you begin installing OpenSearch Benchmark, check the following software dependencies.

Use [pyenv](https://github.com/pyenv/pyenv) to manage multiple versions of Python on your host. This is especially useful if your "system" version of Python is earlier than version 3.8.
{: .tip}
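
For example, with pyenv installed, you can build a newer interpreter and make it the default for your user before installing OpenSearch Benchmark. The version number in this sketch is only an example:

```bash
# Assumes pyenv and its shims are already set up in your shell
pyenv install 3.11.4
pyenv global 3.11.4
python3 --version
```
{% include copy.html %}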

- Check that Python 3.8 or later is installed:

  ```bash
  python3 --version
  ```
  {% include copy.html %}

- Check that `pip` is installed and functional:

  ```bash
  pip --version
  ```
  {% include copy.html %}

- _Optional_: Check that your installed version of `git` is **Git 1.9 or later** using the following command. `git` is not required for OpenSearch Benchmark installation, but it is required in order to fetch benchmark workload resources from a repository when you want to perform tests. See the official Git [documentation](https://git-scm.com/doc) for help installing Git.

  ```bash
  git --version
  ```
  {% include copy.html %}

### Completing the installation

After the required software is installed, you can install OpenSearch Benchmark using the following command:

```bash
pip install opensearch-benchmark
```
{% include copy.html %}

After the installation completes, you can use the following command to display help information:

```bash
opensearch-benchmark -h
```
{% include copy.html %}

Now that OpenSearch Benchmark is installed on your host, you can learn about [Configuring OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/configuring-benchmark/).

## Installing with Docker

You can find the official Docker images for OpenSearch Benchmark on [Docker Hub](https://hub.docker.com/r/opensearchproject/opensearch-benchmark) or on the [Amazon ECR Public Gallery](https://gallery.ecr.aws/opensearchproject/opensearch-benchmark).

### Docker limitations

Some OpenSearch Benchmark functionality is unavailable when you run OpenSearch Benchmark in a Docker container. Specifically, the following restrictions apply:

- OpenSearch Benchmark cannot distribute load from multiple hosts, such as load worker coordinator hosts.
- OpenSearch Benchmark cannot provision OpenSearch nodes and can only run tests on previously existing clusters. You can only invoke OpenSearch Benchmark commands using the `benchmark-only` pipeline.

### Pulling the Docker images

To pull the image from Docker Hub, run the following command:

```bash
docker pull opensearchproject/opensearch-benchmark:latest
```
{% include copy.html %}

To pull the image from Amazon Elastic Container Registry (Amazon ECR):

```bash
docker pull public.ecr.aws/opensearchproject/opensearch-benchmark:latest
```
{% include copy.html %}

### Running Benchmark with Docker

To run OpenSearch Benchmark, use `docker run` to launch a container. OpenSearch Benchmark subcommands are passed as arguments when you start the container. OpenSearch Benchmark then processes the command and stops the container after the requested operation completes.

For example, the following command prints the help text for OpenSearch Benchmark to the command line and then stops the container:

```bash
docker run opensearchproject/opensearch-benchmark -h
```
{% include copy.html %}

### Establishing volume persistence in a Docker container

To make sure your benchmark data and logs persist after your Docker container stops, specify a Docker volume to mount to the image when you work with OpenSearch Benchmark.

Use the `-v` option to specify a local directory to mount and a directory in the container where the volume is attached.

The following example command creates a volume in a user's home directory, mounts the volume to the OpenSearch Benchmark container at `/opensearch-benchmark/.benchmark`, and then runs a test benchmark using the geonames workload. Some client options are also specified:

```bash
docker run -v $HOME/benchmarks:/opensearch-benchmark/.benchmark opensearchproject/opensearch-benchmark execute_test --target-hosts https://198.51.100.25:9200 --pipeline benchmark-only --workload geonames --client-options basic_auth_user:admin,basic_auth_password:admin,verify_certs:false --test-mode
```
{% include copy.html %}

See [Configuring OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/configuring-benchmark/) to learn more about the files and subdirectories located in `/opensearch-benchmark/.benchmark`.

## Next steps

- [Configuring OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/configuring-benchmark/)
@ -73,6 +73,9 @@ collections:

  clients:
    permalink: /:collection/:path/
    output: true
  benchmark:
    permalink: /:collection/:path/
    output: true
  data-prepper:
    permalink: /:collection/:path/
    output: true
@ -139,6 +142,9 @@ just_the_docs:

  clients:
    name: Clients
    nav_fold: true
  benchmark:
    name: OpenSearch Benchmark
    nav_fold: true
  data-prepper:
    name: Data Prepper
    nav_fold: true