Add distributed workload test tutorial (#5629)

* Add distributed workload test tutorial
Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
* Apply suggestions from code review
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Update distributed-load.md
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Heather Halter <HDHALTER@AMAZON.COM>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Nathan Bower <nbower@amazon.com>
Co-authored-by: Heather Halter <HDHALTER@AMAZON.COM>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Nathan Bower <nbower@amazon.com>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Nathan Bower <nbower@amazon.com>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
* Update distributed-load.md
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
---------
Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Co-authored-by: Heather Halter <HDHALTER@AMAZON.COM>
Co-authored-by: Nathan Bower <nbower@amazon.com>

---
layout: default
title: Running distributed loads
nav_order: 10
parent: User guide
---

# Running distributed loads

OpenSearch Benchmark loads always run on the same machine from which the benchmark was started. However, you can use multiple load driver machines to generate additional benchmark testing load, which is especially useful when testing large clusters. This tutorial describes how to distribute the benchmark load across multiple machines that together test a single cluster.

## System architecture

This tutorial uses a three-node architecture in which each node runs on its own [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/?nc2=h_ql_doc_ec2) instance:

- **Node 1**: Acts as the _coordinator node_ and handles distribution and communication between the other two nodes.
- **Node 2** and **Node 3**: Act as _worker nodes_ and generate the load for the benchmark test.

OpenSearch Benchmark must be installed on all nodes. For installation instructions, see [Installing OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/user-guide/installing-benchmark/).
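
If OpenSearch Benchmark is not already installed on a node, a minimal setup might look like the following sketch, which assumes a Linux host with Python and `pip` available. See the installation guide linked above for the full prerequisites and options:

```
# Install OpenSearch Benchmark from PyPI
pip install opensearch-benchmark

# Confirm that the CLI is available on the node
opensearch-benchmark --help
```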

Make note of each node's IP address. This tutorial uses the following IP addresses:

- **Node 1 -- Coordinator node**: 192.0.1.0
- **Node 2 -- Worker node**: 198.52.100.0
- **Node 3 -- Worker node**: 198.53.100.0
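
To look up a node's private IP address, you can run a standard Linux command such as the following on each EC2 instance. This is a generic operating system command, not part of OpenSearch Benchmark:

```
# Print the IP addresses assigned to this host
hostname -I
```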

## Step 1: Enable node communication

Make sure that the nodes can communicate with each other. In the AWS Management Console, complete the following steps for each node:

1. Go to the EC2 host for the node.
2. Select **Security**, and then select the security group associated with the node.
3. Use **Add inbound rules** to open traffic to the node based on the port range and traffic type used by your cluster, as shown in the example following this list.
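
If you prefer the AWS CLI to the console, you can also add an inbound rule with `aws ec2 authorize-security-group-ingress`. The following is a sketch only: the security group ID, port range, and CIDR block are placeholder values, so substitute the values that apply to your environment:

```
# Allow inbound TCP traffic from another benchmark node.
# The security group ID and port range below are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 1900-2000 \
  --cidr 198.52.100.0/32
```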

## Step 2: Run daemon processes on each node

Start the OpenSearch Benchmark daemon on each node, using `--node-ip` to specify the IP address of the node itself and `--coordinator-ip` to connect each node to the coordinator node.

For **Node 1**, the following command identifies the node as the coordinator node:

```
opensearch-benchmarkd start --node-ip=192.0.1.0 --coordinator-ip=192.0.1.0
```

The following commands enable **Node 2** and **Node 3** to listen to the coordinator node for load generation instructions.

**Node 2**

```
opensearch-benchmarkd start --node-ip=198.52.100.0 --coordinator-ip=192.0.1.0
```

**Node 3**

```
opensearch-benchmarkd start --node-ip=198.53.100.0 --coordinator-ip=192.0.1.0
```
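
As a quick sanity check, you can confirm that a daemon process is running on each node before starting the test. The following is a generic process check, not an OpenSearch Benchmark command:

```
# List any running processes whose command line mentions opensearch-benchmark
ps aux | grep "[o]pensearch-benchmark"
```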

With OpenSearch Benchmark running on all three nodes and the worker nodes set to listen to the coordinator node, you can now run the benchmark test.

## Step 3: Run the benchmark test

On **Node 1**, run a benchmark test with `--worker-ips` set to the IP addresses of your worker nodes, as shown in the following example:

```
opensearch-benchmark execute_test --pipeline=benchmark-only --workload=eventdata --worker-ips=198.52.100.0,198.53.100.0 --target-hosts=<DOMAIN_ENDPOINT> --client-options=<STANDARD_CLIENT_OPTIONS> --kill-running-processes
```
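
For illustration only, a filled-in invocation might look like the following. The endpoint and client options shown here are hypothetical placeholders, not values from this tutorial; replace them with the endpoint and client options for your own cluster:

```
# Example values only: replace the endpoint and credentials with your own.
opensearch-benchmark execute_test \
  --pipeline=benchmark-only \
  --workload=eventdata \
  --worker-ips=198.52.100.0,198.53.100.0 \
  --target-hosts=https://search-example-domain.us-east-1.es.amazonaws.com:443 \
  --client-options="basic_auth_user:admin,basic_auth_password:admin,use_ssl:true,verify_certs:false" \
  --kill-running-processes
```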

After the test completes, the logs generated by the test appear on your worker nodes.