
---
layout: default
title: Segment replication
nav_order: 70
has_children: true
parent: Availability and Recovery
redirect_from:
  - /opensearch/segment-replication/
  - /opensearch/segment-replication/index/
---

# Segment replication

With segment replication, segment files are copied across shards instead of documents being indexed on each shard copy. This improves indexing throughput and lowers resource utilization at the expense of increased network utilization.

On refresh, the primary shard publishes a checkpoint to its replica shards, and a new segment replication event is triggered on the replicas. This happens:

- When a new replica shard is added to a cluster.
- When there are segment file changes on a primary shard refresh (see the example after this list).
- During peer recovery, such as replica shard recovery and shard relocation (explicit allocation using the `move` allocation command or automatic shard rebalancing).
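
For example, because a refresh publishes a new checkpoint, manually refreshing an index (here `my-index1`, matching the configuration example later on this page) triggers a replication event on its replicas whenever new segment files have been created:

```json
POST /my-index1/_refresh
```
{% include copy-curl.html %}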

Segment replication is the first in a series of features designed to decouple reads and writes in order to lower compute costs.

## Use cases

- Users who have high write loads but do not have high search requirements and are comfortable with longer refresh times.
- Users with very high loads who want to add new nodes, because documents do not need to be indexed on every node when a new node is added to the cluster.
- OpenSearch cluster deployments with low replica counts, such as those used for log analytics.

## Segment replication configuration

To set segment replication as the replication strategy, create an index with `replication.type` set to `SEGMENT`:

```json
PUT /my-index1
{
  "settings": {
    "index": {
      "replication.type": "SEGMENT"
    }
  }
}
```

{% include copy-curl.html %}
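
To confirm that the new index uses segment replication, you can retrieve its settings and check the `index.replication.type` value:

```json
GET /my-index1/_settings
```
{% include copy-curl.html %}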

In segment replication, the primary shard usually generates more network traffic than the replicas because it copies segment files to the replicas. Thus, it's beneficial to distribute primary shards equally between the nodes. To ensure balanced primary shard distribution, set the dynamic `cluster.routing.allocation.balance.prefer_primary` setting to `true`. For more information, see Cluster settings.

Segment replication currently does not support the `wait_for` value in the `refresh` query parameter.
{: .important }

For the best performance, we recommend enabling both of the following settings:

1. Segment replication backpressure.
2. Balanced primary shard allocation, as shown in the following request:
```bash
curl -X PUT "$host/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.balance.prefer_primary": true,
    "segrep.pressure.enabled": true
  }
}
'
```
{% include copy-curl.html %}
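
You can verify that both settings were applied by retrieving the persistent cluster settings:

```json
GET /_cluster/settings
```
{% include copy-curl.html %}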

## Comparing replication benchmarks

During initial benchmarks, segment replication users reported 40% higher throughput than when using document replication with the same cluster setup.

The following benchmarks were collected with OpenSearch Benchmark using the `nyc_taxis` dataset.

The test run was performed on a 10-node (m5.xlarge) cluster with 10 shards and 5 replicas. Each shard was about 3.2 GB in size.
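
As a rough sketch of a comparable run (assuming OpenSearch Benchmark is installed and using a placeholder cluster endpoint), the workload can be executed against an existing cluster as follows:

```bash
opensearch-benchmark execute-test \
  --workload=nyc_taxis \
  --target-hosts=https://benchmark-cluster.example.com:9200 \
  --pipeline=benchmark-only
```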

The benchmarking results are listed in the following table.

| Metric | Percentile | Document Replication | Segment Replication | Percent difference |
| :--- | :--- | :--- | :--- | :--- |
| Test execution time (minutes) | | 118.00 | 98.00 | 27% |
| Index Throughput (number of requests per second) | p0 | 71917.20 | 105740.70 | 47.03% |
| | p50 | 77392.90 | 110988.70 | 43.41% |
| | p100 | 93032.90 | 123131.50 | 32.35% |
| Query Throughput (number of requests per second) | p0 | 1.748 | 1.744 | -0.23% |
| | p50 | 1.754 | 1.753 | 0% |
| | p100 | 1.769 | 1.768 | -0.06% |
| CPU (%) | p50 | 37.19 | 25.579 | -31.22% |
| | p90 | 94.00 | 88.00 | -6.38% |
| | p99 | 100 | 100 | 0% |
| | p100 | 100.00 | 100.00 | 0% |
| Memory (%) | p50 | 30 | 24.241 | -19.20% |
| | p90 | 61.00 | 55.00 | -9.84% |
| | p99 | 72 | 62 | -13.89% |
| | p100 | 80.00 | 67.00 | -16.25% |
| Index Latency (ms) | p50 | 803 | 647.90 | -19.32% |
| | p90 | 1215.50 | 908.60 | -25.25% |
| | p99 | 9738.70 | 1565 | -83.93% |
| | p100 | 21559.60 | 2747.70 | -87.26% |
| Query Latency (ms) | p50 | 36.209 | 37.799 | 4.39% |
| | p90 | 42.971 | 60.823 | 41.54% |
| | p99 | 50.604 | 70.072 | 38.47% |
| | p100 | 52.883 | 73.911 | 39.76% |

Your results may vary based on the cluster topology, hardware used, shard count, and merge settings.
{: .note }

## Other considerations

When using segment replication, consider the following:

1. Enabling segment replication for an existing index requires reindexing (see the sketch after this list).
2. Rolling upgrades are not currently supported. Full cluster restarts are required when upgrading indexes using segment replication. See Issue 3881.
3. Cross-cluster replication does not currently use segment replication to copy between clusters.
4. Segment replication increases network congestion on primary shards. See Issue - Optimize network bandwidth on primary shards.
5. Integration with remote-backed storage as the source of replication is currently unsupported.
6. Read-after-write guarantees: The `wait_until` refresh policy is not compatible with segment replication. If you use the `wait_until` refresh policy while ingesting documents, you'll get a response only after the primary shard has refreshed and made those documents searchable. Replica shards will respond only after having written to their local translog. We are exploring other mechanisms for providing read-after-write guarantees. For more information, see the corresponding GitHub issue.
7. System indexes will continue to use document replication internally until read-after-write guarantees are available. In this case, document replication does not hinder the overall performance because there are few system indexes.
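
As a minimal sketch of the reindexing step mentioned in the first item (the index names are placeholders), you can create a new index with segment replication enabled and then reindex the existing data into it:

```json
PUT /my-index1-segrep
{
  "settings": {
    "index": {
      "replication.type": "SEGMENT"
    }
  }
}
```
{% include copy-curl.html %}

```json
POST /_reindex
{
  "source": { "index": "my-index1" },
  "dest": { "index": "my-index1-segrep" }
}
```
{% include copy-curl.html %}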

## Next steps

  1. Track future enhancements to segment replication.
  2. Read this blog post about segment replication.