Data Prepper ToC Update (#2514)

* Creating PR with first file.

Signed-off-by: carolxob <carolxob@amazon.com>

* Adding newly created files to PR.

Signed-off-by: carolxob <carolxob@amazon.com>

* Reorganized files and added appropriate metadata to map ToC correctly.

Signed-off-by: carolxob <carolxob@amazon.com>

* Moved Authoring pipelines page.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor ToC updates.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor ToC updates to Sources section for Data Prepper.

Signed-off-by: carolxob <carolxob@amazon.com>

* Updated Buffers section under Data Prepper.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor update to otelmetricssource.

Signed-off-by: carolxob <carolxob@amazon.com>

* Restructured ToC in Processors section for Data Prepper.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor filename change.

Signed-off-by: carolxob <carolxob@amazon.com>

* Adjustments to metadata in ToC.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor edit.

Signed-off-by: carolxob <carolxob@amazon.com>

* Fixed nav order in metadata.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor edit.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor update to metadata for ToC.

Signed-off-by: carolxob <carolxob@amazon.com>

* Adjustments to ToC order.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor adjustments to ToC metadata.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor adjustments to Sinks section.

Signed-off-by: carolxob <carolxob@amazon.com>

* Adjustments to high-level ToC.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor adjustment to Pipelines.md.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor update.

Signed-off-by: carolxob <carolxob@amazon.com>

* Slight reorganization. Removed two placeholder pages for now.

Signed-off-by: carolxob <carolxob@amazon.com>

* Removed a page and replaced with pipelines content.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor changes/additions to content for placeholder pages.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor update to page link.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor adjustments to ToC metadata.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor edits.

Signed-off-by: carolxob <carolxob@amazon.com>

* Removed /clients from redirects to correct nav order.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor edits.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor adjustments to ToC metadata.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor adjustments.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor adjustment to metadata.

Signed-off-by: carolxob <carolxob@amazon.com>

* TOC link fixes

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Changed page name.

Signed-off-by: carolxob <carolxob@amazon.com>

* Corrected references to Peer Forwarder.

Signed-off-by: carolxob <carolxob@amazon.com>

* Renamed Data Prepper folder.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor updates to phrasing and capitalization.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor phrasing update.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor phrasing update.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor change.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor change to change S3 Source to S3Source.

Signed-off-by: carolxob <carolxob@amazon.com>

* Updated references to peer forwarder and changed capitalization.

Signed-off-by: carolxob <carolxob@amazon.com>

* Updated capitalization for peer forwarder.

Signed-off-by: carolxob <carolxob@amazon.com>

* Made edits based on doc review feedback.

Signed-off-by: carolxob <carolxob@amazon.com>

* Update to one word.

Signed-off-by: carolxob <carolxob@amazon.com>

---------

Signed-off-by: carolxob <carolxob@amazon.com>
Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
Co-authored-by: Naarcha-AWS <naarcha@amazon.com>
This commit is contained in:
Caroline 2023-02-03 15:06:10 -07:00 committed by GitHub
parent d7e8cdedd1
commit 0249991f76
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
49 changed files with 105 additions and 84 deletions

View File

@ -0,0 +1,12 @@
---
layout: default
title: Common use cases
has_children: true
nav_order: 15
redirect_from:
- /data-prepper/common-use-cases/
---
# Common use cases
You can use Data Prepper for several different purposes, including trace analytics, log analytics, Amazon S3 log analytics, and metrics ingestion.

View File

@ -1,6 +1,7 @@
---
layout: default
title: Log analytics
parent: Common use cases
nav_order: 15
---

View File

@ -3,8 +3,6 @@ layout: default
title: Getting started
nav_order: 5
redirect_from:
- /clients/data-prepper/getting-started/
- /data-prepper/get-started/
- /clients/data-prepper/get-started/
---
@ -44,7 +42,7 @@ You will configure two files:
Depending on your use case, we have a few different guides to configuring Data Prepper.
* [Trace Analytics](https://github.com/opensearch-project/data-prepper/blob/main/docs/trace_analytics.md)
* [Log Ingestion](https://github.com/opensearch-project/data-prepper/blob/main/docs/log_analytics.md): Learn how to set up Data Prepper for log observability.
* [Log Analytics]({{site.url}}{{site.baseurl}}/data-prepper/common-use-cases/log-analytics/): Learn how to set up Data Prepper for log observability.
* [Simple Pipeline](https://github.com/opensearch-project/data-prepper/blob/main/docs/simple_pipelines.md): Learn the basics of Data Prepper pipelines with some simple configurations.
## 3. Defining a pipeline
@ -71,7 +69,7 @@ docker run --name data-prepper \
opensearchproject/data-prepper:latest
```
This sample pipeline configuration above demonstrates a simple pipeline with a source (`random`) sending data to a sink (`stdout`). For more examples and details about more advanced pipeline configurations, see [Pipelines]({{site.url}}{{site.baseurl}}/clients/data-prepper/pipelines).
The preceding example pipeline configuration demonstrates a simple pipeline with a source (`random`) sending data to a sink (`stdout`). For further detailed examples of more advanced pipeline configurations, see [Pipelines]({{site.url}}{{site.baseurl}}/clients/data-prepper/pipelines/).
After starting Data Prepper, you should see log output and some UUIDs after a few seconds:
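Such a minimal pipeline can be sketched in `pipelines.yaml` as follows. This is an illustrative sketch, not the exact sample from the guide; the `workers` and `delay` values shown are assumptions.

```yml
# Minimal sketch of a pipelines.yaml: a random source feeding a stdout sink.
simple-sample-pipeline:
  workers: 2          # illustrative worker count
  delay: "5000"       # illustrative delay between batches, in milliseconds
  source:
    random:
  sink:
    - stdout:
```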

View File

@ -5,8 +5,6 @@ nav_order: 1
has_children: false
has_toc: false
redirect_from:
- /clients/tools/data-prepper/
- /clients/data-prepper/
- /clients/data-prepper/index/
---

View File

@ -1,10 +1,8 @@
---
layout: default
title: Configuring Data Prepper
has_children: true
nav_order: 100
redirect_from:
- /clients/data-prepper/data-prepper-reference/
parent: Managing Data Prepper
nav_order: 10
---
# Configuring Data Prepper
@ -31,15 +29,15 @@ peer_forwarder | No | Object | Peer forwarder configurations. See [Peer forwarde
The following section details various configuration options for peer forwarder.
#### General options for peer forwarder
#### General options for peer forwarding
Option | Required | Type | Description
:--- | :--- | :--- | :---
port | No | Integer | The port number peer forwarder server is running on. Valid options are between 0 and 65535. Defaults is 4994.
request_timeout | No | Integer | Request timeout in milliseconds for peer forwarder HTTP server. Default is 10000.
server_thread_count | No | Integer | Number of threads used by peer forwarder server. Default is 200.
client_thread_count | No | Integer | Number of threads used by peer forwarder client. Default is 200.
max_connection_count | No | Integer | Maximum number of open connections for peer forwarder server. Default is 500.
port | No | Integer | The peer forwarding server port. Valid options are between 0 and 65535. Default is 4994.
request_timeout | No | Integer | Request timeout for the peer forwarder HTTP server in milliseconds. Default is 10000.
server_thread_count | No | Integer | Number of threads used by the peer forwarder server. Default is 200.
client_thread_count | No | Integer | Number of threads used by the peer forwarder client. Default is 200.
max_connection_count | No | Integer | Maximum number of open connections for the peer forwarder server. Default is 500.
max_pending_requests | No | Integer | Maximum number of allowed tasks in ScheduledThreadPool work queue. Default is 1024.
discovery_mode | No | String | Peer discovery mode to use. Valid options are `local_node`, `static`, `dns`, or `aws_cloud_map`. Defaults to `local_node`, which processes events locally.
static_endpoints | Conditionally | List | A list containing endpoints of all Data Prepper instances. Required if `discovery_mode` is set to static.
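As a reference point, the options in the preceding table might be combined in `data-prepper-config.yaml` as in the following sketch. The endpoint hostnames are placeholders, and only the options shown here are set; omitted options fall back to their defaults.

```yml
# Illustrative peer forwarder configuration in data-prepper-config.yaml.
peer_forwarder:
  port: 4994
  request_timeout: 10000
  discovery_mode: static
  # Placeholder hostnames; required because discovery_mode is static.
  static_endpoints: ["data-prepper-node-1", "data-prepper-node-2"]
```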

View File

@ -1,7 +1,8 @@
---
layout: default
title: Configuring Log4j
nav_order: 25
parent: Managing Data Prepper
nav_order: 20
---
# Configuring Log4j

View File

@ -1,7 +1,8 @@
---
layout: default
title: Core APIs
nav_order: 20
parent: Managing Data Prepper
nav_order: 15
---
# Core APIs

View File

@ -0,0 +1,10 @@
---
layout: default
title: Managing Data Prepper
has_children: true
nav_order: 20
---
# Managing Data Prepper
You can perform administrator functions for Data Prepper, including system configuration, interacting with core APIs, Log4j configuration, and monitoring. You can set up peer forwarding to coordinate multiple Data Prepper nodes when using stateful aggregation.

View File

@ -1,7 +1,8 @@
---
layout: default
title: Monitoring
nav_order: 33
parent: Administrating Data Prepper
nav_order: 25
---
# Monitoring Data Prepper with metrics

View File

@ -1,7 +1,7 @@
---
layout: default
title: Migrating from Open Distro
nav_order: 35
nav_order: 30
---
# Migrating from Open Distro

View File

@ -1,7 +1,7 @@
---
layout: default
title: Migrating from Logstash
nav_order: 30
nav_order: 25
redirect_from:
- /data-prepper/configure-logstash-data-prepper/
---
@ -10,7 +10,7 @@ redirect_from:
You can run Data Prepper with a Logstash configuration.
As mentioned in the [Getting started]({{site.url}}{{site.baseurl}}/data-prepper/get-started/) guide, you'll need to configure Data Prepper with a pipeline using a `pipelines.yaml` file.
As mentioned in [Getting started with Data Prepper]({{site.url}}{{site.baseurl}}/data-prepper/getting-started/), you'll need to configure Data Prepper with a pipeline using a `pipelines.yaml` file.
Alternatively, if you have a Logstash configuration, you can use `logstash.conf` to configure Data Prepper instead of `pipelines.yaml`.
@ -28,7 +28,7 @@ As of the Data Prepper 1.2 release, the following plugins from the Logstash conf
## Running Data Prepper with a Logstash configuration
1. To install Data Prepper's Docker image, see the Installing Data Prepper in [Get Started]({{site.url}}{{site.baseurl}}/data-prepper/getting-started#1-installing-data-prepper).
1. To install Data Prepper's Docker image, see Installing Data Prepper in [Getting Started]({{site.url}}{{site.baseurl}}/data-prepper/getting-started#1-installing-data-prepper).
2. Run the Docker image installed in Step 1 by supplying your `logstash.conf` configuration.

View File

@ -2,7 +2,7 @@
layout: default
title: Bounded blocking
parent: Buffers
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 50
---

View File

@ -1,9 +1,9 @@
---
layout: default
title: Buffers
parent: Configuring Data Prepper
parent: Pipelines
has_children: true
nav_order: 50
nav_order: 20
---
# Buffers

View File

@ -2,7 +2,7 @@
layout: default
title: add_entries
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: aggregate
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: copy_values
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: csv
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: date
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: delete_entries
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: drop_events
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---
@ -14,7 +14,7 @@ Drops all the events that are passed into this processor.
Option | Required | Type | Description
:--- | :--- | :--- | :---
drop_when | Yes | String | Accepts a Data Prepper Expression string following the [Data Prepper Expression Syntax](https://github.com/opensearch-project/data-prepper/blob/main/docs/expression_syntax.md). Configuring `drop_events` with `drop_when: true` drops all the events received.
drop_when | Yes | String | Accepts a Data Prepper Expression string following the [Data Prepper Expression Syntax]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/). Configuring `drop_events` with `drop_when: true` drops all the events received.
handle_failed_events | No | Enum | Specifies how exceptions are handled when an exception occurs while evaluating an event. Default value is `drop`, which drops the event so it doesn't get sent to OpenSearch. Available options are `drop`, `drop_silently`, `skip`, `skip_silently`. For more information, see [handle_failed_events](https://github.com/opensearch-project/data-prepper/tree/main/data-prepper-plugins/drop-events-processor#handle_failed_events).
<!---## Configuration

View File

@ -2,7 +2,7 @@
layout: default
title: grok
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: json
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: key_value
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: lowercase_string
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: otel_trace_raw
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,8 +2,8 @@
layout: default
title: Processors
has_children: true
parent: Configuring Data Prepper
nav_order: 100
parent: Pipelines
nav_order: 25
---
# Processors
@ -13,10 +13,6 @@ Processors perform some action on your data: filter, transform, enrich, etc.
Prior to Data Prepper 1.3, Processors were named Preppers. Starting in Data Prepper 1.3, the term Prepper is deprecated in favor of Processor. Data Prepper will continue to support the term "Prepper" until 2.0, where it will be removed.
{: .note }
## copy_values
Copy values within an event. `copy_values` is part of [mutate event](https://github.com/opensearch-project/data-prepper/tree/main/data-prepper-plugins/mutate-event-processors#mutate-event-processors) processors.

View File

@ -2,7 +2,7 @@
layout: default
title: rename_keys
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 44
---

View File

@ -2,7 +2,7 @@
layout: default
title: routes
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: service_map_stateful
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: service_map_stateful
parent: sinks
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: split_string
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: string_converter
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: substitute_string
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: trim_string
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: uppercase_string
parent: Processors
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: file sink
parent: Sinks
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: OpenSearch sink
parent: Sinks
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: Pipeline sink
parent: Sinks
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -1,9 +1,9 @@
---
layout: default
title: Sinks
parent: Configuring Data Prepper
parent: Pipelines
has_children: true
nav_order: 44
nav_order: 30
---
# Sinks

View File

@ -2,7 +2,7 @@
layout: default
title: stdout sink
parent: Sinks
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 45
---

View File

@ -2,7 +2,7 @@
layout: default
title: http_source
parent: Sources
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 5
---

View File

@ -2,7 +2,7 @@
layout: default
title: otel_metrics_source
parent: Sources
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 10
---

View File

@ -2,7 +2,7 @@
layout: default
title: otel_trace_source source
parent: Sources
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 15
---

View File

@ -2,7 +2,7 @@
layout: default
title: s3
parent: Sources
grand_parent: Configuring Data Prepper
grand_parent: Pipelines
nav_order: 20
---
@ -20,14 +20,14 @@ codec | Yes | Codec | The codec to apply. Must be `newline`, `json`, or `csv`.
sqs | Yes | sqs | The [Amazon Simple Queue Service](https://aws.amazon.com/sqs/) (Amazon SQS) configuration. See [sqs](#sqs) for details.
aws | Yes | aws | The AWS configuration. See [aws](#aws) for details.
on_error | No | String | Determines how to handle errors in Amazon SQS. Can be either `retain_messages` or `delete_messages`. If `retain_messages`, then Data Prepper will leave the message in the SQS queue and try again. This is recommended for dead-letter queues. If `delete_messages`, then Data Prepper will delete failed messages. Default is `retain_messages`.
buffer_timeout | No | Duration | The timeout for writing events to the Data Prepper buffer. Any events that the S3 Source cannot write to the buffer in this time will be discarded. Default is 10 seconds.
buffer_timeout | No | Duration | The timeout for writing events to the Data Prepper buffer. Any events that the S3Source cannot write to the buffer in this time will be discarded. Default is 10 seconds.
records_to_accumulate | No | Integer | The number of messages that accumulate before writing to the buffer. Default is 100.
metadata_root_key | No | String | Base key for adding S3 metadata to each Event. The metadata includes the key and bucket for each S3 object. Defaults to `s3/`.
disable_bucket_ownership_validation | No | Boolean | If `true`, then the S3 Source will not attempt to validate that the bucket is owned by the expected account. The only expected account is the same account that owns the SQS queue. Defaults to `false`.
disable_bucket_ownership_validation | No | Boolean | If `true`, the S3Source will not attempt to validate that the bucket is owned by the expected account. The expected account is the same account that owns the SQS queue. Defaults to `false`.
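Putting a few of these options together, an `s3` source entry in a pipeline might look like the following sketch. The queue URL and region are placeholders, and the values shown for `on_error` and `buffer_timeout` simply restate the documented defaults.

```yml
# Illustrative s3 source configuration; queue URL and region are placeholders.
source:
  s3:
    codec:
      newline:
    sqs:
      queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
    aws:
      region: "us-east-1"
    on_error: "retain_messages"   # default: retry failed messages
    buffer_timeout: "10s"         # default buffer write timeout
```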
## sqs
The following are configure usage of Amazon SQS in the S3 Source plugin.
The following parameters allow you to configure usage for Amazon SQS in the S3Source plugin.
Option | Required | Type | Description
:--- | :--- | :--- | :---

View File

@ -1,9 +1,9 @@
---
layout: default
title: Sources
parent: Configuring Data Prepper
parent: Pipelines
has_children: true
nav_order: 42
nav_order: 15
---
# Sources

View File

@ -1,7 +1,8 @@
---
layout: default
title: Expression syntax
nav_order: 40
parent: Pipelines
nav_order: 12
---
# Expression syntax

View File

@ -1,8 +1,8 @@
---
layout: default
title: Pipeline options
parent: Configuring Data Prepper
nav_order: 41
parent: Pipelines
nav_order: 11
---
# Pipeline options

View File

@ -1,8 +1,10 @@
---
layout: default
title: Pipelines
has_children: true
nav_order: 10
redirect_from:
- /data-prepper/pipelines/
- /clients/data-prepper/pipelines/
---
@ -77,7 +79,12 @@ conditional-routing-sample-pipeline:
## Examples
This section provides some pipeline examples that you can use to start creating your own pipelines. For more information, see [Data Prepper configuration reference]({{site.url}}{{site.baseurl}}/clients/data-prepper/data-prepper-reference/) guide.
This section provides some pipeline examples that you can use to start creating your own pipelines. For more pipeline configurations, select from the following options for each component:
- [Buffers]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/buffers/buffers/)
- [Processors]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/processors/processors/)
- [Sinks]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/sinks/sinks/)
- [Sources]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/sources/sources/)
The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
@ -213,10 +220,7 @@ metrics-pipeline:
### S3 log ingestion pipeline
The following example demonstrates how to use the S3 Source and Grok Processor plugins to process unstructured log data
from [Amazon Simple Storage Service](https://aws.amazon.com/s3/) (Amazon S3). This example uses Application Load
Balancer logs. As the Application Load Balancer writes logs to S3, S3 creates notifications in Amazon SQS. Data Prepper
reads those notifications and reads the S3 objects to get the log data and process it.
The following example demonstrates how to use the S3Source and Grok Processor plugins to process unstructured log data from [Amazon Simple Storage Service](https://aws.amazon.com/s3/) (Amazon S3). This example uses application load balancer logs. As the application load balancer writes logs to S3, S3 creates notifications in Amazon SQS. Data Prepper monitors those notifications and reads the S3 objects to get the log data and process it.
```yml
log-pipeline:
@ -293,13 +297,13 @@ docker run --name data-prepper \
opensearchproject/data-prepper:latest
```
## Configure the peer forwarder
## Configure peer forwarder
Data Prepper provides an HTTP service to forward Events between Data Prepper nodes for aggregation. This is required for operating Data Prepper in a clustered deployment. Currently, peer forwarding is supported in the `aggregate`, `service_map_stateful`, and `otel_trace_raw` processors. Peer forwarder groups events based on the identification keys provided by the processors. For `service_map_stateful` and `otel_trace_raw`, the key is `traceId` by default and cannot be configured. For the `aggregate` processor, it is configurable using the `identification_keys` option.
Peer forwarder supports peer discovery through one of three options: a static list, a DNS record lookup , or AWS Cloud Map. This option can be configured using `discovery_mode` option. Peer forwarder also supports SSL for verification and encrytion, and mTLS for mutual authentication in peer forwarding service.
Peer forwarder supports peer discovery through one of three options: a static list, a DNS record lookup, or AWS Cloud Map. Peer discovery can be configured using the `discovery_mode` option. Peer forwarder also supports SSL for verification and encryption, and mTLS for mutual authentication in a peer forwarding service.
To configure the peer forwarder, add configuration options to `data-prepper-config.yaml` mentioned in the previous [Configure the Data Prepper server](#configure-the-data-prepper-server) section:
To configure peer forwarder, add configuration options to `data-prepper-config.yaml` mentioned in the [Configure the Data Prepper server](#configure-the-data-prepper-server) section:
```yml
peer_forwarder:

View File

@ -10,7 +10,7 @@ Log ingestion provides a way to transform unstructured log data into structured
## Get started with log ingestion
OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.url}}{{site.baseurl}}/clients/data-prepper/index/), [OpenSearch]({{site.url}}{{site.baseurl}}/) and [OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/)---that fit into the OpenSearch ecosystem. The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.url}}{{site.baseurl}}/clients/data-prepper/index/), [OpenSearch]({{site.url}}{{site.baseurl}}/quickstart/) and [OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/dashboards/index/)---that fit into the OpenSearch ecosystem. The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
### Basic flow of data