MAINT: update docs
Signed-off-by: Chen <19492223+chenqi0805@users.noreply.github.com>
@@ -38,7 +38,7 @@ Sources define where your data comes from.
Source for the OpenTelemetry Collector.

Option | Required | Type | Description
:--- | :--- | :--- | :---
port | No | Integer | The port that the OTel trace source runs on. Default is `21890`.
request_timeout | No | Integer | The request timeout in milliseconds. Default is `10_000`.
health_check_service | No | Boolean | Enables a gRPC health check service under `grpc.health.v1/Health/Check`. Default is `false`.
@@ -53,6 +53,7 @@ useAcmCertForSSL | No | Boolean | Whether to enable TLS/SSL using certificate an
acmCertificateArn | Conditionally | String | Represents the ACM certificate ARN. The ACM certificate takes precedence over S3 or local file system certificates. Required if `useAcmCertForSSL` is set to `true`.
awsRegion | Conditionally | String | Represents the AWS region used for ACM or S3. Required if `useAcmCertForSSL` is set to `true`, or if `sslKeyCertChainFile` and `sslKeyFile` are AWS S3 paths.
authentication | No | Object | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication, define the `http_basic` plugin with a `username` and `password`. To provide custom authentication, use or create a plugin that implements [GrpcAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/GrpcAuthenticationProvider.java).
record_type | No | String | A string that represents the record data type written to the buffer plugin. Valid values are `otlp` and `event`. Default is `otlp`. <ul><li>`otlp`: otel-trace-source writes each incoming `ExportTraceServiceRequest` to the buffer as a single record.</li><li>`event`: otel-trace-source decodes each incoming `ExportTraceServiceRequest` into a collection of Data Prepper internal spans, which serve as buffer items. To achieve better performance in this mode, set the buffer capacity proportional to the estimated number of spans in the incoming request payload.</li></ul> A configuration sketch follows this table.
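
As a hedged sketch combining these options, an `otel_trace_source` configuration might look like the following (the pipeline name and the basic-authentication credentials are hypothetical placeholders):

```yml
entry-pipeline:
  source:
    otel_trace_source:
      ssl: false                   # TLS options omitted for brevity
      port: 21890                  # default port
      request_timeout: 10000       # milliseconds
      health_check_service: true   # exposes grpc.health.v1/Health/Check
      record_type: event           # write decoded spans rather than raw requests
      authentication:
        http_basic:
          username: my-user        # hypothetical credentials
          password: my-password
```
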
### http_source
@@ -116,13 +117,19 @@ Prior to Data Prepper 1.3, Processors were named Preppers. Starting in Data Prep
### otel_trace_raw_prepper

Converts OpenTelemetry data to OpenSearch-compatible JSON documents and fills in trace-group-related fields in those JSON documents. It requires `record_type` to be set to `otlp` in `otel_trace_source`.

Option | Required | Type | Description
:--- | :--- | :--- | :---
root_span_flush_delay | No | Integer | Represents the time interval in seconds to flush all the root spans in the processor together with their descendants. Default is 30.
trace_flush_interval | No | Integer | Represents the time interval in seconds to flush all the descendant spans without any root span. Default is 180.
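
As a sketch, assuming processor options nest beneath the processor name as in other Data Prepper plugins (the pipeline name is hypothetical), the flush intervals could be tuned like this:

```yml
raw-pipeline:
  processor:
    - otel_trace_raw_prepper:
        root_span_flush_delay: 30   # seconds; flush root spans plus their descendants
        trace_flush_interval: 180   # seconds; flush descendant spans without a root span
```
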
### otel_trace_raw

This processor is the Data Prepper event record type compatible version of `otel_trace_raw_prepper`. It fills trace-group-related fields into all incoming Data Prepper span records and requires `record_type` to be set to `event` in `otel_trace_source`.

Option | Required | Type | Description
:--- | :--- | :--- | :---
trace_flush_interval | No | Integer | Represents the time interval in seconds to flush all the descendant spans without any root span. Default is 180.
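
A corresponding sketch for the event-mode processor, under the same nesting assumption (pipeline name hypothetical):

```yml
raw-pipeline:
  processor:
    - otel_trace_raw:
        trace_flush_interval: 180   # seconds; flush descendant spans without a root span
```
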
### service_map_stateful
@@ -75,6 +75,8 @@ This example uses weak security. We strongly recommend securing all plugins whic

The following example demonstrates how to build a pipeline that supports the [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugin/trace/ta-dashboards/). This pipeline takes data from the OpenTelemetry Collector and uses two other pipelines as sinks. These two separate pipelines index trace documents and service map documents for the dashboard plugin.

#### Classic

```yml
entry-pipeline:
  delay: "100"
@@ -115,6 +117,65 @@ service-map-pipeline:
        trace_analytics_service_map: true
```

#### Event record type

Starting with Data Prepper 1.4, the event record type is supported in the trace analytics pipeline source, buffer, and processors.

```yml
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
      record_type: event
  buffer:
    bounded_blocking:
      buffer_size: 10240
      batch_size: 160
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  buffer:
    bounded_blocking:
      buffer_size: 10240
      batch_size: 160
  processor:
    - otel_trace_raw:
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        insecure: true
        username: admin
        password: admin
        trace_analytics_raw: true
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  buffer:
    bounded_blocking:
      buffer_size: 10240
      batch_size: 160
  processor:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        insecure: true
        username: admin
        password: admin
        trace_analytics_service_map: true
```

To maintain ingestion throughput and latency similar to the [Classic](#classic) pipeline, we recommend scaling `buffer_size` and `batch_size` by the estimated maximum batch size in the client request payload.
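
For example, assuming (hypothetically) that each client request decodes into roughly 50 spans, scaling the Classic values by that factor gives a buffer sketch such as:

```yml
buffer:
  bounded_blocking:
    buffer_size: 512000   # 10240 * 50
    batch_size: 8000      # 160 * 50
```
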

## Migrating from Logstash

Data Prepper supports Logstash configuration files for a limited set of plugins. Simply use the Logstash configuration file to run Data Prepper.