addressing pr feedback
Signed-off-by: Christopher Manning <cmanning09@users.noreply.github.com>
This commit is contained in: parent 2c2d7ffc67, commit b6a37dc493
@@ -48,7 +48,7 @@ collections:
   replication-plugin:
     permalink: /:collection/:path/
     output: true
-  observability-plugins:
+  observability:
     permalink: /:collection/:path/
     output: true
   monitoring-plugins:
@@ -90,8 +90,8 @@ just_the_docs:
   replication-plugin:
     name: Replication plugin
     nav_fold: true
-  observability-plugins:
-    name: Observability plugins
+  observability:
+    name: Observability
     nav_fold: true
   monitoring-plugins:
     name: Monitoring plugins
@@ -1,58 +0,0 @@
----
-layout: default
-title: Get Started
-parent: Data Prepper
-nav_order: 1
----
-
-# Get started with Data Prepper
-
-Data Prepper is an independent component, not an OpenSearch plugin, that converts data for use with OpenSearch. It's not bundled with the all-in-one OpenSearch installation packages.
-
-## Install Data Prepper
-
-To use the Docker image, pull it like any other image:
-
-```bash
-docker pull opensearchproject/data-prepper:latest
-```
-
-## Define a pipeline
-
-Build a pipeline by creating a pipeline configuration YAML using any of the built-in Data Prepper plugins.
-
-For more examples and details on pipeline configurations see [Pipelines]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/pipelines) guide.
-
-## Start Data Prepper
-
-Run the following command with your pipeline configuration YAML.
-
-```bash
-docker run --name data-prepper -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml opensearchproject/opensearch-data-prepper:latest
-```
-
-### Migrating from Logstash
-
-Data Prepper supports Logstash configuration files for a limited set of plugins. Simply use the logstash config to run Data Prepper.
-
-```bash
-docker run --name data-prepper -v /full/path/to/logstash.conf:/usr/share/data-prepper/pipelines.conf opensearchproject/opensearch-data-prepper:latest
-```
-
-This feature is limited by feature parity of Data Prepper. As of Data Prepper 1.2 release, the following plugins from the Logstash configuration are supported:
-
-- HTTP Input plugin
-- Grok Filter plugin
-- Elasticsearch Output plugin
-- Amazon Elasticsearch Output plugin
-
-### Configure the Data Prepper server
-Data Prepper itself provides administrative HTTP endpoints such as `/list` to list pipelines and `/metrics/prometheus` to provide Prometheus-compatible metrics data. The port which serves these endpoints has a TLS configuration and is specified by a separate YAML file. Data Prepper docker images secures these endpoints by default. We strongly recommend providing your own configuration file for securing production environments. Example:
-
-```yml
-ssl: true
-keyStoreFilePath: "/usr/share/data-prepper/keystore.jks"
-keyStorePassword: "password"
-privateKeyPassword: "other_password"
-serverPort: 1234
-```
@@ -1,15 +0,0 @@
----
-layout: default
-title: Data Prepper
-nav_order: 80
-has_children: true
-has_toc: false
----
-
-# Data Prepper
-
-Data Prepper is a server side data collector with abilities to filter, enrich, transform, normalize and aggregate data for downstream analytics and visualization.
-
-Data Prepper allows users to build custom pipelines to improve their operational view of their own applications. Two common uses for Data Prepper are trace and log analytics. [Trace Anaytics]({{site.url}}{{site.baseurl}}/observability-plugins/trace/index/) can help you visualize this flow of events and identify performance problems. [Log Anayltics]({{site.url}}{{site.baseurl}}/observability-plugins/log-analytics/index) can improve searching, analyzing and provide insights into your application.
-
-To get started building your own custom pipelines with Data Prepper, see the [Get Started]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/get-started/) guide.
@@ -1,122 +0,0 @@
----
-layout: default
-title: Pipelines
-parent: Data Prepper
-nav_order: 2
----
-
-# Pipelines
-
-To use Data Prepper, you define pipelines in a configuration YAML file. Each pipeline is a combination of a source, a buffer, zero or more preppers, and one or more sinks:
-
-```yml
-sample-pipeline:
-  workers: 4 # the number of workers
-  delay: 100 # in milliseconds, how long workers wait between read attempts
-  source:
-    otel_trace_source:
-      ssl: true
-      sslKeyCertChainFile: "config/demo-data-prepper.crt"
-      sslKeyFile: "config/demo-data-prepper.key"
-  buffer:
-    bounded_blocking:
-      buffer_size: 1024 # max number of records the buffer accepts
-      batch_size: 256 # max number of records the buffer drains after each read
-  prepper:
-    - otel_trace_raw_prepper:
-  sink:
-    - opensearch:
-        hosts: ["https:localhost:9200"]
-        cert: "config/root-ca.pem"
-        username: "ta-user"
-        password: "ta-password"
-        trace_analytics_raw: true
-```
-
-- Sources define where your data comes from. In this case, the source is the OpenTelemetry Collector (`otel_trace_source`) with some optional SSL settings.
-
-- Buffers store data as it passes through the pipeline.
-
-  By default, Data Prepper uses its one and only buffer, the `bounded_blocking` buffer, so you can omit this section unless you developed a custom buffer or need to tune the buffer settings.
-
-- Preppers perform some action on your data: filter, transform, enrich, etc.
-
-  You can have multiple preppers, which run sequentially from top to bottom, not in parallel. The `otel_trace_raw_prepper` prepper converts OpenTelemetry data into OpenSearch-compatible JSON documents.
-
-- Sinks define where your data goes. In this case, the sink is an OpenSearch cluster.
-
-## Examples
-
-This section provides some pipeline examples that you can use to start creating your own pipelines. For more information, see [Data Prepper configuration reference]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/data-prepper-reference/) guide.
-
-The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
-
-### Log ingestion pipeline
-
-The following example demonstrates how to use HTTP source and Grok prepper plugins to process unstructured log data.
-
-```yaml
-log-pipeline:
-  source:
-    http:
-      ssl: false
-  processor:
-    - grok:
-        match:
-          log: [ "%{COMMONAPACHELOG}" ]
-  sink:
-    - opensearch:
-        hosts: [ "https://opensearch:9200" ]
-        insecure: true
-        username: admin
-        password: admin
-        index: apache_logs
-```
-
-Note: This example uses weak security. We strongly recommend securing all plugins which open external ports in production environments.
-
-### Trace Analytics pipeline
-
-The following example demonstrates how to build a pipeline that supports the [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugins/trace/ta-dashboards/). This pipeline takes data from the OpenTelemetry Collector and uses two other pipelines as sinks. These two separate pipelines index trace and the service map documents for the dashboard plugin.
-
-```yml
-entry-pipeline:
-  delay: "100"
-  source:
-    otel_trace_source:
-      ssl: true
-      sslKeyCertChainFile: "config/demo-data-prepper.crt"
-      sslKeyFile: "config/demo-data-prepper.key"
-  sink:
-    - pipeline:
-        name: "raw-pipeline"
-    - pipeline:
-        name: "service-map-pipeline"
-raw-pipeline:
-  source:
-    pipeline:
-      name: "entry-pipeline"
-  prepper:
-    - otel_trace_raw_prepper:
-  sink:
-    - opensearch:
-        hosts: ["https://localhost:9200" ]
-        cert: "config/root-ca.pem"
-        username: "ta-user"
-        password: "ta-password"
-        trace_analytics_raw: true
-service-map-pipeline:
-  delay: "100"
-  source:
-    pipeline:
-      name: "entry-pipeline"
-  prepper:
-    - service_map_stateful:
-  sink:
-    - opensearch:
-        hosts: ["https://localhost:9200"]
-        cert: "config/root-ca.pem"
-        username: "ta-user"
-        password: "ta-password"
-        trace_analytics_service_map: true
-```
@@ -1,26 +0,0 @@
----
-layout: default
-title: About Observability
-nav_order: 1
-has_children: false
-redirect_from:
-  - /observability-plugins/
----
-
-# About Observability
-OpenSearch Dashboards
-{: .label .label-yellow :}
-
-The Observability plugins are a collection of plugins that let you visualize data-driven events by using Piped Processing Language to explore, discover, and query data stored in OpenSearch.
-
-Your experience of exploring data might differ, but if you're new to exploring data to create visualizations, we recommend trying a workflow like the following:
-
-1. Explore data over a certain timeframe using [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index).
-1. Use [event analytics]({{site.url}}{{site.baseurl}}/observability-plugins/event-analytics) to turn data-driven events into visualizations.
-  ![Sample Event Analytics View]({{site.url}}{{site.baseurl}}/images/event-analytics.png)
-1. Create [operational panels]({{site.url}}{{site.baseurl}}/observability-plugins/operational-panels) and add visualizations to compare data the way you like.
-  ![Sample Operational Panel View]({{site.url}}{{site.baseurl}}/images/operational-panel.png)
-1. Use [trace analytics]({{site.url}}{{site.baseurl}}/observability-plugins/trace/index) to create traces and dive deep into your data.
-  ![Sample Trace Analytics View]({{site.url}}{{site.baseurl}}/images/observability-trace.png)
-1. Leverage [notebooks]({{site.url}}{{site.baseurl}}/observability-plugins/notebooks) to combine different visualizations and code blocks that you can share with team members.
-  ![Sample Notebooks View]({{site.url}}{{site.baseurl}}/images/notebooks.png)
@@ -7,7 +7,7 @@ nav_order: 3
 
 # Data Prepper configuration reference
 
-This page lists all supported Data Prepper server, sources, buffers, preppers, and sinks, along with their associated options. For example configuration files, see [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/pipelines/).
+This page lists all supported Data Prepper server, sources, buffers, preppers, and sinks, along with their associated options. For example configuration files, see [Data Prepper]({{site.url}}{{site.baseurl}}/observability/data-prepper/pipelines/).
 
 ## Data Prepper server options
 
@@ -52,7 +52,7 @@ sslKeyFile | Conditionally | String, file-system path or AWS S3 path to the secu
 useAcmCertForSSL | No | Boolean, enables TLS/SSL using certificate and private key from AWS Certificate Manager (ACM). Default is `false`.
 acmCertificateArn | Conditionally | String, represents the ACM certificate ARN. The ACM certificate takes preference over S3 or local file system certificates. Required if `useAcmCertForSSL` is set to `true`.
 awsRegion | Conditionally | String, represents the AWS region to use ACM or S3. Required if `useAcmCertForSSL` is set to `true` or `sslKeyCertChainFile` and `sslKeyFile` are AWS S3 paths.
-authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To provide customer authentication use or create a plugin which implements: [GrpcAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/GrpcAuthenticationProvider.java)
+authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication, define the `http_basic` plugin with a `username` and `password`. To provide custom authentication, use or create a plugin which implements [GrpcAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/GrpcAuthenticationProvider.java).
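+
+For example, the following source configuration enables basic authentication for `otel_trace_source`. This is an illustrative sketch; the `username` and `password` values are placeholders:
+
+```yml
+source:
+  otel_trace_source:
+    authentication:
+      http_basic:
+        username: my-user        # placeholder credential
+        password: my_s3cr3t      # placeholder credential
+```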
 
 ### http_source
 
@@ -64,8 +64,8 @@ port | No | Integer, the port the source is running on. Default is `2021`. Valid
 request_timeout | No | Integer, the request timeout in millis. Default is `10_000`.
 thread_count | No | Integer, the number of threads to keep in the ScheduledThreadPool. Default is `200`.
 max_connection_count | No | Integer, the maximum allowed number of open connections. Default is `500`.
-max_pending_requests | No | Ingeger, the maximum number of allowed tasks in ScheduledThreadPool work queue. Default is `1024.
-authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To provide customer authentication use or create a plugin which implements: [ArmeriaHttpAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/ArmeriaHttpAuthenticationProvider.java)
+max_pending_requests | No | Integer, the maximum number of allowed tasks in the ScheduledThreadPool work queue. Default is `1024`.
+authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication, define the `http_basic` plugin with a `username` and `password`. To provide custom authentication, use or create a plugin which implements [ArmeriaHttpAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/ArmeriaHttpAuthenticationProvider.java).
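+
+As an illustrative sketch, basic authentication for the `http` source follows the same shape; the credentials are placeholders:
+
+```yml
+source:
+  http:
+    authentication:
+      http_basic:
+        username: my-user        # placeholder credential
+        password: my_s3cr3t      # placeholder credential
+```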
 
 ### file
 
@@ -163,16 +163,16 @@ Takes unstructured data and utilizes pattern matching to structure and extract i
 
 Option | Required | Description
 :--- | :--- | :---
-match | No | Map, specifies which keys to match specific patterns against. Default is `{}`.
-keep_empty_captures | No | Boolean, enables preserving `null` captures. Defatul value is `false`.
-named_captures_only | No | Boolean, enables whether to keep only named captures. Default vaule is `true`.
+match | No | Map, specifies which keys to match specific patterns against. Default is an empty body.
+keep_empty_captures | No | Boolean, enables preserving `null` captures. Default value is `false`.
+named_captures_only | No | Boolean, enables whether to keep only named captures. Default value is `true`.
 break_on_match | No | Boolean, specifies whether to match all patterns or stop once the first successful match is found. Default is `true`.
 keys_to_overwrite | No | List, specifies which existing keys are to be overwritten if there is a capture with the same key value. Default is `[]`.
 pattern_definitions | No | Map that allows for custom pattern use inline. Default value is `{}`.
 patterns_directories | No | List, specifies the path of directories that contain custom pattern files. Default value is `[]`.
 pattern_files_glob | No | String, specifies which pattern files to use from the directories specified for `pattern_directories`. Default is `*`.
 target_key | No | String, specifies a parent-level key used to store all captures. Default value is `null`.
-timeout_millis | Integer, the maximum amount of time that matching will be performed. Setting to `0` will disable the timeout. Default value is `30,000`.
+timeout_millis | No | Integer, the maximum amount of time that matching will be performed. Setting to `0` will disable the timeout. Default value is `30,000`.
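+
+For example, a minimal grok configuration using a few of these options might look like the following sketch; the `log` key and the pattern are illustrative:
+
+```yml
+processor:
+  - grok:
+      match:
+        log: [ "%{COMMONAPACHELOG}" ]   # illustrative key and pattern
+      named_captures_only: true
+      timeout_millis: 10000
+```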
 
 ## Sinks
 
@@ -0,0 +1,63 @@
+---
+layout: default
+title: Get Started
+parent: Data Prepper
+nav_order: 1
+---
+
+# Get started with Data Prepper
+
+Data Prepper is an independent component, not an OpenSearch plugin, that converts data for use with OpenSearch. It's not bundled with the all-in-one OpenSearch installation packages.
+
+## 1. Install Data Prepper
+
+To use the Docker image, pull it like any other image:
+
+```bash
+docker pull opensearchproject/data-prepper:latest
+```
+
+## 2. Define a pipeline
+
+Create a Data Prepper pipeline file, `pipelines.yaml`, with the following configuration:
+
+```yaml
+simple-sample-pipeline:
+  workers: 2
+  delay: "5000"
+  source:
+    random:
+  sink:
+    - stdout:
+```
+
+## 3. Start Data Prepper
+
+Run the following command with your pipeline configuration YAML:
+
+```bash
+docker run --name data-prepper \
+    -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \
+    opensearchproject/opensearch-data-prepper:latest
+```
+
+You should see log output, and after a few seconds, some UUIDs in the output. It should look something like the following:
+
+```
+2021-09-30T20:19:44,147 [main] INFO com.amazon.dataprepper.pipeline.server.DataPrepperServer - Data Prepper server running at :4900
+2021-09-30T20:19:44,681 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer
+2021-09-30T20:19:45,183 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer
+2021-09-30T20:19:45,687 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer
+2021-09-30T20:19:46,191 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer
+2021-09-30T20:19:46,694 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer
+2021-09-30T20:19:47,200 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer
+2021-09-30T20:19:49,181 [simple-test-pipeline-processor-worker-1-thread-1] INFO com.amazon.dataprepper.pipeline.ProcessWorker - simple-test-pipeline Worker: Processing 6 records from buffer
+07dc0d37-da2c-447e-a8df-64792095fb72
+5ac9b10a-1d21-4306-851a-6fb12f797010
+99040c79-e97b-4f1d-a70b-409286f2a671
+5319a842-c028-4c17-a613-3ef101bd2bdd
+e51e700e-5cab-4f6d-879a-1c3235a77d18
+b4ed2d7e-cf9c-4e9d-967c-b18e8af35c90
+```
+
+The sample pipeline configuration above demonstrates a simple pipeline with a source (`random`) sending data to a sink (`stdout`). For more examples and details on more advanced pipeline configurations, see the [Pipelines]({{site.url}}{{site.baseurl}}/observability/data-prepper/pipelines) guide.
@@ -0,0 +1,15 @@
+---
+layout: default
+title: Data Prepper
+nav_order: 80
+has_children: true
+has_toc: false
+---
+
+# Data Prepper
+
+Data Prepper is a server-side data collector capable of filtering, enriching, transforming, normalizing, and aggregating data for downstream analytics and visualization.
+
+Data Prepper allows users to build custom pipelines to improve the operational view of their own applications. Two common uses for Data Prepper are trace and log analytics. [Trace Analytics]({{site.url}}{{site.baseurl}}/observability/trace/index/) can help you visualize the flow of events and identify performance problems, and [Log Analytics]({{site.url}}{{site.baseurl}}/observability/log-analytics/) can improve searching and analyzing and can provide insights into your application.
+
+To get started building your own custom pipelines with Data Prepper, see the [Get Started]({{site.url}}{{site.baseurl}}/observability/data-prepper/get-started/) guide.
@@ -0,0 +1,151 @@
+---
+layout: default
+title: Pipelines
+parent: Data Prepper
+nav_order: 2
+---
+
+# Pipelines
+
+![Data Prepper Pipeline]({{site.url}}{{site.baseurl}}/images/data-prepper-pipeline.png)
+
+To use Data Prepper, you define pipelines in a configuration YAML file. Each pipeline is a combination of a source, a buffer, zero or more preppers, and one or more sinks. For example:
+
+```yml
+simple-sample-pipeline:
+  workers: 2 # the number of workers
+  delay: 5000 # in milliseconds, how long workers wait between read attempts
+  source:
+    random:
+  buffer:
+    bounded_blocking:
+      buffer_size: 1024 # max number of records the buffer accepts
+      batch_size: 256 # max number of records the buffer drains after each read
+  processor:
+    - string_converter:
+        upper_case: true
+  sink:
+    - stdout:
+```
+
+- Sources define where your data comes from. In this case, the source is a random UUID generator (`random`).
+
+- Buffers store data as it passes through the pipeline.
+
+  By default, Data Prepper uses its one and only buffer, the `bounded_blocking` buffer, so you can omit this section unless you developed a custom buffer or need to tune the buffer settings.
+
+- Preppers perform some action on your data: filter, transform, enrich, etc.
+
+  You can have multiple preppers, which run sequentially from top to bottom, not in parallel. The `string_converter` prepper transforms the strings by making them uppercase.
+
+- Sinks define where your data goes. In this case, the sink is stdout.
+
+## Examples
+
+This section provides some pipeline examples that you can use to start creating your own pipelines. For more information, see the [Data Prepper configuration reference]({{site.url}}{{site.baseurl}}/observability/data-prepper/data-prepper-reference/) guide.
+
+The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
+
+### Log ingestion pipeline
+
+The following example demonstrates how to use the HTTP source and Grok prepper plugins to process unstructured log data.
+
+```yaml
+log-pipeline:
+  source:
+    http:
+      ssl: false
+  processor:
+    - grok:
+        match:
+          log: [ "%{COMMONAPACHELOG}" ]
+  sink:
+    - opensearch:
+        hosts: [ "https://opensearch:9200" ]
+        insecure: true
+        username: admin
+        password: admin
+        index: apache_logs
+```
+
+Note: This example uses weak security. We strongly recommend securing all plugins which open external ports in production environments.
+
+### Trace Analytics pipeline
+
+The following example demonstrates how to build a pipeline that supports the [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability/trace/ta-dashboards/). This pipeline takes data from the OpenTelemetry Collector and uses two other pipelines as sinks. These two separate pipelines index the trace and service map documents for the dashboard plugin.
+
+```yml
+entry-pipeline:
+  delay: "100"
+  source:
+    otel_trace_source:
+      ssl: false
+  sink:
+    - pipeline:
+        name: "raw-pipeline"
+    - pipeline:
+        name: "service-map-pipeline"
+raw-pipeline:
+  source:
+    pipeline:
+      name: "entry-pipeline"
+  prepper:
+    - otel_trace_raw_prepper:
+  sink:
+    - opensearch:
+        hosts: ["https://localhost:9200"]
+        insecure: true
+        username: admin
+        password: admin
+        trace_analytics_raw: true
+service-map-pipeline:
+  delay: "100"
+  source:
+    pipeline:
+      name: "entry-pipeline"
+  prepper:
+    - service_map_stateful:
+  sink:
+    - opensearch:
+        hosts: ["https://localhost:9200"]
+        insecure: true
+        username: admin
+        password: admin
+        trace_analytics_service_map: true
+```
+
+## Migrating from Logstash
+
+Data Prepper supports Logstash configuration files for a limited set of plugins. Simply use the Logstash configuration file to run Data Prepper.
+
+```bash
+docker run --name data-prepper \
+    -v /full/path/to/logstash.conf:/usr/share/data-prepper/pipelines.conf \
+    opensearchproject/opensearch-data-prepper:latest
+```
+
+This feature is limited by the feature parity of Data Prepper. As of the Data Prepper 1.2 release, the following plugins from the Logstash configuration are supported:
+
+- HTTP Input plugin
+- Grok Filter plugin
+- Elasticsearch Output plugin
+- Amazon Elasticsearch Output plugin
+
+## Configure the Data Prepper server
+
+Data Prepper itself provides administrative HTTP endpoints such as `/list` to list pipelines and `/metrics/prometheus` to provide Prometheus-compatible metrics data. The port that serves these endpoints has a TLS configuration and is specified by a separate YAML file. The Data Prepper Docker image secures these endpoints by default. We strongly recommend providing your own configuration file for securing production environments. Here is an example `data-prepper-config.yaml`:
+
+```yml
+ssl: true
+keyStoreFilePath: "/usr/share/data-prepper/keystore.jks"
+keyStorePassword: "password"
+privateKeyPassword: "other_password"
+serverPort: 1234
+```
+
+To configure the Data Prepper server, run Data Prepper with the additional YAML file:
+
+```bash
+docker run --name data-prepper \
+    -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \
+    -v /full/path/to/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml \
+    opensearchproject/opensearch-data-prepper:latest
+```
@@ -6,7 +6,7 @@ nav_order: 10
 
 # Event analytics
 
-Event analytics in observability is where you can use [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index) (PPL) queries to build and view different visualizations of your data.
+Event analytics in observability is where you can use [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability/ppl/index) (PPL) queries to build and view different visualizations of your data.
 
 ## Get started with event analytics
 
@@ -24,10 +24,10 @@ source = opensearch_dashboards_sample_data_logs | fields host | stats count()
 
 By default, Dashboards shows results from the last 15 minutes of your data. To see data from a different timeframe, use the date and time selector.
 
-For more information about building PPL queries, see [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index).
+For more information about building PPL queries, see [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability/ppl/index).
 
 ## Save a visualization
 
-After Dashboards generates a visualization, you must save it if you want to return to it at a later time or if you want to add it to an [operational panel]({{site.url}}{{site.baseurl}}/observability-plugins/operational-panels).
+After Dashboards generates a visualization, you must save it if you want to return to it at a later time or if you want to add it to an [operational panel]({{site.url}}{{site.baseurl}}/observability/operational-panels).
 
 To save a visualization, expand the save dropdown menu next to **Run**, enter a name for your visualization, then choose **Save**. You can reopen any saved visualizations on the event analytics page.
@@ -0,0 +1,28 @@
+---
+layout: default
+title: About Observability
+nav_order: 1
+has_children: false
+redirect_from:
+  - /observability/
+---
+
+# About Observability
+OpenSearch Dashboards
+{: .label .label-yellow :}
+
+Observability is a collection of plugins and applications that let you visualize data-driven events by using Piped Processing Language to explore, discover, and query data stored in OpenSearch.
+
+Your experience of exploring data might differ, but if you're new to exploring data to create visualizations, we recommend trying a workflow like the following:
+
+1. Explore data over a certain timeframe using [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability/ppl/index).
+2. Use [event analytics]({{site.url}}{{site.baseurl}}/observability/event-analytics) to turn data-driven events into visualizations.
+  ![Sample Event Analytics View]({{site.url}}{{site.baseurl}}/images/event-analytics.png)
+3. Create [operational panels]({{site.url}}{{site.baseurl}}/observability/operational-panels) and add visualizations to compare data the way you like.
+  ![Sample Operational Panel View]({{site.url}}{{site.baseurl}}/images/operational-panel.png)
+4. Use [log analytics]({{site.url}}{{site.baseurl}}/observability/log-analytics) to transform unstructured log data.
+5. Use [trace analytics]({{site.url}}{{site.baseurl}}/observability/trace/index) to create traces and dive deep into your data.
+  ![Sample Trace Analytics View]({{site.url}}{{site.baseurl}}/images/observability-trace.png)
+6. Leverage [notebooks]({{site.url}}{{site.baseurl}}/observability/notebooks) to combine different visualizations and code blocks that you can share with team members.
+  ![Sample Notebooks View]({{site.url}}{{site.baseurl}}/images/notebooks.png)
@@ -6,13 +6,11 @@ nav_order: 70
 
 # Log Ingestion
 
-Log ingestion provides a way to transform unstructured log data into a structure data and ingestion into OpenSearch. This data can help improve your log collection in distributed applications.
-
-Structured log data allows for improved queries and filtering based on the data format when you are searching logs for an event.
+Log ingestion provides a way to transform unstructured log data into structured data and ingest it into OpenSearch. Structured log data allows for improved queries and filtering based on the data format when searching logs for an event.
 
 ## Get started with log ingestion
 
-OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/), [OpenSearch]({{site.url}}{{site.baseurl}}/) and [OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/)---that fit into the OpenSearch ecosystem. The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
+OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.url}}{{site.baseurl}}/observability/data-prepper/index/), [OpenSearch]({{site.url}}{{site.baseurl}}/) and [OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/)---that fit into the OpenSearch ecosystem. The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
 
 ### Basic flow of data
 
@@ -22,7 +20,7 @@ OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.ur
 
 (In the [example](#example) below, [FluentBit](https://docs.fluentbit.io/manual/) is used as a log collector that collects log data from a file and sends the log data to Data Prepper.)
 
-2. [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/) receives the log data, transforms the data into a structure format, and indexes it on an OpenSearch cluster.
+2. [Data Prepper]({{site.url}}{{site.baseurl}}/observability/data-prepper/index/) receives the log data, transforms the data into a structured format, and indexes it on an OpenSearch cluster.
 
 3. The data can then be explored through OpenSearch search queries or the **Discover** page in OpenSearch Dashboards.
 
@@ -92,4 +90,4 @@ The response should show the parsed log data:
 ]
 ```
 
-The same data can be viewed in OpenSearch Dashboards by visiting the **Discover** page and searching the `apache_logs` index. Remember, you must create the index in OpensSearch Dashboards if this is your first time searching for the index.
+The same data can be viewed in OpenSearch Dashboards by visiting the **Discover** page and searching the `apache_logs` index. Remember, you must create the index in OpenSearch Dashboards if this is your first time searching for the index.
@@ -6,7 +6,7 @@ nav_order: 30
 
 # Operational panels
 
-Operational panels in OpenSearch Dashboards are collections of visualizations generated using [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index) (PPL) queries.
+Operational panels in OpenSearch Dashboards are collections of visualizations generated using [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability/ppl/index) (PPL) queries.
 
 ## Get started with operational panels
 
@@ -16,7 +16,7 @@ If you want to start using operational panels without adding any data, expand th
 
 To create an operational panel and add visualizations:
 
-1. From the **Add Visualization** dropdown menu, choose **Select Existing Visualization** or **Create New Visualization**, which takes you to the [event analytics]({{site.url}}{{site.baseurl}}/observability-plugins/event-analytics) explorer, where you can use PPL to create visualizations.
+1. From the **Add Visualization** dropdown menu, choose **Select Existing Visualization** or **Create New Visualization**, which takes you to the [event analytics]({{site.url}}{{site.baseurl}}/observability/event-analytics) explorer, where you can use PPL to create visualizations.
 1. If you're adding already existing visualizations, choose a visualization from the dropdown menu.
 1. Choose **Add**.
 
@@ -19,9 +19,9 @@ OpenSearch Trace Analytics consists of two components---Data Prepper and the Tra
 
 1. The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/) receives data from the application and formats it into OpenTelemetry data.
 
-1. [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/) processes the OpenTelemetry data, transforms it for use in OpenSearch, and indexes it on an OpenSearch cluster.
+1. [Data Prepper]({{site.url}}{{site.baseurl}}/observability/data-prepper/index/) processes the OpenTelemetry data, transforms it for use in OpenSearch, and indexes it on an OpenSearch cluster.
 
-1. The [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugins/trace/ta-dashboards/) displays the data in near real-time as a series of charts and tables, with an emphasis on service architecture, latency, error rate, and throughput.
+1. The [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability/trace/ta-dashboards/) displays the data in near real-time as a series of charts and tables, with an emphasis on service architecture, latency, error rate, and throughput.
 
 ## Jaeger HotROD
 
@@ -78,4 +78,4 @@ curl -X GET -u 'admin:admin' -k 'https://localhost:9200/otel-v1-apm-span-000001/
 
 Navigate to `http://localhost:5601` in a web browser and choose **Trace Analytics**. You can see the results of your single click in the Jaeger HotROD web interface: the number of traces per API and HTTP method, latency trends, a color-coded map of the service architecture, and a list of trace IDs that you can use to drill down on individual operations.
 
-If you don't see your trace, adjust the timeframe in OpenSearch Dashboards. For more information on using the plugin, see [OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugins/trace/ta-dashboards/).
+If you don't see your trace, adjust the timeframe in OpenSearch Dashboards. For more information on using the plugin, see [OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability/trace/ta-dashboards/).
@@ -262,4 +262,4 @@ You can use wildcards to delete more than one data stream.
 
 We recommend deleting data from a data stream using an ISM policy.
 
-You can also use [asynchronous search]({{site.url}}{{site.baseurl}}/search-plugins/async/index/) and [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/index/) and [PPL]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index/) to query your data stream directly. You can also use the security plugin to define granular permissions on the data stream name.
+You can also use [asynchronous search]({{site.url}}{{site.baseurl}}/search-plugins/async/index/), [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/index/), and [PPL]({{site.url}}{{site.baseurl}}/observability/ppl/index/) to query your data stream directly. You can also use the security plugin to define granular permissions on the data stream name.
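+
+For example, a PPL query against a data stream looks the same as a query against a regular index; the data stream name and field below are illustrative:
+
+```
+search source=logs-nginx | where response = 500 | stats count()
+```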
@@ -0,0 +1 @@
<mxfile host="Electron" modified="2021-12-14T14:26:24.213Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.7 Chrome/91.0.4472.164 Electron/13.6.2 Safari/537.36" etag="0EV8uYp45YbKkp4qsNFr" version="15.8.7" type="device"><diagram id="qG4h7x6C5n9358YNV1Zm" name="Page-1">7VhNc9owEP01HMPYCH8dCyHtIZ1mSjtpj8JebDWy5ZHlGPLrK2P5UwaSGUh6KBest2shvbfPKzNBy3j3meM0+soCoJOZEewm6HYym5mmYcuvEtlXiOc5FRByEqikFliTF1CgodCcBJD1EgVjVJC0D/osScAXPQxzzop+2pbR/q+mOAQNWPuY6ugjCURUoa5ltPgXIGEkmg2rSIzrZAVkEQ5Y0YHQaoKWnDFRXcW7JdCSvJqX6r67I9FmYRwS8Zob1oviwVyFN4/ff76Y3g+SxzncqFmeMc3VhtVixb5mgLM8CaCcxJygRRERAesU+2W0kJpLLBIxVeFMcPbUMCX3uFA/AFzA7ujKzYYPWUjAYhB8L1PaKqpuUTU0r9kvOorUNEddNepErKogbOZuiZIXiqs38GZrNEEg60YNGRcRC1mC6apFFy2Rhhy1OfeMpYq+PyDEXpkA54L1yd2yRKig6Z0gu1zKaarlylnOfTixQ1UIAvMQxIk8NC4dB4oFee6v4+IyzLTyXVf70tShVD4x4HwBn+d4SyhdMsr4YWYUYHC3fpPZidi+C5vthSxgDSzg6hZocroWcK/lAOcjHCDZ4vtf6v7D4Hc5mFr18HbXDd7u1ehdnYNe6Zz5RzoHac5Z5NstcE3VAGdRK1kuKElkkdfN1ujrgykJE3ntS9LkXGhB8QboA8uIIKwXKN1AZJu9HyRsmBAs7iR8UlOKskQWsomm5cLiXVieN6YFbKisoWy6qVav2VMyeGfeafZMWAIjdXEBpyJv4FRPd6pjTOe6VW2JGt2PfSXt3f/WPWXJV1jX+kjrzjXrPnBI0xHv1m5JOfMhy853vw32n8KD1N8qpx/pis1DYfzMN2yRFrjBfKxFurMNsu3LGM9C541njp0S7Wu1SEs/nZDk6R3PJmBK6p0x4j3bQfhCxDvzf+1s4ukOISkc6nlIvty2GHuFGTaKDrEK0nrdsGPFJAjoMVn7z9Oen67QlMyBRJajS2SPKITerpActu+0h1jnnwG0+gs=</diagram></mxfile>
Binary file not shown.