Minor changes to Data Prepper index.md. (#2426)

* Minor changes to index.md.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor typo/formatting fixes.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor change.

Signed-off-by: carolxob <carolxob@amazon.com>

* Adjusted ToC order so that Getting started appears before Core APIs.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor edits to titles.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor adjustments to ToC.

Signed-off-by: carolxob <carolxob@amazon.com>

* Changed filename.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor ToC edit

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor edits.

Signed-off-by: carolxob <carolxob@amazon.com>

* Minor edit.

Signed-off-by: carolxob <carolxob@amazon.com>

* Edits made based on editorial feedback.

Signed-off-by: carolxob <carolxob@amazon.com>

Signed-off-by: carolxob <carolxob@amazon.com>
Caroline 2023-01-23 11:32:57 -07:00 committed by GitHub
parent e98ee6d833
commit 80aaf54bb7
11 changed files with 31 additions and 29 deletions

View File

@@ -10,7 +10,7 @@ nav_order: 45
 ## Overview
-Sink for flat file output.
+The file sink creates a flat file output.
 Option | Required | Type | Description
 :--- | :--- | :--- | :---
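For context, a pipeline using this sink might look like the following sketch. The plugin names and the `path` option come from the upstream Data Prepper documentation, not from this hunk, so treat them as assumptions:

```yaml
file-sink-pipeline:
  source:
    random:      # test source that emits random strings
  sink:
    - file:
        # Write each processed event as a line in a flat file.
        path: /tmp/data-prepper-out.txt
```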

View File

@@ -10,7 +10,7 @@ nav_order: 45
 ## Overview
-Sink for console output. Can be useful for testing. No options.
+The stdout sink can be used for console output and can be useful for testing. It has no configurable options.
 <!--- ## Configuration

View File

@@ -1,7 +1,7 @@
 ---
 layout: default
 title: Core APIs
-nav_order: 2
+nav_order: 20
 ---
 # Core APIs

View File

@@ -3,10 +3,10 @@ layout: default
 title: Getting started
 nav_order: 5
 redirect_from:
-  - /clients/data-prepper/get-started/
+  - /clients/data-prepper/getting-started/
 ---
-# Get started with Data Prepper
+# Getting started with Data Prepper
 Data Prepper is an independent component, not an OpenSearch plugin, that converts data for use with OpenSearch. It's not bundled with the all-in-one OpenSearch installation packages.
@@ -19,8 +19,7 @@ There are two ways to install Data Prepper:
 1. Run the Docker image.
 2. Build from source.
-The easiest way to use Data Prepper is by running the Docker image. We suggest
-you use this approach if you have [Docker](https://www.docker.com) available.
+The easiest way to use Data Prepper is by running the Docker image. We suggest that you use this approach if you have [Docker](https://www.docker.com) available.
 You can pull the Docker image:
@@ -41,6 +40,7 @@ You will configure two files:
 * `pipelines.yaml`
 Depending on your use case, we have a few different guides to configuring Data Prepper.
 * [Trace Analytics](https://github.com/opensearch-project/data-prepper/blob/main/docs/trace_analytics.md)
+* [Log Ingestion](https://github.com/opensearch-project/data-prepper/blob/main/docs/log_analytics.md): Learn how to set up Data Prepper for log observability.
+* [Simple Pipeline](https://github.com/opensearch-project/data-prepper/blob/main/docs/simple_pipelines.md): Learn the basics of Data Prepper pipelines with some simple configurations.
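Neither of the two configuration files named above appears in this commit; a minimal sketch of each, based on the upstream getting-started guide (all values are illustrative), looks like this:

```yaml
# data-prepper-config.yaml -- server settings; disabling SSL is for local testing only.
ssl: false
```

```yaml
# pipelines.yaml -- a trivial pipeline that prints randomly generated test data to the console.
simple-sample-pipeline:
  workers: 2        # number of worker threads
  delay: "5000"     # milliseconds to wait between batches
  source:
    random:
  sink:
    - stdout:
```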

View File

@@ -12,18 +12,18 @@ redirect_from:
 # Data Prepper
-Data Prepper is a server side data collector capable of filtering, enriching, transforming, normalizing and aggregating data for downstream analytics and visualization.
+Data Prepper is a server-side data collector capable of filtering, enriching, transforming, normalizing, and aggregating data for downstream analytics and visualization.
 Data Prepper lets users build custom pipelines to improve the operational view of applications. Two common uses for Data Prepper are trace and log analytics. [Trace analytics]({{site.url}}{{site.baseurl}}/observability-plugin/trace/index/) can help you visualize the flow of events and identify performance problems, and [log analytics]({{site.url}}{{site.baseurl}}/observability-plugin/log-analytics/) can improve searching, analyzing and provide insights into your application.
 ## Concepts
-Data Prepper is composed of one or more **Pipelines** that collect and filter data based on the components set within the pipeline. Each component is pluggable, enabling you to use your own custom implementation of each component. These components include the following:
+Data Prepper is composed of one or more **pipelines** that collect and filter data based on the components set within the pipeline. Each component is pluggable, enabling you to use your own custom implementation of each component. These components include the following:
 - One [source](#source)
 - One or more [sinks](#sink)
 - (Optional) One [buffer](#buffer)
-- (Optional) One or more[processors](#processor)
+- (Optional) One or more [processors](#processor)
 A single instance of Data Prepper can have one or more pipelines.
@@ -88,5 +88,5 @@ sample-pipeline:
 ## Next steps
-To get started building your own custom pipelines with Data Prepper, see the [Get Started]({{site.url}}{{site.baseurl}}/clients/data-prepper/get-started/) guide.
+To get started building your own custom pipelines with Data Prepper, see [Getting started]({{site.url}}{{site.baseurl}}/clients/data-prepper/get-started/).
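As a sketch of how the four pluggable component types listed above fit together in `pipelines.yaml` (plugin names such as `http`, `bounded_blocking`, `grok`, and `opensearch` are drawn from the upstream plugin documentation, not from this commit):

```yaml
apache-log-pipeline:
  source:
    http:                      # one source: receives log events over HTTP
  buffer:
    bounded_blocking:          # optional buffer; this is also the default
      buffer_size: 1024
  processor:
    - grok:                    # optional processors, applied in order
        match:
          log: ['%{COMMONAPACHELOG}']
  sink:
    - opensearch:              # one or more sinks
        hosts: ["https://localhost:9200"]
        index: apache_logs
```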

View File

@@ -1,7 +1,7 @@
 ---
 layout: default
 title: Log analytics
-nav_order: 30
+nav_order: 15
 ---
 # Log analytics

View File

@@ -1,16 +1,16 @@
 ---
 layout: default
-title: Log4j configuration
-nav_order: 12
+title: Configuring Log4j
+nav_order: 25
 ---
-# Log4j configuration
+# Configuring Log4j
-This section provides information about configuring Log4j.
+You can configure logging using Log4j in Data Prepper.
 ## Logging
-The following describes how Data Prepper performs logging. Data Prepper uses [SLF4J](http://www.slf4j.org/) with a [Log4j 2 binding](http://logging.apache.org/log4j/2.x/log4j-slf4j-impl/).
+Data Prepper uses [SLF4J](http://www.slf4j.org/) with a [Log4j 2 binding](http://logging.apache.org/log4j/2.x/log4j-slf4j-impl/).
 For Data Prepper versions 2.0 and later, the Log4j 2 configuration file can be found and edited in `config/log4j2.properties` in the application's home directory. The default properties for Log4j 2 can be found in `log4j2-rolling.properties` in the *shared-config* directory.
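A sketch of the kind of settings `config/log4j2.properties` can hold; the actual defaults live in the shipped `log4j2-rolling.properties`, and the `org.opensearch.dataprepper` logger name here is an assumption:

```properties
status = error

appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} [%t] %-5p %c - %m%n

rootLogger.level = info
rootLogger.appenderRef.stdout.ref = STDOUT

# Raise verbosity for Data Prepper classes only (package name assumed).
logger.pipeline.name = org.opensearch.dataprepper
logger.pipeline.level = debug
```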

View File

@@ -1,12 +1,12 @@
 ---
 layout: default
-title: Migrating from Open Distro Data Prepper
-nav_order: 10
+title: Migrating from Open Distro
+nav_order: 35
 ---
-# Migrating from Open Distro Data Prepper
+# Migrating from Open Distro
-Beginning with Data Prepper 1.1, there is only one distribution of Data Prepper: OpenSearch Data Prepper. This document helps existing users migrate from the Open Distro Data Prepper to OpenSearch Data Prepper.
+Existing users can migrate from the Open Distro Data Prepper to OpenSearch Data Prepper. Beginning with Data Prepper version 1.1, there is only one distribution of OpenSearch Data Prepper.
 ## Change your pipeline configuration

View File

@@ -1,14 +1,14 @@
 ---
 layout: default
-title: Configure Logstash for Data Prepper
-nav_order: 12
+title: Migrating from Logstash
+nav_order: 30
 ---
-# Configure Logstash for Data Prepper
+# Migrating from Logstash
 You can run Data Prepper with a Logstash configuration.
-As mentioned in the [Getting Started]({{site.url}}{{site.baseurl}}/data-prepper/get-started/) guide, you'll need to configure Data Prepper with a pipeline using a `pipelines.yaml` file.
+As mentioned in the [Getting started]({{site.url}}{{site.baseurl}}/data-prepper/get-started/) guide, you'll need to configure Data Prepper with a pipeline using a `pipelines.yaml` file.
 Alternatively, if you have a Logstash configuration `logstash.conf` to configure Data Prepper instead of `pipelines.yaml`.
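For context, a small `logstash.conf` of the kind that can stand in for `pipelines.yaml` might look like the sketch below. Data Prepper supports only a subset of Logstash plugins, so treat this as illustrative rather than guaranteed to convert cleanly:

```conf
input {
  http {
    port => 2021
  }
}
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
}
output {
  opensearch {
    hosts => ["https://localhost:9200"]
    index => "apache_logs"
  }
}
```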

View File

@@ -1,12 +1,14 @@
 ---
 layout: default
 title: Pipelines
-nav_order: 20
+nav_order: 10
 ---
 # Pipelines
-![Data Prepper Pipeline]({{site.url}}{{site.baseurl}}/images/data-prepper-pipeline.png)
+The following image illustrates how a pipeline works.
+<img src="{{site.url}}{{site.baseurl}}/images/data-prepper-pipeline.png" alt="Data Prepper pipeline">{: .img-fluid}
 To use Data Prepper, you define pipelines in a configuration YAML file. Each pipeline is a combination of a source, a buffer, zero or more processors, and one or more sinks. For example:
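The example itself falls outside this hunk. A sketch of what such a definition can look like, here with two pipelines chained through the `pipeline` source and sink (plugin names assumed from upstream documentation):

```yaml
entry-pipeline:
  source:
    http:
  sink:
    # The pipeline sink hands events off to a second pipeline.
    - pipeline:
        name: "parse-pipeline"

parse-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  sink:
    - stdout:
```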