From 53ae57f79af6e708b66858e1f07e1aa852da927c Mon Sep 17 00:00:00 2001
From: Han Jiang
Date: Fri, 19 Nov 2021 13:04:20 -0600
Subject: [PATCH 1/7] Update the documentation for Data Prepper's OpenSearch Sink plugin.

Signed-off-by: Han Jiang
---
 .../trace/data-prepper-reference.md | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/_observability-plugins/trace/data-prepper-reference.md b/_observability-plugins/trace/data-prepper-reference.md
index 99a8f522..d0323a5b 100644
--- a/_observability-plugins/trace/data-prepper-reference.md
+++ b/_observability-plugins/trace/data-prepper-reference.md
@@ -160,17 +160,23 @@ hosts | Yes | List of OpenSearch hosts to write to (e.g. `["https://localhost:92
cert | No | String, path to the security certificate (e.g. `"config/root-ca.pem"`) if the cluster uses the OpenSearch security plugin.
username | No | String, username for HTTP basic authentication.
password | No | String, password for HTTP basic authentication.
-aws_sigv4 | No | Boolean, whether to use IAM signing to connect to an Amazon OpenSearch Service domain. For your access key, secret key, and optional session token, Data Prepper uses the default credential chain (environment variables, Java system properties, `~/.aws/credential`, etc.).
+aws_sigv4 | No | Boolean, default false. Whether to use IAM signing to connect to an Amazon OpenSearch Service domain. For your access key, secret key, and optional session token, Data Prepper uses the default credential chain (environment variables, Java system properties, `~/.aws/credentials`, etc.).
aws_region | No | String, AWS region (e.g. `"us-east-1"`) for the domain if you are connecting to Amazon OpenSearch Service.
-aws_sts_role | No | String, IAM role which the sink plugin will assume to sign request to Amazon OpenSearch Service. If not provided the plugin will use the default credentials.
-trace_analytics_raw | No | Boolean, default false. Whether to export as trace data to the `otel-v1-apm-span-*` index pattern (alias `otel-v1-apm-span`) for use with the Trace Analytics OpenSearch Dashboards plugin.
-trace_analytics_service_map | No | Boolean, default false. Whether to export as trace data to the `otel-v1-apm-service-map` index for use with the service map component of the Trace Analytics OpenSearch Dashboards plugin.
-index | No | String, name of the index to export to. Only required if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets.
+aws_sts_role_arn | No | String, IAM role which the sink plugin will assume to sign requests to Amazon OpenSearch Service. If not provided, the plugin will use the default credentials.
+socket_timeout | No | Integer, the timeout in milliseconds for waiting for data (or, put differently, a maximum period of inactivity between two consecutive data packets). A timeout value of zero is interpreted as an infinite timeout. If this timeout value is either negative or not set, the underlying Apache HttpClient would rely on operating system settings for managing socket timeouts.
+connect_timeout | No | Integer, the timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. If this timeout value is either negative or not set, the underlying Apache HttpClient would rely on operating system settings for managing connection timeouts.
+insecure | No | Boolean, default false. Whether to verify SSL certificate. If set to true, CA certificate verification will be turned off and insecure HTTP requests will be sent.
+trace_analytics_raw | No | Boolean, default false. Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-span-*` index pattern (alias `otel-v1-apm-span`) for use with the Trace Analytics OpenSearch Dashboards plugin.
+trace_analytics_service_map | No | Boolean, default false. Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-service-map` index for use with the service map component of the Trace Analytics OpenSearch Dashboards plugin.
+index | No | String, name of the index to export to. Only required if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets. In other words, this parameter is applicable and required only If index_type is explicitly `custom` or defaults to be `custom` while both `trace_analytics_raw` and `trace_analytics_service_map` are set to false.
+index_type | No | String, default `custom`. This index type instructs the Sink plugin what type of data it is handling. Valid values: `custom`, `trace-analytics-raw`, `trace-analytics-service-map`.
template_file | No | String, the path to a JSON [index template]({{site.url}}{{site.baseurl}}/opensearch/index-templates/) file (e.g. `/your/local/template-file.json`) if you do not use the `trace_analytics_raw` or `trace_analytics_service_map` presets. See [otel-v1-apm-span-index-template.json](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/resources/otel-v1-apm-span-index-template.json) for an example.
document_id_field | No | String, the field from the source data to use for the OpenSearch document ID (e.g. `"my-field"`) if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets.
dlq_file | No | String, the path to your preferred dead letter queue file (e.g. `/your/local/dlq-file`). Data Prepper writes to this file when it fails to index a document on the OpenSearch cluster.
bulk_size | No | Integer (long), default 5. The maximum size (in MiB) of bulk requests to the OpenSearch cluster. Values below 0 indicate an unlimited size. If a single document exceeds the maximum bulk request size, Data Prepper sends it individually.
-
+ism_policy_file | No | String, the absolute file path for an ISM (Index State Management) policy JSON file. This policy file is effective only when there is no built-in policy file for the index type. For example, the `custom` index type is currently the only one without a built-in policy file, so it uses the policy file provided through this parameter if one is supplied. OpenSearch documentation has more about [ISM policies.](https://opensearch.org/docs/latest/im-plugin/ism/policies/)
+number_of_shards | No | Integer, the number of primary shards that an index should have on the destination OpenSearch server. This parameter is effective only when `template_file` is either explicitly provided in the Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. OpenSearch documentation has [more about this parameter](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/).
+number_of_replicas | No | Integer, the number of replica shards each primary shard should have on the destination OpenSearch server. For example, if you have 4 primary shards and set number_of_replicas to 3, the index has 12 replica shards. This parameter is effective only when `template_file` is either explicitly provided in the Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. OpenSearch documentation has [more about this parameter](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/).

### file

From cca6f61cd239120543822d0ec97ebea9c2bca648 Mon Sep 17 00:00:00 2001
From: Han Jiang
Date: Tue, 23 Nov 2021 11:16:54 -0600
Subject: [PATCH 2/7] Rephrase some descriptions of hyper-links.

Signed-off-by: Han Jiang
---
 _observability-plugins/trace/data-prepper-reference.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/_observability-plugins/trace/data-prepper-reference.md b/_observability-plugins/trace/data-prepper-reference.md
index d0323a5b..a0e2cdd4 100644
--- a/_observability-plugins/trace/data-prepper-reference.md
+++ b/_observability-plugins/trace/data-prepper-reference.md
@@ -174,9 +174,9 @@ template_file | No | String, the path to a JSON [index template]({{site.url}}{{s
document_id_field | No | String, the field from the source data to use for the OpenSearch document ID (e.g. `"my-field"`) if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets.
dlq_file | No | String, the path to your preferred dead letter queue file (e.g. `/your/local/dlq-file`). Data Prepper writes to this file when it fails to index a document on the OpenSearch cluster.
bulk_size | No | Integer (long), default 5. The maximum size (in MiB) of bulk requests to the OpenSearch cluster. Values below 0 indicate an unlimited size. If a single document exceeds the maximum bulk request size, Data Prepper sends it individually.
-ism_policy_file | No | String, the absolute file path for an ISM (Index State Management) policy JSON file. This policy file is effective only when there is no built-in policy file for the index type. For example, the `custom` index type is currently the only one without a built-in policy file, so it uses the policy file provided through this parameter if one is supplied. OpenSearch documentation has more about [ISM policies.](https://opensearch.org/docs/latest/im-plugin/ism/policies/)
-number_of_shards | No | Integer, the number of primary shards that an index should have on the destination OpenSearch server. This parameter is effective only when `template_file` is either explicitly provided in the Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. OpenSearch documentation has [more about this parameter](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/).
-number_of_replicas | No | Integer, the number of replica shards each primary shard should have on the destination OpenSearch server. For example, if you have 4 primary shards and set number_of_replicas to 3, the index has 12 replica shards. This parameter is effective only when `template_file` is either explicitly provided in the Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. OpenSearch documentation has [more about this parameter](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/).
+ism_policy_file | No | String, the absolute file path for an ISM (Index State Management) policy JSON file. This policy file is effective only when there is no built-in policy file for the index type. For example, the `custom` index type is currently the only one without a built-in policy file, so it uses the policy file provided through this parameter if one is supplied. For more information, see [ISM policies.](https://opensearch.org/docs/latest/im-plugin/ism/policies/)
+number_of_shards | No | Integer, the number of primary shards that an index should have on the destination OpenSearch server. This parameter is effective only when `template_file` is either explicitly provided in the Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. For more information, see [create index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/).
+number_of_replicas | No | Integer, the number of replica shards each primary shard should have on the destination OpenSearch server. For example, if you have 4 primary shards and set number_of_replicas to 3, the index has 12 replica shards. This parameter is effective only when `template_file` is either explicitly provided in the Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. For more information, see [create index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/).

### file

From d16a7ec350e9f0988880e51366adade634946b2c Mon Sep 17 00:00:00 2001
From: Han Jiang
Date: Tue, 23 Nov 2021 12:19:19 -0600
Subject: [PATCH 3/7] Rephrase index parameter description to avoid deprecated terms.

Signed-off-by: Han Jiang
---
 _observability-plugins/trace/data-prepper-reference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_observability-plugins/trace/data-prepper-reference.md b/_observability-plugins/trace/data-prepper-reference.md
index a0e2cdd4..4d470d2b 100644
--- a/_observability-plugins/trace/data-prepper-reference.md
+++ b/_observability-plugins/trace/data-prepper-reference.md
@@ -168,7 +168,7 @@ connect_timeout | No | Integer, the timeout in milliseconds used when requesting
insecure | No | Boolean, default false. Whether to verify SSL certificate. If set to true, CA certificate verification will be turned off and insecure HTTP requests will be sent.
trace_analytics_raw | No | Boolean, default false. Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-span-*` index pattern (alias `otel-v1-apm-span`) for use with the Trace Analytics OpenSearch Dashboards plugin.
trace_analytics_service_map | No | Boolean, default false. Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-service-map` index for use with the service map component of the Trace Analytics OpenSearch Dashboards plugin.
-index | No | String, name of the index to export to. Only required if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets. In other words, this parameter is applicable and required only If index_type is explicitly `custom` or defaults to be `custom` while both `trace_analytics_raw` and `trace_analytics_service_map` are set to false.
+index | No | String, name of the index to export to. Only required if you don't use the `trace-analytics-raw` or `trace-analytics-service-map` presets. In other words, this parameter is applicable and required only if index_type is explicitly `custom` or defaults to `custom`.
index_type | No | String, default `custom`. This index type instructs the Sink plugin what type of data it is handling. Valid values: `custom`, `trace-analytics-raw`, `trace-analytics-service-map`.
template_file | No | String, the path to a JSON [index template]({{site.url}}{{site.baseurl}}/opensearch/index-templates/) file (e.g. `/your/local/template-file.json`) if you do not use the `trace_analytics_raw` or `trace_analytics_service_map` presets. See [otel-v1-apm-span-index-template.json](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/resources/otel-v1-apm-span-index-template.json) for an example.
document_id_field | No | String, the field from the source data to use for the OpenSearch document ID (e.g. `"my-field"`) if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets.

From 4bd31def05b1916223d87b88a2c17b1f6e31eea5 Mon Sep 17 00:00:00 2001
From: Han Jiang
Date: Thu, 2 Dec 2021 15:09:10 -0600
Subject: [PATCH 4/7] Document the proxy parameter for Opensearch Sink.

Signed-off-by: Han Jiang
---
 _observability-plugins/trace/data-prepper-reference.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/_observability-plugins/trace/data-prepper-reference.md b/_observability-plugins/trace/data-prepper-reference.md
index 4d470d2b..f00400fe 100644
--- a/_observability-plugins/trace/data-prepper-reference.md
+++ b/_observability-plugins/trace/data-prepper-reference.md
@@ -162,10 +162,11 @@ username | No | String, username for HTTP basic authentication.
password | No | String, password for HTTP basic authentication.
aws_sigv4 | No | Boolean, default false. Whether to use IAM signing to connect to an Amazon OpenSearch Service domain. For your access key, secret key, and optional session token, Data Prepper uses the default credential chain (environment variables, Java system properties, `~/.aws/credentials`, etc.).
aws_region | No | String, AWS region (e.g. `"us-east-1"`) for the domain if you are connecting to Amazon OpenSearch Service.
-aws_sts_role_arn | No | String, IAM role which the sink plugin will assume to sign requests to Amazon OpenSearch Service. If not provided, the plugin will use the default credentials.
+aws_sts_role_arn | No | String, IAM role which the sink plugin assumes to sign requests to Amazon OpenSearch Service. If not provided, the plugin uses the default credentials.
socket_timeout | No | Integer, the timeout in milliseconds for waiting for data (or, put differently, a maximum period of inactivity between two consecutive data packets). A timeout value of zero is interpreted as an infinite timeout. If this timeout value is either negative or not set, the underlying Apache HttpClient would rely on operating system settings for managing socket timeouts.
connect_timeout | No | Integer, the timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. If this timeout value is either negative or not set, the underlying Apache HttpClient would rely on operating system settings for managing connection timeouts.
-insecure | No | Boolean, default false. Whether to verify SSL certificate. If set to true, CA certificate verification will be turned off and insecure HTTP requests will be sent.
+insecure | No | Boolean, default false. Whether to verify SSL certificates. If set to true, CA certificate verification is disabled and insecure HTTP requests are sent instead.
+proxy | No | String, the address of a [forward HTTP proxy server](https://en.wikipedia.org/wiki/Proxy_server). The format is `"<host name or IP>:<port>"`. Examples: "example.com:8100", "http://example.com:8100", "112.112.112.112:8100". Note: the port number cannot be omitted.
trace_analytics_raw | No | Boolean, default false. Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-span-*` index pattern (alias `otel-v1-apm-span`) for use with the Trace Analytics OpenSearch Dashboards plugin.
trace_analytics_service_map | No | Boolean, default false. Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-service-map` index for use with the service map component of the Trace Analytics OpenSearch Dashboards plugin.
index | No | String, name of the index to export to. Only required if you don't use the `trace-analytics-raw` or `trace-analytics-service-map` presets. In other words, this parameter is applicable and required only if index_type is explicitly `custom` or defaults to `custom`.
index_type | No | String, default `custom`. This index type instructs the Sink plugin what type of data it is handling. Valid values: `custom`, `trace-analytics-raw`, `trace-analytics-service-map`.
@@ -174,7 +175,7 @@ template_file | No | String, the path to a JSON [index template]({{site.url}}{{s
document_id_field | No | String, the field from the source data to use for the OpenSearch document ID (e.g. `"my-field"`) if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets.
dlq_file | No | String, the path to your preferred dead letter queue file (e.g. `/your/local/dlq-file`). Data Prepper writes to this file when it fails to index a document on the OpenSearch cluster.
bulk_size | No | Integer (long), default 5. The maximum size (in MiB) of bulk requests to the OpenSearch cluster. Values below 0 indicate an unlimited size. If a single document exceeds the maximum bulk request size, Data Prepper sends it individually.
-ism_policy_file | No | String, the absolute file path for an ISM (Index State Management) policy JSON file. This policy file is effective only when there is no built-in policy file for the index type. For example, the `custom` index type is currently the only one without a built-in policy file, so it uses the policy file provided through this parameter if one is supplied. For more information, see [ISM policies.](https://opensearch.org/docs/latest/im-plugin/ism/policies/)
+ism_policy_file | No | String, the absolute file path for an ISM (Index State Management) policy JSON file. This policy file is effective only when there is no built-in policy file for the index type. For example, the `custom` index type is currently the only one without a built-in policy file, so it uses the policy file provided through this parameter if one is supplied. For more information, see [ISM policies](https://opensearch.org/docs/latest/im-plugin/ism/policies/).
number_of_shards | No | Integer, the number of primary shards that an index should have on the destination OpenSearch server. This parameter is effective only when `template_file` is either explicitly provided in the Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. For more information, see [create index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/).
number_of_replicas | No | Integer, the number of replica shards each primary shard should have on the destination OpenSearch server. For example, if you have 4 primary shards and set number_of_replicas to 3, the index has 12 replica shards. This parameter is effective only when `template_file` is either explicitly provided in the Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. For more information, see [create index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/).
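For illustration only, here is a minimal sketch of an `opensearch` sink entry that combines several of the options above; the host, credentials, proxy address, index name, and file path are placeholder values, not requirements:

```yaml
sink:
  - opensearch:
      hosts: ["https://localhost:9200"]
      cert: "config/root-ca.pem"   # only needed when the cluster uses the security plugin
      username: "admin"            # HTTP basic authentication (placeholder credentials)
      password: "admin"
      proxy: "example.com:8100"    # forward HTTP proxy; the port cannot be omitted
      socket_timeout: 5000         # milliseconds
      connect_timeout: 5000        # milliseconds
      index_type: custom           # with `custom`, the `index` parameter below is required
      index: my-custom-index
      number_of_shards: 4          # overrides the value in the index template file
      number_of_replicas: 1
      dlq_file: /your/local/dlq-file
```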
From 2c2d7ffc6703bf8c7fd6464f5009395157baaa02 Mon Sep 17 00:00:00 2001
From: Christopher Manning
Date: Fri, 3 Dec 2021 08:28:34 -0600
Subject: [PATCH 5/7] updating data prepper documentation for 1.2 release

Signed-off-by: Christopher Manning
---
 .../data-prepper-reference.md | 41 ++++++--
 .../data-prepper/get-started.md | 58 +++++++++++
 _observability-plugins/data-prepper/index.md | 15 +++
 .../pipelines.md} | 87 +++++++---------
 _observability-plugins/log-analytics.md | 95 ++++++++++++++++++
 _observability-plugins/trace/get-started.md | 4 +-
 images/la-diagram.drawio | 1 +
 images/la.png | Bin 0 -> 21176 bytes
 8 files changed, 242 insertions(+), 59 deletions(-)
 rename _observability-plugins/{trace => data-prepper}/data-prepper-reference.md (77%)
 create mode 100644 _observability-plugins/data-prepper/get-started.md
 create mode 100644 _observability-plugins/data-prepper/index.md
 rename _observability-plugins/{trace/data-prepper.md => data-prepper/pipelines.md} (54%)
 create mode 100644 _observability-plugins/log-analytics.md
 create mode 100644 images/la-diagram.drawio
 create mode 100644 images/la.png

diff --git a/_observability-plugins/trace/data-prepper-reference.md b/_observability-plugins/data-prepper/data-prepper-reference.md
similarity index 77%
rename from _observability-plugins/trace/data-prepper-reference.md
rename to _observability-plugins/data-prepper/data-prepper-reference.md
index f00400fe..8458054a 100644
--- a/_observability-plugins/trace/data-prepper-reference.md
+++ b/_observability-plugins/data-prepper/data-prepper-reference.md
@@ -1,14 +1,13 @@
---
layout: default
title: Configuration reference
-parent: Trace analytics
-nav_order: 25
+parent: Data Prepper
+nav_order: 3
---

# Data Prepper configuration reference

-This page lists all supported Data Prepper sources, buffers, preppers, and sinks, along with their associated options. For example configuration files, see [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/trace/data-prepper/).
-
+This page lists all supported Data Prepper server options, sources, buffers, preppers, and sinks, along with their associated options. For example configuration files, see [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/pipelines/).

## Data Prepper server options

@@ -19,7 +18,7 @@ keyStoreFilePath | No | String, path to a .jks or .p12 keystore file. Required i
keyStorePassword | No | String, password for keystore. Optional, defaults to empty string.
privateKeyPassword | No | String, password for private key within keystore. Optional, defaults to empty string.
serverPort | No | Integer, port number to use for server APIs. Defaults to 4900
-
+metricRegistries | No | List, metrics registries for publishing the generated metrics. Defaults to Prometheus; Prometheus and CloudWatch are currently supported.

## General pipeline options

@@ -53,7 +52,20 @@ sslKeyFile | Conditionally | String, file-system path or AWS S3 path to the secu
useAcmCertForSSL | No | Boolean, enables TLS/SSL using certificate and private key from AWS Certificate Manager (ACM). Default is `false`.
acmCertificateArn | Conditionally | String, represents the ACM certificate ARN. ACM certificate take preference over S3 or local file system certificate. Required if `useAcmCertForSSL` is set to `true`.
awsRegion | Conditionally | String, represents the AWS region to use ACM or S3. Required if `useAcmCertForSSL` is set to `true` or `sslKeyCertChainFile` and `sslKeyFile` are AWS S3 paths.
+authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To provide custom authentication, use or create a plugin that implements [GrpcAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/GrpcAuthenticationProvider.java).

### http_source

This is a source plugin that supports the HTTP protocol. It currently supports only the JSON UTF-8 codec for incoming requests, e.g. `[{"key1": "value1"}, {"key2": "value2"}]`.

Option | Required | Description
:--- | :--- | :---
port | No | Integer, the port the source is running on. Default is `2021`. Valid options are between `0` and `65535`.
request_timeout | No | Integer, the request timeout in milliseconds. Default is `10_000`.
thread_count | No | Integer, the number of threads to keep in the ScheduledThreadPool. Default is `200`.
max_connection_count | No | Integer, the maximum allowed number of open connections. Default is `500`.
max_pending_requests | No | Integer, the maximum number of allowed tasks in the ScheduledThreadPool work queue. Default is `1024`.
authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To provide custom authentication, use or create a plugin that implements [ArmeriaHttpAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/ArmeriaHttpAuthenticationProvider.java).

### file

Source for flat file input.

Option | Required | Description
:--- | :--- | :---
path | Yes | String, path to the input file (e.g. `logs/my-log.log`).
+format | No | String, format of each line in the file. Valid options are `json` or `plain`. Default is `plain`.
+record_type | No | String, the record type that will be stored. Valid options are `string` or `event`. Default is `string`. If you would like to use the file source for log analytics use cases like grok, change this to `event`.

### pipeline

@@ -144,6 +157,22 @@ Option | Required | Description
:--- | :--- | :---
upper_case | No | Boolean, whether to convert to uppercase (`true`) or lowercase (`false`).

### grok_prepper

Takes unstructured data and uses pattern matching to structure it, extracting important keys and making the data more structured and queryable.

Option | Required | Description
:--- | :--- | :---
match | No | Map, specifies which keys to match specific patterns against. Default is `{}`.
keep_empty_captures | No | Boolean, enables preserving `null` captures. Default value is `false`.
named_captures_only | No | Boolean, specifies whether to keep only named captures. Default value is `true`.
break_on_match | No | Boolean, specifies whether to match all patterns or stop once the first successful match is found. Default is `true`.
keys_to_overwrite | No | List, specifies which existing keys are to be overwritten if there is a capture with the same key value. Default is `[]`.
pattern_definitions | No | Map that allows for custom patterns to be used inline. Default value is `{}`.
patterns_directories | No | List, specifies the path of directories that contain custom pattern files. Default value is `[]`.
pattern_files_glob | No | String, specifies which pattern files to use from the directories specified for `patterns_directories`. Default is `*`.
+target_key | No | String, specifies a parent level key to store all captures. Default value is `null`.
+timeout_millis | No | Integer, the maximum amount of time during which matching is performed. Setting this to `0` disables the timeout. Default value is `30,000`.
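The following sketch shows how these options might be combined in a pipeline configuration. It mirrors the grok usage in the pipeline examples elsewhere in these docs; the incoming field name `log`, the Apache pattern, and the `parsed` target key are illustrative choices, not required values:

```yaml
processor:
  - grok:
      match:
        log: [ "%{COMMONAPACHELOG}" ]  # parse Apache common log format out of the `log` field
      named_captures_only: true        # drop unnamed captures (the default)
      break_on_match: true             # stop after the first successful pattern match
      target_key: parsed               # nest all captures under a `parsed` key
      timeout_millis: 30000            # give up on a record after 30 seconds of matching
```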
## Sinks

diff --git a/_observability-plugins/data-prepper/get-started.md b/_observability-plugins/data-prepper/get-started.md
new file mode 100644
index 00000000..9708a0d3
--- /dev/null
+++ b/_observability-plugins/data-prepper/get-started.md
@@ -0,0 +1,58 @@
+---
+layout: default
+title: Get Started
+parent: Data Prepper
+nav_order: 1
+---
+
+# Get started with Data Prepper
+
+Data Prepper is an independent component, not an OpenSearch plugin, that converts data for use with OpenSearch. It's not bundled with the all-in-one OpenSearch installation packages.
+
+## Install Data Prepper
+
+To use the Docker image, pull it like any other image:
+
+```bash
+docker pull opensearchproject/data-prepper:latest
+```
+
+## Define a pipeline
+
+Build a pipeline by creating a pipeline configuration YAML using any of the built-in Data Prepper plugins.
+
+For more examples and details on pipeline configuration, see the [Pipelines]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/pipelines) guide.
+
+## Start Data Prepper
+
+Run the following command with your pipeline configuration YAML.
+
+```bash
+docker run --name data-prepper -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml opensearchproject/opensearch-data-prepper:latest
+```
+
+### Migrating from Logstash
+
+Data Prepper supports Logstash configuration files for a limited set of plugins. Simply use the Logstash configuration to run Data Prepper.
+
+```bash
+docker run --name data-prepper -v /full/path/to/logstash.conf:/usr/share/data-prepper/pipelines.conf opensearchproject/opensearch-data-prepper:latest
+```
+
+This feature is limited by the feature parity of Data Prepper. As of the Data Prepper 1.2 release, the following plugins from the Logstash configuration are supported:
+
+- HTTP Input plugin
+- Grok Filter plugin
+- Elasticsearch Output plugin
+- Amazon Elasticsearch Output plugin
+
+### Configure the Data Prepper server
+Data Prepper itself provides administrative HTTP endpoints such as `/list` to list pipelines and `/metrics/prometheus` to provide Prometheus-compatible metrics data. The port which serves these endpoints has a TLS configuration and is specified by a separate YAML file. Data Prepper Docker images secure these endpoints by default. We strongly recommend providing your own configuration file for securing production environments. Example:
+
+```yml
+ssl: true
+keyStoreFilePath: "/usr/share/data-prepper/keystore.jks"
+keyStorePassword: "password"
+privateKeyPassword: "other_password"
+serverPort: 1234
+```
\ No newline at end of file

diff --git a/_observability-plugins/data-prepper/index.md b/_observability-plugins/data-prepper/index.md
new file mode 100644
index 00000000..221de983
--- /dev/null
+++ b/_observability-plugins/data-prepper/index.md
@@ -0,0 +1,15 @@
+---
+layout: default
+title: Data Prepper
+nav_order: 80
+has_children: true
+has_toc: false
+---
+
+# Data Prepper
+
+Data Prepper is a server-side data collector with the ability to filter, enrich, transform, normalize, and aggregate data for downstream analytics and visualization.
+
+Data Prepper allows users to build custom pipelines to improve the operational view of their own applications. Two common uses for Data Prepper are trace and log analytics.
[Trace Analytics]({{site.url}}{{site.baseurl}}/observability-plugins/trace/index/) can help you visualize this flow of events and identify performance problems. [Log Analytics]({{site.url}}{{site.baseurl}}/observability-plugins/log-analytics/index) can improve searching and analysis and provide insights into your application.
+
+To get started building your own custom pipelines with Data Prepper, see the [Get Started]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/get-started/) guide.

diff --git a/_observability-plugins/trace/data-prepper.md b/_observability-plugins/data-prepper/pipelines.md
similarity index 54%
rename from _observability-plugins/trace/data-prepper.md
rename to _observability-plugins/data-prepper/pipelines.md
index 7683033b..a5e175dc 100644
--- a/_observability-plugins/trace/data-prepper.md
+++ b/_observability-plugins/data-prepper/pipelines.md
@@ -1,27 +1,11 @@
---
layout: default
-title: Data Prepper
-parent: Trace analytics
-nav_order: 20
+title: Pipelines
+parent: Data Prepper
+nav_order: 2
---

-# Data Prepper
-
-Data Prepper is an independent component, not an OpenSearch plugin, that converts data for use with OpenSearch. It's not bundled with the all-in-one OpenSearch installation packages.
-
-
-## Install Data Prepper
-
-To use the Docker image, pull it like any other image:
-
-```bash
-docker pull opensearchproject/data-prepper:latest
-```
-
-Otherwise, [download](https://opensearch.org/downloads.html) the appropriate archive for your operating system and unzip it.
-
-
-## Configure pipelines
+# Pipelines

To use Data Prepper, you define pipelines in a configuration YAML file. Each pipeline is a combination of a source, a buffer, zero or more preppers, and one or more sinks:

@@ -61,7 +45,39 @@ sample-pipeline:

- Sinks define where your data goes. In this case, the sink is an OpenSearch cluster.

-Pipelines can act as the source for other pipelines. In the following example, a pipeline takes data from the OpenTelemetry Collector and uses two other pipelines as sinks:
+## Examples
+
+This section provides some pipeline examples that you can use to start creating your own pipelines. For more information, see the [Data Prepper configuration reference]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/data-prepper-reference/) guide.
+
+The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
+
+### Log ingestion pipeline
+
+The following example demonstrates how to use the HTTP source and grok prepper plugins to process unstructured log data.
+
+```yaml
+log-pipeline:
+  source:
+    http:
+      ssl: false
+  processor:
+    - grok:
+        match:
+          log: [ "%{COMMONAPACHELOG}" ]
+  sink:
+    - opensearch:
+        hosts: [ "https://opensearch:9200" ]
+        insecure: true
+        username: admin
+        password: admin
+        index: apache_logs
+```
+
+Note: This example uses weak security. We strongly recommend securing all plugins which open external ports in production environments.
+
+### Trace Analytics pipeline
+
+The following example demonstrates how to build a pipeline that supports the [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugins/trace/ta-dashboards/). This pipeline takes data from the OpenTelemetry Collector and uses two other pipelines as sinks. These two separate pipelines index the trace and service map documents for the dashboard plugin.
```yml
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        insecure: true
        username: "ta-user"
        password: "ta-password"
        trace_analytics_raw: true
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        insecure: true
        username: "ta-user"
        password: "ta-password"
        trace_analytics_service_map: true
```
-
-To learn more, see the [Data Prepper configuration reference]({{site.url}}{{site.baseurl}}/observability-plugins/trace/data-prepper-reference/).
-
-## Configure the Data Prepper server
-Data Prepper itself provides administrative HTTP endpoints such as `/list` to list pipelines and `/metrics/prometheus` to provide Prometheus-compatible metrics data. The port which serves these endpoints, as well as TLS configuration, is specified by a separate YAML file. Example:
-
-```yml
-ssl: true
-keyStoreFilePath: "/usr/share/data-prepper/keystore.jks"
-keyStorePassword: "password"
-privateKeyPassword: "other_password"
-serverPort: 1234
-```
-
-## Start Data Prepper
-
-**Docker**
-
-```bash
-docker run --name data-prepper --expose 21890 -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml -v /full/path/to/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml opensearchproject/opensearch-data-prepper:latest
-```
-
-**macOS and Linux**
-
-```bash
-./data-prepper-tar-install.sh config/pipelines.yaml config/data-prepper-config.yaml
-```
-
-For production workloads, you likely want to run Data Prepper on a dedicated machine, which makes connectivity a concern. Data Prepper uses port 21890 and must be able to connect to both the OpenTelemetry Collector and the OpenSearch cluster. In the [sample applications](https://github.com/opensearch-project/Data-Prepper/tree/main/examples), you can see that all components use the same Docker network and expose the appropriate ports.

diff --git a/_observability-plugins/log-analytics.md b/_observability-plugins/log-analytics.md
new file mode 100644
index 00000000..85ac9053
--- /dev/null
+++ b/_observability-plugins/log-analytics.md
@@ -0,0 +1,95 @@
+---
+layout: default
+title: Log analytics
+nav_order: 70
+---
+
+# Log Ingestion
+
+Log ingestion provides a way to transform unstructured log data into structured data and ingest it into OpenSearch. This data can help improve your log collection in distributed applications.
+
+Structured log data allows for improved queries and filtering based on the data format when you are searching logs for an event.
+
+## Get started with log ingestion
+
+OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/), [OpenSearch]({{site.url}}{{site.baseurl}}/) and [OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/)---that fit into the OpenSearch ecosystem. The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.
+
+### Basic flow of data
+
+![Log data flow diagram from a distributed application to OpenSearch]({{site.url}}{{site.baseurl}}/images/la.png)
+
+1. Log Ingestion relies on you adding log collection to your application's environment to gather and send log data.
+
+   (In the [example](#example) below, [Fluent Bit](https://docs.fluentbit.io/manual/) is used as a log collector that collects log data from a file and sends the log data to Data Prepper.)
+
+2. [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/) receives the log data, transforms the data into a structured format, and indexes it on an OpenSearch cluster.
+
+3. The data can then be explored through OpenSearch search queries or the **Discover** page in OpenSearch Dashboards.
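As a quick, collector-free illustration of this flow, a single record can also be posted straight to Data Prepper's HTTP source. This sketch assumes the source's default port `2021` (from the `http_source` reference table) and the `/log/ingest` request path used in the Data Prepper log ingestion examples; the log payload itself is just sample data:

```bash
# Send one JSON-encoded log record to a locally running Data Prepper HTTP source
curl -s -H "Content-Type: application/json" \
  -d '[{"log": "sample log line"}]' \
  http://localhost:2021/log/ingest
```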
+
+### Example
+
+This example mimics the writing of log entries to a log file that are then processed by Data Prepper and stored in OpenSearch.
+
+Download or clone the [Data Prepper repository](https://github.com/opensearch-project/data-prepper). Then navigate to `examples/log-ingestion/` and open `docker-compose.yml` in a text editor. This file contains containers for:
+
+- [Fluent Bit](https://docs.fluentbit.io/manual/) (`fluent-bit`)
+- Data Prepper (`data-prepper`)
+- A single-node OpenSearch cluster (`opensearch`)
+- OpenSearch Dashboards (`opensearch-dashboards`).
+
+Close the file and run `docker-compose up --build` to start the containers.
+
+After the containers start, your ingestion pipeline is set up and ready to ingest log data. The `fluent-bit` container is configured to read log data from `test.log`. Run the following command to generate log data to send to the log ingestion pipeline.
+
+```
+echo '63.173.168.120 - - [04/Nov/2021:15:07:25 -0500] "GET /search/tag/list HTTP/1.0" 200 5003' >> test.log
+```
+
+Fluent Bit will collect the log data and send it to Data Prepper:
+
+```
+[2021/12/02 15:35:41] [ info] [output:http:http.0] data-prepper:2021, HTTP status=200
+200 OK
+```
+
+Data Prepper will process the log and index it:
+
+```
+2021-12-02T15:35:44,499 [log-pipeline-processor-worker-1-thread-1] INFO com.amazon.dataprepper.pipeline.ProcessWorker - log-pipeline Worker: Processing 1 records from buffer
+```
+
+This should result in a single document being written to the OpenSearch cluster in the `apache_logs` index as defined in the `log_pipeline.yaml` file.
+
+Run the following command to see one of the raw documents in the OpenSearch cluster:
+
+```bash
+curl -X GET -u 'admin:admin' -k 'https://localhost:9200/apache_logs/_search?pretty&size=1'
+```
+
+The response should show the parsed log data:
+
+```
+  "hits" : [
+    {
+      "_index" : "apache_logs",
+      "_type" : "_doc",
+      "_id" : "yGrJe30BgI2EWNKtDZ1g",
+      "_score" : 1.0,
+      "_source" : {
+        "date" : 1.638459307042312E9,
+        "log" : "63.173.168.120 - - [04/Nov/2021:15:07:25 -0500] \"GET /search/tag/list HTTP/1.0\" 200 5003",
+        "request" : "/search/tag/list",
+        "auth" : "-",
+        "ident" : "-",
+        "response" : "200",
+        "bytes" : "5003",
+        "clientip" : "63.173.168.120",
+        "verb" : "GET",
+        "httpversion" : "1.0",
+        "timestamp" : "04/Nov/2021:15:07:25 -0500"
+      }
+    }
+  ]
+```
+
+The same data can be viewed in OpenSearch Dashboards by visiting the **Discover** page and searching the `apache_logs` index. Remember, you must create the index in OpenSearch Dashboards if this is your first time searching for the index.

diff --git a/_observability-plugins/trace/get-started.md b/_observability-plugins/trace/get-started.md
index 4bc535a0..ffd16e3c 100644
--- a/_observability-plugins/trace/get-started.md
+++ b/_observability-plugins/trace/get-started.md
@@ -9,7 +9,6 @@ nav_order: 1

OpenSearch Trace Analytics consists of two components---Data Prepper and the Trace Analytics OpenSearch Dashboards plugin---that fit into the OpenTelemetry and OpenSearch ecosystems. The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started.

-
## Basic flow of data

![Data flow diagram from a distributed application to OpenSearch]({{site.url}}{{site.baseurl}}/images/ta.svg)
1. The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/) receives data from the application and formats it into OpenTelemetry data.

-1. [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/trace/data-prepper/) processes the OpenTelemetry data, transforms it for use in OpenSearch, and indexes it on an OpenSearch cluster.
+1. [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/) processes the OpenTelemetry data, transforms it for use in OpenSearch, and indexes it on an OpenSearch cluster.

1. The [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugins/trace/ta-dashboards/) displays the data in near real-time as a series of charts and tables, with an emphasis on service architecture, latency, error rate, and throughput.

-
## Jaeger HotROD

One Trace Analytics sample application is the Jaeger HotROD demo, which mimics the flow of data through a distributed application.

diff --git a/images/la-diagram.drawio b/images/la-diagram.drawio
new file mode 100644
index 00000000..16fd005c
--- /dev/null
+++ b/images/la-diagram.drawio
@@ -0,0 +1 @@
+[compressed draw.io diagram data for the log analytics flow diagram omitted]
\ No newline at end of file

diff --git a/images/la.png b/images/la.png
new file mode 100644
index 0000000000000000000000000000000000000000..224d872d119c462a26dd855eadd9d03fee5b9bea
GIT binary patch
[binary image data for images/la.png (literal 21176 bytes) omitted]
z{Gz|{eh_HHCFxRi$*bmNZ>1icdDyKLyJ$)1nK}BhV&~)7fG77n-FbgcmbFppnVzNE z;;V0b&+q)SV4ku?WXhTttJSN31HG-CQxOHgfqfu1Vy_%-ZGF*ks`S*u7w?2SL@#sd zt^2#7=4?>RmmPh;RIuX06wTyx;Gh9>mifwMEw!^kGWTnrx}F=?y*9f{!RZIbiz};T zy+m|d!+dWkL?85dw`VIE=b?VIG<%O@_cwUV^Rx)!##y6uQ-O|9(yz}Ar z%3gXO3IX=E`=hSDdUO%EpY!4du6cIVt8SUByDf70?y?m>;eO!WqL{Rw@BUif*;mW`@$pf9i=51ve?g;=#%?~P1)tB( zHV3ZlVC&=SJGENR@Y$qG0eY;Pwt2`;5(`W^{FGtk(jPy5Jveyyx5qRSOW@Fk(1%*i z^qr;8)qsOGtMa(Me*GGF?-OuyT)pX&@AD4d`Y9RPzvxcJqBs4p3Za3;si8sArIN!& zZ1fmBYNDk zugmE!)CP`ZyPny>7j_mn0Hfuz#syLJC^{$nx8na%@l(;eu|(0Bp?a%j4}G+5%Y%d*@8Q&KB?`ItB#iS&YLOWy7TO` z4raut|AFnCKYs2xbpP`|=a6;{;9d>U%^O<|^(rZ9Pq$h3cfM3KOhI9x4(iOqa)<4V-yiTPI^98?beQE7#?n6>Hfche`G~4r@i{sBKD{ zMIMn-fTt}kXa@G*`6_ad>UN;K8_)prCZ*rVGgJzUvcL)a(gVpzH4aeTpb2PzpvX;j zME(Fam0u`01g>M2n~ms$13Tjlf Date: Tue, 14 Dec 2021 08:48:53 -0600 Subject: [PATCH 6/7] addressing pr feedback Signed-off-by: Christopher Manning --- _config.yml | 6 +- .../data-prepper/get-started.md | 58 ------- _observability-plugins/data-prepper/index.md | 15 -- .../data-prepper/pipelines.md | 122 -------------- _observability-plugins/index.md | 26 --- .../data-prepper/data-prepper-reference.md | 16 +- _observability/data-prepper/get-started.md | 63 ++++++++ _observability/data-prepper/index.md | 15 ++ _observability/data-prepper/pipelines.md | 151 ++++++++++++++++++ .../event-analytics.md | 6 +- _observability/index.md | 28 ++++ .../log-analytics.md | 10 +- .../notebooks.md | 0 .../operational-panels.md | 4 +- .../ppl/commands.md | 0 .../ppl/datatypes.md | 0 .../ppl/endpoint.md | 0 .../ppl/functions.md | 0 .../ppl/identifiers.md | 0 .../ppl/index.md | 0 .../ppl/protocol.md | 0 .../ppl/settings.md | 0 .../trace/get-started.md | 6 +- .../trace/index.md | 0 .../trace/ta-dashboards.md | 0 _opensearch/data-streams.md | 2 +- images/data-prepper-pipeline.drawio | 1 + images/data-prepper-pipeline.png | Bin 0 -> 25811 bytes 28 files changed, 282 insertions(+), 247 deletions(-) delete mode 100644 _observability-plugins/data-prepper/get-started.md delete mode 100644 _observability-plugins/data-prepper/index.md delete mode 100644 _observability-plugins/data-prepper/pipelines.md delete mode 100644 _observability-plugins/index.md rename {_observability-plugins => _observability}/data-prepper/data-prepper-reference.md (92%) create mode 100644 _observability/data-prepper/get-started.md create mode 100644 _observability/data-prepper/index.md create mode 100644 _observability/data-prepper/pipelines.md rename {_observability-plugins => _observability}/event-analytics.md (84%) create mode 100644 _observability/index.md rename {_observability-plugins => _observability}/log-analytics.md (78%) rename {_observability-plugins => _observability}/notebooks.md (100%) rename {_observability-plugins => _observability}/operational-panels.md (86%) rename {_observability-plugins => _observability}/ppl/commands.md (100%) rename {_observability-plugins => _observability}/ppl/datatypes.md (100%) rename {_observability-plugins => _observability}/ppl/endpoint.md (100%) rename {_observability-plugins => _observability}/ppl/functions.md (100%) rename {_observability-plugins => _observability}/ppl/identifiers.md (100%) rename {_observability-plugins => _observability}/ppl/index.md (100%) rename {_observability-plugins => _observability}/ppl/protocol.md (100%) rename {_observability-plugins => _observability}/ppl/settings.md (100%) rename {_observability-plugins => _observability}/trace/get-started.md (92%) rename {_observability-plugins => _observability}/trace/index.md (100%) 
rename {_observability-plugins => _observability}/trace/ta-dashboards.md (100%) create mode 100644 images/data-prepper-pipeline.drawio create mode 100644 images/data-prepper-pipeline.png diff --git a/_config.yml b/_config.yml index 89e8d8e5..c398eeb5 100644 --- a/_config.yml +++ b/_config.yml @@ -48,7 +48,7 @@ collections: replication-plugin: permalink: /:collection/:path/ output: true - observability-plugins: + observability: permalink: /:collection/:path/ output: true monitoring-plugins: @@ -90,8 +90,8 @@ just_the_docs: replication-plugin: name: Replication plugin nav_fold: true - observability-plugins: - name: Observability plugins + observability: + name: Observability nav_fold: true monitoring-plugins: name: Monitoring plugins diff --git a/_observability-plugins/data-prepper/get-started.md b/_observability-plugins/data-prepper/get-started.md deleted file mode 100644 index 9708a0d3..00000000 --- a/_observability-plugins/data-prepper/get-started.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -layout: default -title: Get Started -parent: Data Prepper -nav_order: 1 ---- - -# Get started with Data Prepper - -Data Prepper is an independent component, not an OpenSearch plugin, that converts data for use with OpenSearch. It's not bundled with the all-in-one OpenSearch installation packages. - -## Install Data Prepper - -To use the Docker image, pull it like any other image: - -```bash -docker pull opensearchproject/data-prepper:latest -``` - -## Define a pipeline - -Build a pipeline by creating a pipeline configuration YAML using any of the built-in Data Prepper plugins. - -For more examples and details on pipeline configurations see [Pipelines]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/pipelines) guide. - -## Start Data Prepper - -Run the following command with your pipeline configuration YAML. - -```bash -docker run --name data-prepper -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml opensearchproject/opensearch-data-prepper:latest -``` - -### Migrating from Logstash - -Data Prepper supports Logstash configuration files for a limited set of plugins. Simply use the logstash config to run Data Prepper. - -```bash -docker run --name data-prepper -v /full/path/to/logstash.conf:/usr/share/data-prepper/pipelines.conf opensearchproject/opensearch-data-prepper:latest -``` - -This feature is limited by feature parity of Data Prepper. As of Data Prepper 1.2 release, the following plugins from the Logstash configuration are supported: - -- HTTP Input plugin -- Grok Filter plugin -- Elasticsearch Output plugin -- Amazon Elasticsearch Output plugin - -### Configure the Data Prepper server -Data Prepper itself provides administrative HTTP endpoints such as `/list` to list pipelines and `/metrics/prometheus` to provide Prometheus-compatible metrics data. The port which serves these endpoints has a TLS configuration and is specified by a separate YAML file. Data Prepper docker images secures these endpoints by default. We strongly recommend providing your own configuration file for securing production environments. 
Example: - -```yml -ssl: true -keyStoreFilePath: "/usr/share/data-prepper/keystore.jks" -keyStorePassword: "password" -privateKeyPassword: "other_password" -serverPort: 1234 -``` \ No newline at end of file diff --git a/_observability-plugins/data-prepper/index.md b/_observability-plugins/data-prepper/index.md deleted file mode 100644 index 221de983..00000000 --- a/_observability-plugins/data-prepper/index.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -layout: default -title: Data Prepper -nav_order: 80 -has_children: true -has_toc: false ---- - -# Data Prepper - -Data Prepper is a server side data collector with abilities to filter, enrich, transform, normalize and aggregate data for downstream analytics and visualization. - -Data Prepper allows users to build custom pipelines to improve their operational view of their own applications. Two common uses for Data Prepper are trace and log analytics. [Trace Anaytics]({{site.url}}{{site.baseurl}}/observability-plugins/trace/index/) can help you visualize this flow of events and identify performance problems. [Log Anayltics]({{site.url}}{{site.baseurl}}/observability-plugins/log-analytics/index) can improve searching, analyzing and provide insights into your application. - -To get started building your own custom pipelines with Data Prepper, see the [Get Started]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/get-started/) guide. diff --git a/_observability-plugins/data-prepper/pipelines.md b/_observability-plugins/data-prepper/pipelines.md deleted file mode 100644 index a5e175dc..00000000 --- a/_observability-plugins/data-prepper/pipelines.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -layout: default -title: Pipelines -parent: Data Prepper -nav_order: 2 ---- - -# Pipelines - -To use Data Prepper, you define pipelines in a configuration YAML file. Each pipeline is a combination of a source, a buffer, zero or more preppers, and one or more sinks: - -```yml -sample-pipeline: - workers: 4 # the number of workers - delay: 100 # in milliseconds, how long workers wait between read attempts - source: - otel_trace_source: - ssl: true - sslKeyCertChainFile: "config/demo-data-prepper.crt" - sslKeyFile: "config/demo-data-prepper.key" - buffer: - bounded_blocking: - buffer_size: 1024 # max number of records the buffer accepts - batch_size: 256 # max number of records the buffer drains after each read - prepper: - - otel_trace_raw_prepper: - sink: - - opensearch: - hosts: ["https:localhost:9200"] - cert: "config/root-ca.pem" - username: "ta-user" - password: "ta-password" - trace_analytics_raw: true -``` - -- Sources define where your data comes from. In this case, the source is the OpenTelemetry Collector (`otel_trace_source`) with some optional SSL settings. - -- Buffers store data as it passes through the pipeline. - - By default, Data Prepper uses its one and only buffer, the `bounded_blocking` buffer, so you can omit this section unless you developed a custom buffer or need to tune the buffer settings. - -- Preppers perform some action on your data: filter, transform, enrich, etc. - - You can have multiple preppers, which run sequentially from top to bottom, not in parallel. The `otel_trace_raw_prepper` prepper converts OpenTelemetry data into OpenSearch-compatible JSON documents. - -- Sinks define where your data goes. In this case, the sink is an OpenSearch cluster. - -## Examples - -This section provides some pipeline examples that you can use to start creating your own pipelines. 
For more information, see [Data Prepper configuration reference]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/data-prepper-reference/) guide. - -The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started. - -### Log ingestion pipeline - -The following example demonstrates how to use HTTP source and Grok prepper plugins to process unstructured log data. - -```yaml -log-pipeline: - source: - http: - ssl: false - processor: - - grok: - match: - log: [ "%{COMMONAPACHELOG}" ] - sink: - - opensearch: - hosts: [ "https://opensearch:9200" ] - insecure: true - username: admin - password: admin - index: apache_logs -``` - -Note: This example uses weak security. We strongly recommend securing all plugins which open external ports in production environments. - -### Trace Analytics pipeline - -The following example demonstrates how to build a pipeline that supports the [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugins/trace/ta-dashboards/). This pipeline takes data from the OpenTelemetry Collector and uses two other pipelines as sinks. These two separate pipelines index trace and the service map documents for the dashboard plugin. - -```yml -entry-pipeline: - delay: "100" - source: - otel_trace_source: - ssl: true - sslKeyCertChainFile: "config/demo-data-prepper.crt" - sslKeyFile: "config/demo-data-prepper.key" - sink: - - pipeline: - name: "raw-pipeline" - - pipeline: - name: "service-map-pipeline" -raw-pipeline: - source: - pipeline: - name: "entry-pipeline" - prepper: - - otel_trace_raw_prepper: - sink: - - opensearch: - hosts: ["https://localhost:9200" ] - cert: "config/root-ca.pem" - username: "ta-user" - password: "ta-password" - trace_analytics_raw: true -service-map-pipeline: - delay: "100" - source: - pipeline: - name: "entry-pipeline" - prepper: - - service_map_stateful: - sink: - - opensearch: - hosts: ["https://localhost:9200"] - cert: "config/root-ca.pem" - username: "ta-user" - password: "ta-password" - trace_analytics_service_map: true -``` diff --git a/_observability-plugins/index.md b/_observability-plugins/index.md deleted file mode 100644 index 9568219b..00000000 --- a/_observability-plugins/index.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -layout: default -title: About Observability -nav_order: 1 -has_children: false -redirect_from: - - /observability-plugins/ ---- - -# About Observability -OpenSearch Dashboards -{: .label .label-yellow :} - -The Observability plugins are a collection of plugins that let you visualize data-driven events by using Piped Processing Language to explore, discover, and query data stored in OpenSearch. - -Your experience of exploring data might differ, but if you're new to exploring data to create visualizations, we recommend trying a workflow like the following: - -1. Explore data over a certain timeframe using [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index). -1. Use [event analytics]({{site.url}}{{site.baseurl}}/observability-plugins/event-analytics) to turn data-driven events into visualizations. - ![Sample Event Analytics View]({{site.url}}{{site.baseurl}}/images/event-analytics.png) -1. Create [operational panels]({{site.url}}{{site.baseurl}}/observability-plugins/operational-panels) and add visualizations to compare data the way you like. - ![Sample Operational Panel View]({{site.url}}{{site.baseurl}}/images/operational-panel.png) -1. 
Use [trace analytics]({{site.url}}{{site.baseurl}}/observability-plugins/trace/index) to create traces and dive deep into your data. - ![Sample Trace Analytics View]({{site.url}}{{site.baseurl}}/images/observability-trace.png) -1. Leverage [notebooks]({{site.url}}{{site.baseurl}}/observability-plugins/notebooks) to combine different visualizations and code blocks that you can share with team members. - ![Sample Notebooks View]({{site.url}}{{site.baseurl}}/images/notebooks.png) diff --git a/_observability-plugins/data-prepper/data-prepper-reference.md b/_observability/data-prepper/data-prepper-reference.md similarity index 92% rename from _observability-plugins/data-prepper/data-prepper-reference.md rename to _observability/data-prepper/data-prepper-reference.md index 8458054a..417a1f43 100644 --- a/_observability-plugins/data-prepper/data-prepper-reference.md +++ b/_observability/data-prepper/data-prepper-reference.md @@ -7,7 +7,7 @@ nav_order: 3 # Data Prepper configuration reference -This page lists all supported Data Prepper server, sources, buffers, preppers, and sinks, along with their associated options. For example configuration files, see [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/pipelines/). +This page lists all supported Data Prepper server options, sources, buffers, preppers, and sinks, along with their associated options. For example configuration files, see [Data Prepper]({{site.url}}{{site.baseurl}}/observability/data-prepper/pipelines/). ## Data Prepper server options @@ -52,7 +52,7 @@ sslKeyFile | Conditionally | String, file-system path or AWS S3 path to the secu useAcmCertForSSL | No | Boolean, enables TLS/SSL using certificate and private key from AWS Certificate Manager (ACM). Default is `false`. acmCertificateArn | Conditionally | String, represents the ACM certificate ARN. The ACM certificate takes preference over S3 or local file system certificates. Required if `useAcmCertForSSL` is set to `true`. awsRegion | Conditionally | String, represents the AWS region to use ACM or S3. Required if `useAcmCertForSSL` is set to `true` or `sslKeyCertChainFile` and `sslKeyFile` are AWS S3 paths. -authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To provide customer authentication use or create a plugin which implements: [GrpcAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/GrpcAuthenticationProvider.java) +authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication, define the `http_basic` plugin with a `username` and `password`. To provide custom authentication, use or create a plugin which implements [GrpcAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/GrpcAuthenticationProvider.java). ### http_source @@ -64,8 +64,8 @@ port | No | Integer, the port the source is running on. Default is `2021`. Valid request_timeout | No | Integer, the request timeout in millis. Default is `10_000`. thread_count | No | Integer, the number of threads to keep in the ScheduledThreadPool. Default is `200`. max_connection_count | No | Integer, the maximum allowed number of open connections. Default is `500`. -max_pending_requests | No | Ingeger, the maximum number of allowed tasks in ScheduledThreadPool work queue. Default is `1024. -authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To provide customer authentication use or create a plugin which implements: [ArmeriaHttpAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/ArmeriaHttpAuthenticationProvider.java) +max_pending_requests | No | Integer, the maximum number of allowed tasks in the ScheduledThreadPool work queue. Default is `1024`. +authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication, define the `http_basic` plugin with a `username` and `password`. To provide custom authentication, use or create a plugin which implements [ArmeriaHttpAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/ArmeriaHttpAuthenticationProvider.java).
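+
+For example, a minimal sketch of an `http` source secured with the `http_basic` plugin might look like the following (the credentials are placeholders):
+
+```yml
+source:
+  http:
+    port: 2021  # the default port
+    authentication:
+      http_basic:
+        username: my-user      # placeholder username
+        password: my_s3cr3t    # placeholder password
+```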
### file @@ -163,16 +163,16 @@ Takes unstructured data and utilizes pattern matching to structure and extract i Option | Required | Description :--- | :--- | :--- -match | No | Map, specifies which keys to match specific patterns against. Default is `{}`. -keep_empty_captures | No | Boolean, enables preserving `null` captures. Defatul value is `false`. -named_captures_only | No | Boolean, enables whether to keep only named captures. Default vaule is `true`. +match | No | Map, specifies which keys to match specific patterns against. Default is an empty body. +keep_empty_captures | No | Boolean, enables preserving `null` captures. Default value is `false`. +named_captures_only | No | Boolean, enables whether to keep only named captures. Default value is `true`. break_on_match | No | Boolean, specifies whether to match all patterns or stop once the first successful match is found. Default is `true`. keys_to_overwrite | No | List, specifies which existing keys are to be overwritten if there is a capture with the same key value. Default is `[]`. pattern_definitions | No | Map that allows for custom pattern use inline. Default value is `{}`. patterns_directories | No | List, specifies the path of directories that contain custom pattern files. Default value is `[]`. pattern_files_glob | No | String, specifies which pattern files to use from the directories specified for `pattern_directories`. Default is `*`. target_key | No | String, specifies a parent level key to store all captures. Default value is `null`. -timeout_millis | Integer, the maximum amount of time that matching will be performed. Setting to `0` will disable the timeout. Default value is `30,000`. +timeout_millis | No | Integer, the maximum amount of time that matching will be performed. Setting to `0` will disable the timeout. Default value is `30,000`.
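+
+As a minimal sketch, a `grok` prepper that uses the options above to parse Apache-style log lines might look like the following (the source key name `log` and the pattern choice are illustrative, mirroring the log ingestion example in [Pipelines]({{site.url}}{{site.baseurl}}/observability/data-prepper/pipelines/)):
+
+```yml
+processor:
+  - grok:
+      match:
+        # parse the value of the "log" key using the built-in Apache pattern
+        log: [ "%{COMMONAPACHELOG}" ]
+      named_captures_only: true  # keep only named captures (the default)
+      timeout_millis: 30000      # stop matching a record after 30 seconds
+```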
## Sinks diff --git a/_observability/data-prepper/get-started.md b/_observability/data-prepper/get-started.md new file mode 100644 index 00000000..75cf17ce --- /dev/null +++ b/_observability/data-prepper/get-started.md @@ -0,0 +1,63 @@ +--- +layout: default +title: Get Started +parent: Data Prepper +nav_order: 1 +--- + +# Get started with Data Prepper + +Data Prepper is an independent component, not an OpenSearch plugin, that converts data for use with OpenSearch. It's not bundled with the all-in-one OpenSearch installation packages. + +## 1. Install Data Prepper + +To use the Docker image, pull it like any other image: + +```bash +docker pull opensearchproject/data-prepper:latest +``` + +## 2. Define a pipeline + +Create a Data Prepper pipeline file, `pipelines.yaml`, with: + +```yaml +simple-sample-pipeline: + workers: 2 + delay: "5000" + source: + random: + sink: + - stdout: +``` + +## 3. Start Data Prepper + +Run the following command with your pipeline configuration YAML. + +```bash +docker run --name data-prepper \ + -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \ + opensearchproject/opensearch-data-prepper:latest +``` + +You should see log output, and after a few seconds, some UUIDs in the output. It should look something like the following: + +``` +2021-09-30T20:19:44,147 [main] INFO com.amazon.dataprepper.pipeline.server.DataPrepperServer - Data Prepper server running at :4900 +2021-09-30T20:19:44,681 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer +2021-09-30T20:19:45,183 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer +2021-09-30T20:19:45,687 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer +2021-09-30T20:19:46,191 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer +2021-09-30T20:19:46,694 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer +2021-09-30T20:19:47,200 [random-source-pool-0] INFO com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer +2021-09-30T20:19:49,181 [simple-test-pipeline-processor-worker-1-thread-1] INFO com.amazon.dataprepper.pipeline.ProcessWorker - simple-test-pipeline Worker: Processing 6 records from buffer +07dc0d37-da2c-447e-a8df-64792095fb72 +5ac9b10a-1d21-4306-851a-6fb12f797010 +99040c79-e97b-4f1d-a70b-409286f2a671 +5319a842-c028-4c17-a613-3ef101bd2bdd +e51e700e-5cab-4f6d-879a-1c3235a77d18 +b4ed2d7e-cf9c-4e9d-967c-b18e8af35c90 +``` + +The sample pipeline configuration above demonstrates a simple pipeline with a source (`random`) sending data to a sink (`stdout`). For more examples and details on more advanced pipeline configurations, see the [Pipelines]({{site.url}}{{site.baseurl}}/observability/data-prepper/pipelines) guide. diff --git a/_observability/data-prepper/index.md b/_observability/data-prepper/index.md new file mode 100644 index 00000000..a1c688c9 --- /dev/null +++ b/_observability/data-prepper/index.md @@ -0,0 +1,15 @@ +--- +layout: default +title: Data Prepper +nav_order: 80 +has_children: true +has_toc: false +--- + +# Data Prepper + +Data Prepper is a server-side data collector capable of filtering, enriching, transforming, normalizing, and aggregating data for downstream analytics and visualization. + +Data Prepper allows users to build custom pipelines to improve their operational view of their own applications.
Two common uses for Data Prepper are trace and log analytics. [Trace Analytics]({{site.url}}{{site.baseurl}}/observability/trace/index/) can help you visualize the flow of events and identify performance problems. [Log Analytics]({{site.url}}{{site.baseurl}}/observability/log-analytics/) can improve log searching and analysis and provide insights into your application. + +To get started building your own custom pipelines with Data Prepper, see the [Get Started]({{site.url}}{{site.baseurl}}/observability/data-prepper/get-started/) guide. diff --git a/_observability/data-prepper/pipelines.md b/_observability/data-prepper/pipelines.md new file mode 100644 index 00000000..b1b9bf4e --- /dev/null +++ b/_observability/data-prepper/pipelines.md @@ -0,0 +1,151 @@ +--- +layout: default +title: Pipelines +parent: Data Prepper +nav_order: 2 +--- + +# Pipelines + +![Data Prepper Pipeline]({{site.url}}{{site.baseurl}}/images/data-prepper-pipeline.png) + +To use Data Prepper, you define pipelines in a configuration YAML file. Each pipeline is a combination of a source, a buffer, zero or more preppers, and one or more sinks. For example: + +```yml +simple-sample-pipeline: + workers: 2 # the number of workers + delay: 5000 # in milliseconds, how long workers wait between read attempts + source: + random: + buffer: + bounded_blocking: + buffer_size: 1024 # max number of records the buffer accepts + batch_size: 256 # max number of records the buffer drains after each read + processor: + - string_converter: + upper_case: true + sink: + - stdout: +``` + +- Sources define where your data comes from. In this case, the source is a random UUID generator (`random`). + +- Buffers store data as it passes through the pipeline. + + By default, Data Prepper uses its one and only buffer, the `bounded_blocking` buffer, so you can omit this section unless you developed a custom buffer or need to tune the buffer settings. + +- Preppers perform some action on your data: filter, transform, enrich, etc. + + You can have multiple preppers, which run sequentially from top to bottom, not in parallel. The `string_converter` prepper transforms strings by making them uppercase. + +- Sinks define where your data goes. In this case, the sink is stdout. + +## Examples + +This section provides some pipeline examples that you can use to start creating your own pipelines. For more information, see the [Data Prepper configuration reference]({{site.url}}{{site.baseurl}}/observability/data-prepper/data-prepper-reference/) guide. + +The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started. + +### Log ingestion pipeline + +The following example demonstrates how to use the HTTP source and Grok prepper plugins to process unstructured log data. + +```yaml +log-pipeline: + source: + http: + ssl: false + processor: + - grok: + match: + log: [ "%{COMMONAPACHELOG}" ] + sink: + - opensearch: + hosts: [ "https://opensearch:9200" ] + insecure: true + username: admin + password: admin + index: apache_logs +``` + +Note: This example uses weak security. We strongly recommend securing all plugins which open external ports in production environments. + +### Trace Analytics pipeline + +The following example demonstrates how to build a pipeline that supports the [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability/trace/ta-dashboards/). This pipeline takes data from the OpenTelemetry Collector and uses two other pipelines as sinks.
These two separate pipelines index the trace and service map documents for the dashboard plugin. + +```yml +entry-pipeline: + delay: "100" + source: + otel_trace_source: + ssl: false + sink: + - pipeline: + name: "raw-pipeline" + - pipeline: + name: "service-map-pipeline" +raw-pipeline: + source: + pipeline: + name: "entry-pipeline" + prepper: + - otel_trace_raw_prepper: + sink: + - opensearch: + hosts: ["https://localhost:9200" ] + insecure: true + username: admin + password: admin + trace_analytics_raw: true +service-map-pipeline: + delay: "100" + source: + pipeline: + name: "entry-pipeline" + prepper: + - service_map_stateful: + sink: + - opensearch: + hosts: ["https://localhost:9200"] + insecure: true + username: admin + password: admin + trace_analytics_service_map: true +``` + +## Migrating from Logstash + +Data Prepper supports Logstash configuration files for a limited set of plugins. Simply use the Logstash configuration file to run Data Prepper. + +```bash +docker run --name data-prepper \ + -v /full/path/to/logstash.conf:/usr/share/data-prepper/pipelines.conf \ + opensearchproject/opensearch-data-prepper:latest +``` + +This feature is limited by the feature parity of Data Prepper. As of the Data Prepper 1.2 release, the following plugins from the Logstash configuration are supported: + +- HTTP Input plugin +- Grok Filter plugin +- Elasticsearch Output plugin +- Amazon Elasticsearch Output plugin + +## Configure the Data Prepper server +Data Prepper itself provides administrative HTTP endpoints such as `/list` to list pipelines and `/metrics/prometheus` to provide Prometheus-compatible metrics data. The port which serves these endpoints has a TLS configuration and is specified by a separate YAML file. The Data Prepper Docker image secures these endpoints by default. We strongly recommend providing your own configuration file for securing production environments. Here is an example `data-prepper-config.yaml`: + +```yml +ssl: true +keyStoreFilePath: "/usr/share/data-prepper/keystore.jks" +keyStorePassword: "password" +privateKeyPassword: "other_password" +serverPort: 1234 +``` + +To configure the Data Prepper server, run Data Prepper with the additional YAML file: + +```bash +docker run --name data-prepper -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \ + -v /full/path/to/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml \ + opensearchproject/opensearch-data-prepper:latest +``` \ No newline at end of file diff --git a/_observability-plugins/event-analytics.md b/_observability/event-analytics.md similarity index 84% rename from _observability-plugins/event-analytics.md rename to _observability/event-analytics.md index ed440f1f..82b841fa 100644 --- a/_observability-plugins/event-analytics.md +++ b/_observability/event-analytics.md @@ -6,7 +6,7 @@ nav_order: 10 # Event analytics -Event analytics in observability is where you can use [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index) (PPL) queries to build and view different visualizations of your data. +Event analytics in observability is where you can use [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability/ppl/index) (PPL) queries to build and view different visualizations of your data. ## Get started with event analytics @@ -24,10 +24,10 @@ source = opensearch_dashboards_sample_data_logs | fields host | stats count() By default, Dashboards shows results from the last 15 minutes of your data.
To see data from a different timeframe, use the date and time selector. -For more information about building PPL queries, see [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index). +For more information about building PPL queries, see [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability/ppl/index). ## Save a visualization -After Dashboards generates a visualization, you must save it if you want to return to it at a later time or if you want to add it to an [operational panel]({{site.url}}{{site.baseurl}}/observability-plugins/operational-panels). +After Dashboards generates a visualization, you must save it if you want to return to it at a later time or if you want to add it to an [operational panel]({{site.url}}{{site.baseurl}}/observability/operational-panels). To save a visualization, expand the save dropdown menu next to **Run**, enter a name for your visualization, then choose **Save**. You can reopen any saved visualizations on the event analytics page. diff --git a/_observability/index.md b/_observability/index.md new file mode 100644 index 00000000..918ad21b --- /dev/null +++ b/_observability/index.md @@ -0,0 +1,28 @@ +--- +layout: default +title: About Observability +nav_order: 1 +has_children: false +redirect_from: + - /observability/ +--- + +# About Observability +OpenSearch Dashboards +{: .label .label-yellow :} + +Observability is a collection of plugins and applications that let you visualize data-driven events by using Piped Processing Language to explore, discover, and query data stored in OpenSearch. + +Your experience of exploring data might differ, but if you're new to exploring data to create visualizations, we recommend trying a workflow like the following: + +1. Explore data over a certain timeframe using [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability/ppl/index). +2. Use [event analytics]({{site.url}}{{site.baseurl}}/observability/event-analytics) to turn data-driven events into visualizations. + ![Sample Event Analytics View]({{site.url}}{{site.baseurl}}/images/event-analytics.png) +3. Create [operational panels]({{site.url}}{{site.baseurl}}/observability/operational-panels) and add visualizations to compare data the way you like. + ![Sample Operational Panel View]({{site.url}}{{site.baseurl}}/images/operational-panel.png) +4. Use [log analytics]({{site.url}}{{site.baseurl}}/observability/log-analytics) to transform unstructured log data. +5. Use [trace analytics]({{site.url}}{{site.baseurl}}/observability/trace/index) to create traces and dive deep into your data. + ![Sample Trace Analytics View]({{site.url}}{{site.baseurl}}/images/observability-trace.png) +6. Leverage [notebooks]({{site.url}}{{site.baseurl}}/observability/notebooks) to combine different visualizations and code blocks that you can share with team members. + ![Sample Notebooks View]({{site.url}}{{site.baseurl}}/images/notebooks.png) diff --git a/_observability-plugins/log-analytics.md b/_observability/log-analytics.md similarity index 78% rename from _observability-plugins/log-analytics.md rename to _observability/log-analytics.md index 85ac9053..b7a9c2dd 100644 --- a/_observability-plugins/log-analytics.md +++ b/_observability/log-analytics.md @@ -6,13 +6,11 @@ nav_order: 70 # Log Ingestion -Log ingestion provides a way to transform unstructured log data into a structure data and ingestion into OpenSearch. This data can help improve your log collection in distributed applications.
- -Structured log data allows for improved queries and filtering based on the data format when you are searching logs for an event. +Log ingestion provides a way to transform unstructured log data into structured data and ingest it into OpenSearch. Structured log data allows for improved queries and filtering based on the data format when searching logs for an event. ## Get started with log ingestion -OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/), [OpenSearch]({{site.url}}{{site.baseurl}}/) and [OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/)---that fit into the OpenSearch ecosystem. The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started. +OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.url}}{{site.baseurl}}/observability/data-prepper/index/), [OpenSearch]({{site.url}}{{site.baseurl}}/), and [OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/)---that fit into the OpenSearch ecosystem. The Data Prepper repository has several [sample applications](https://github.com/opensearch-project/data-prepper/tree/main/examples) to help you get started. ### Basic flow of data @@ -22,7 +20,7 @@ OpenSearch Log Ingestion consists of three components---[Data Prepper]({{site.ur (In the [example](#example) below, [FluentBit](https://docs.fluentbit.io/manual/) is used as a log collector that collects log data from a file and sends the log data to Data Prepper). -2. [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/) receives the log data, transforms the data into a structure format, and indexes it on an OpenSearch cluster. +2. [Data Prepper]({{site.url}}{{site.baseurl}}/observability/data-prepper/index/) receives the log data, transforms the data into a structured format, and indexes it on an OpenSearch cluster. 3. The data can then be explored through OpenSearch search queries or the **Discover** page in OpenSearch Dashboards. @@ -92,4 +90,4 @@ The response should show the parsed log data: ] ``` -The same data can be viewed in OpenSearch Dashboards by visiting the **Discover** page and searching the `apache_logs` index. Remember, you must create the index in OpensSearch Dashboards if this is your first time searching for the index. +The same data can be viewed in OpenSearch Dashboards by visiting the **Discover** page and searching the `apache_logs` index. Remember, you must create the index in OpenSearch Dashboards if this is your first time searching for the index. diff --git a/_observability-plugins/notebooks.md b/_observability/notebooks.md similarity index 100% rename from _observability-plugins/notebooks.md rename to _observability/notebooks.md diff --git a/_observability-plugins/operational-panels.md b/_observability/operational-panels.md similarity index 86% rename from _observability-plugins/operational-panels.md rename to _observability/operational-panels.md index 0f47d182..b037dfe4 100644 --- a/_observability-plugins/operational-panels.md +++ b/_observability/operational-panels.md @@ -6,7 +6,7 @@ nav_order: 30 # Operational panels -Operational panels in OpenSearch Dashboards are collections of visualizations generated using [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index) (PPL) queries.
+Operational panels in OpenSearch Dashboards are collections of visualizations generated using [Piped Processing Language]({{site.url}}{{site.baseurl}}/observability/ppl/index) (PPL) queries. ## Get started with operational panels @@ -16,7 +16,7 @@ If you want to start using operational panels without adding any data, expand the To create an operational panel and add visualizations: 1. From the **Add Visualization** dropdown menu, choose **Select Existing Visualization** or **Create New Visualization**, which takes you to the [event analytics]({{site.url}}{{site.baseurl}}/observability/event-analytics) explorer, where you can use PPL to create visualizations. 1. If you're adding an existing visualization, choose it from the dropdown menu. 1. Choose **Add**. diff --git a/_observability-plugins/ppl/commands.md b/_observability/ppl/commands.md similarity index 100% rename from _observability-plugins/ppl/commands.md rename to _observability/ppl/commands.md diff --git a/_observability-plugins/ppl/datatypes.md b/_observability/ppl/datatypes.md similarity index 100% rename from _observability-plugins/ppl/datatypes.md rename to _observability/ppl/datatypes.md diff --git a/_observability-plugins/ppl/endpoint.md b/_observability/ppl/endpoint.md similarity index 100% rename from _observability-plugins/ppl/endpoint.md rename to _observability/ppl/endpoint.md diff --git a/_observability-plugins/ppl/functions.md b/_observability/ppl/functions.md similarity index 100% rename from _observability-plugins/ppl/functions.md rename to _observability/ppl/functions.md diff --git a/_observability-plugins/ppl/identifiers.md b/_observability/ppl/identifiers.md similarity index 100% rename from _observability-plugins/ppl/identifiers.md rename to _observability/ppl/identifiers.md diff --git a/_observability-plugins/ppl/index.md b/_observability/ppl/index.md similarity index 100% rename from _observability-plugins/ppl/index.md rename to _observability/ppl/index.md diff --git a/_observability-plugins/ppl/protocol.md b/_observability/ppl/protocol.md similarity index 100% rename from _observability-plugins/ppl/protocol.md rename to _observability/ppl/protocol.md diff --git a/_observability-plugins/ppl/settings.md b/_observability/ppl/settings.md similarity index 100% rename from _observability-plugins/ppl/settings.md rename to _observability/ppl/settings.md diff --git a/_observability-plugins/trace/get-started.md b/_observability/trace/get-started.md similarity index 92% rename from _observability-plugins/trace/get-started.md rename to _observability/trace/get-started.md index ffd16e3c..a9808d88 100644 --- a/_observability-plugins/trace/get-started.md +++ b/_observability/trace/get-started.md @@ -19,9 +19,9 @@ OpenSearch Trace Analytics consists of two components---Data Prepper and the Tra 1. The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/) receives data from the application and formats it into OpenTelemetry data. -1. [Data Prepper]({{site.url}}{{site.baseurl}}/observability-plugins/data-prepper/index/) processes the OpenTelemetry data, transforms it for use in OpenSearch, and indexes it on an OpenSearch cluster. +1.
[Data Prepper]({{site.url}}{{site.baseurl}}/observability/data-prepper/index/) processes the OpenTelemetry data, transforms it for use in OpenSearch, and indexes it on an OpenSearch cluster. -1. The [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugins/trace/ta-dashboards/) displays the data in near real-time as a series of charts and tables, with an emphasis on service architecture, latency, error rate, and throughput. +1. The [Trace Analytics OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability/trace/ta-dashboards/) displays the data in near real-time as a series of charts and tables, with an emphasis on service architecture, latency, error rate, and throughput. ## Jaeger HotROD @@ -78,4 +78,4 @@ curl -X GET -u 'admin:admin' -k 'https://localhost:9200/otel-v1-apm-span-000001/ Navigate to `http://localhost:5601` in a web browser and choose **Trace Analytics**. You can see the results of your single click in the Jaeger HotROD web interface: the number of traces per API and HTTP method, latency trends, a color-coded map of the service architecture, and a list of trace IDs that you can use to drill down on individual operations. -If you don't see your trace, adjust the timeframe in OpenSearch Dashboards. For more information on using the plugin, see [OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability-plugins/trace/ta-dashboards/). +If you don't see your trace, adjust the timeframe in OpenSearch Dashboards. For more information on using the plugin, see the [OpenSearch Dashboards plugin]({{site.url}}{{site.baseurl}}/observability/trace/ta-dashboards/). diff --git a/_observability-plugins/trace/index.md b/_observability/trace/index.md similarity index 100% rename from _observability-plugins/trace/index.md rename to _observability/trace/index.md diff --git a/_observability-plugins/trace/ta-dashboards.md b/_observability/trace/ta-dashboards.md similarity index 100% rename from _observability-plugins/trace/ta-dashboards.md rename to _observability/trace/ta-dashboards.md diff --git a/_opensearch/data-streams.md b/_opensearch/data-streams.md index fa736ac4..7fa5fcc0 100644 --- a/_opensearch/data-streams.md +++ b/_opensearch/data-streams.md @@ -262,4 +262,4 @@ You can use wildcards to delete more than one data stream. We recommend deleting data from a data stream using an ISM policy. -You can also use [asynchronous search]({{site.url}}{{site.baseurl}}/search-plugins/async/index/) and [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/index/) and [PPL]({{site.url}}{{site.baseurl}}/observability-plugins/ppl/index/) to query your data stream directly. You can also use the security plugin to define granular permissions on the data stream name. +You can also use [asynchronous search]({{site.url}}{{site.baseurl}}/search-plugins/async/index/), [SQL]({{site.url}}{{site.baseurl}}/search-plugins/sql/index/), and [PPL]({{site.url}}{{site.baseurl}}/observability/ppl/index/) to query your data stream directly. You can also use the security plugin to define granular permissions on the data stream name.
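+
+As an illustration, a PPL query against a hypothetical data stream named `logs-nginx` can be sent to the PPL endpoint (the data stream name is a placeholder):
+
+```bash
+# count the documents in the data stream using a PPL query
+curl -X POST -u 'admin:admin' -k -H 'Content-Type: application/json' \
+  'https://localhost:9200/_plugins/_ppl' \
+  -d '{"query": "source = logs-nginx | stats count()"}'
+```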
diff --git a/images/data-prepper-pipeline.drawio b/images/data-prepper-pipeline.drawio new file mode 100644 index 00000000..ca683ec3 --- /dev/null +++ b/images/data-prepper-pipeline.drawio @@ -0,0 +1 @@ +[compressed draw.io source for the Data Prepper pipeline diagram omitted] \ No newline at end of file diff --git a/images/data-prepper-pipeline.png b/images/data-prepper-pipeline.png new file mode 100644 index 0000000000000000000000000000000000000000..e9f6c88f3994c95145661b6735a8f6142476bb11 GIT binary patch literal 25811 [binary image data omitted]
z&hf?B-*Qky{M)LcH+P_4Wp$Mt_m(<~h*?Wi#wf)#8f}Wm&4jAKc{8$XCngz__*9vm z8NjpF;2_~X8K2Y;bWRc{ID1DWA~+z1`&Rce?M5jY-T7cD&oI~s-rg9^EzoFZAIcQ5 z1&R82DI8o)6h9yT5nFw2qB6~4AxO$OaRv+lNP-4i&;ay$ht*FCRW;0UM=wBy->Vg- zGth5x=q|(-e!Z79;;n9!gCL0u`_f>0>NL-Omat^)dcQ2XMY===!a%N43gyS8YjSof2iU^TPZYUvGXe_sL=A$p(w8 za@&FTP)YUGF$}uiw%7a6+{SoJHCM&gCetcM!+9$~s@X6{*(v0n8QKP)NBg z;s8t0M3*^#{p>Va#OLNnoY9}p^X6(6opn1pkK*{7X>EVaVw=ah`gZMdX+iJg#019L zW3}`WM(l9x|B1ogvGf6b;(-E=oF`SGmRq ziGeURhHi6v=066MPlYIo%?Np%Ts7?)bn4k5e0L=8>X`iiW_zKoKqW)gRA& z8^r#t0&wtzGG%3D846|DD0bFOv8(s;l8NP(E!AC%o0?|ZeQtL;)Q;*NTpaB^D_GgL zef#gofI~PZR#EXw;h0O2(c3iF(AL%i)TJZSYDg`U7Q^?oOkuABIpydtAUm=Oz7wb7 zqARbjpX!OEv;@n7+U1Wyg>HtCe?AEaKnCCD!ouF>mX>XRvp(5dmcWLfNPOA1nyv0O zD`(_x#>>qKP&@F|yf$cap9EtGef)rFUtAymKW#yu_1)QVeW(tUH-*BPV|s35@EQ0C z$e5ud+w=BXY${@I5E7c-ScPhNJRkPErj0{ZmmDNmWk`i6)m zgQUDY!XLeaLe1`v;FFij<-glo_UBi?`I^I{<(qRT{ZVP*d|S~~7*?;z*RKEkS8Cli z;*2Tb`2V@Li$Eafb{ps>^1Dp`?5bd*ES1oQD@cFd>uZ4vQ2W|p^cDNhm(RJw5SP=> g3;*Z61J*t4Ne$0-_XJFMIPfPcsr0m5{Keb<122!3xc~qF literal 0 HcmV?d00001 From c1a666cc6b1c8292bb8f5d56b83391413b6dfba4 Mon Sep 17 00:00:00 2001 From: keithhc2 Date: Wed, 15 Dec 2021 13:09:58 -0800 Subject: [PATCH 7/7] Language tweaks Signed-off-by: keithhc2 --- .../data-prepper/data-prepper-reference.md | 230 +++++++++--------- _observability/data-prepper/get-started.md | 12 +- _observability/data-prepper/index.md | 2 +- _observability/data-prepper/pipelines.md | 16 +- 4 files changed, 131 insertions(+), 129 deletions(-) diff --git a/_observability/data-prepper/data-prepper-reference.md b/_observability/data-prepper/data-prepper-reference.md index 417a1f43..f884bd1a 100644 --- a/_observability/data-prepper/data-prepper-reference.md +++ b/_observability/data-prepper/data-prepper-reference.md @@ -11,21 +11,21 @@ This page lists all supported Data Prepper server, sources, buffers, preppers, a ## Data Prepper server options -Option | Required | Description -:--- | :--- | :--- -ssl | No | Boolean, indicating whether TLS should be used for server APIs. Defaults to true. -keyStoreFilePath | No | String, path to a .jks or .p12 keystore file. Required if ssl is true. -keyStorePassword | No | String, password for keystore. Optional, defaults to empty string. -privateKeyPassword | No | String, password for private key within keystore. Optional, defaults to empty string. -serverPort | No | Integer, port number to use for server APIs. Defaults to 4900 -metricRegistries | No | List, metrics registries for publishing the generated metrics. Defaults to Prometheus; Prometheus and CloudWatch are currently supported. +Option | Required | Type | Description +:--- | :--- | :--- | :--- +ssl | No | Boolean | Indicates whether TLS should be used for server APIs. Defaults to true. +keyStoreFilePath | No | String | Path to a .jks or .p12 keystore file. Required if ssl is true. +keyStorePassword | No | String | Password for keystore. Optional, defaults to empty string. +privateKeyPassword | No | String | Password for private key within keystore. Optional, defaults to empty string. +serverPort | No | Integer | Port number to use for server APIs. Defaults to 4900 +metricRegistries | No | List | Metrics registries for publishing the generated metrics. Currently supports Prometheus and CloudWatch. Defaults to Prometheus. ## General pipeline options -Option | Required | Description -:--- | :--- | :--- -workers | No | Integer, default 1. Essentially the number of application threads. 
## General pipeline options

-Option | Required | Description
-:--- | :--- | :---
-workers | No | Integer, default 1. Essentially the number of application threads. As a starting point for your use case, try setting this value to the number of CPU cores on the machine.
-delay | No | Integer (milliseconds), default 3,000. How long workers wait between buffer read attempts.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+workers | No | Integer | Essentially the number of application threads. As a starting point for your use case, try setting this value to the number of CPU cores on the machine. Default is 1.
+delay | No | Integer | Amount of time in milliseconds workers wait between buffer read attempts. Default is 3,000.
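Both options sit at the top level of a pipeline definition, alongside `source` and `sink`, as in the `simple-sample-pipeline` example that appears later in this patch:

```yml
simple-sample-pipeline:
  workers: 4   # e.g. one worker per CPU core on a four-core machine
  delay: 3000  # wait 3,000 ms between buffer read attempts
  source:
    random:
  sink:
    - stdout:
```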
`"config/demo-data-prepper.crt"` or `"s3://my-secrets-bucket/demo-data-prepper.crt"`). Required if ssl is set to `true`. +sslKeyFile | Conditionally | String | File-system path or AWS S3 path to the security key (e.g. `"config/demo-data-prepper.key"` or `"s3://my-secrets-bucket/demo-data-prepper.key"`). Required if ssl is set to `true`. useAcmCertForSSL | No | Boolean, enables TLS/SSL using certificate and private key from AWS Certificate Manager (ACM). Default is `false`. -acmCertificateArn | Conditionally | String, represents the ACM certificate ARN. ACM certificate take preference over S3 or local file system certificate. Required if `useAcmCertForSSL` is set to `true`. -awsRegion | Conditionally | String, represents the AWS region to use ACM or S3. Required if `useAcmCertForSSL` is set to `true` or `sslKeyCertChainFile` and `sslKeyFile` are AWS S3 paths. -authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication define the ```http_basic``` plugin with a `username` and `password`. To provide customer authentication use or create a plugin which implements: [GrpcAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/GrpcAuthenticationProvider.java). +acmCertificateArn | Conditionally | String | Represents the ACM certificate ARN. ACM certificate take preference over S3 or local file system certificate. Required if `useAcmCertForSSL` is set to `true`. +awsRegion | Conditionally | String | Represents the AWS region to use ACM or S3. Required if `useAcmCertForSSL` is set to `true` or `sslKeyCertChainFile` and `sslKeyFile` are AWS S3 paths. +authentication | No | Object| An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication, define the `http_basic` plugin with a `username` and `password`. To provide customer authentication use or create a plugin which implements: [GrpcAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/GrpcAuthenticationProvider.java). ### http_source This is a source plugin that supports HTTP protocol. Currently ONLY support Json UTF-8 codec for incoming request, e.g. `[{"key1": "value1"}, {"key2": "value2"}]`. -Option | Required | Description -:--- | :--- | :--- -port | No | Integer, the port the source is running on. Default is `2021`. Valid options are between `0` and `65535`. -request_timeout | No | Integer, the request timeout in millis. Default is `10_000`. -thread_count | No | Integer, the number of threads to keep in the ScheduledThreadPool. Default is `200`. -max_connection_count | No | Integer, the maximum allowed number of open connections. Default is `500`. -max_pending_requests | No | Ingeger, the maximum number of allowed tasks in ScheduledThreadPool work queue. Default is `1024`. -authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication define the ```http_basic``` plugin with a `username` and `password`. 
### http_source

-This is a source plugin that supports HTTP protocol. Currently ONLY support Json UTF-8 codec for incoming request, e.g. `[{"key1": "value1"}, {"key2": "value2"}]`.
+This source plugin supports the HTTP protocol. It currently supports only the JSON UTF-8 codec for incoming requests, e.g. `[{"key1": "value1"}, {"key2": "value2"}]`.

-Option | Required | Description
-:--- | :--- | :---
-port | No | Integer, the port the source is running on. Default is `2021`. Valid options are between `0` and `65535`.
-request_timeout | No | Integer, the request timeout in millis. Default is `10_000`.
-thread_count | No | Integer, the number of threads to keep in the ScheduledThreadPool. Default is `200`.
-max_connection_count | No | Integer, the maximum allowed number of open connections. Default is `500`.
-max_pending_requests | No | Ingeger, the maximum number of allowed tasks in ScheduledThreadPool work queue. Default is `1024`.
-authentication | No | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication define the ```http_basic``` plugin with a `username` and `password`. To provide customer authentication use or create a plugin which implements: [ArmeriaHttpAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/ArmeriaHttpAuthenticationProvider.java).
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+port | No | Integer | The port the source is running on. Default is `2021`. Valid options are between `0` and `65535`.
+request_timeout | No | Integer | The request timeout in milliseconds. Default is `10_000`.
+thread_count | No | Integer | The number of threads to keep in the ScheduledThreadPool. Default is `200`.
+max_connection_count | No | Integer | The maximum allowed number of open connections. Default is `500`.
+max_pending_requests | No | Integer | The maximum number of allowed tasks in the ScheduledThreadPool work queue. Default is `1024`.
+authentication | No | Object | An authentication configuration. By default, this runs an unauthenticated server. This uses pluggable authentication for HTTPS. To use basic authentication, define the `http_basic` plugin with a `username` and `password`. To provide custom authentication, use or create a plugin which implements [ArmeriaHttpAuthenticationProvider](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/armeria-common/src/main/java/com/amazon/dataprepper/armeria/authentication/ArmeriaHttpAuthenticationProvider.java).
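A minimal sketch of this source with basic authentication enabled. The credentials are placeholders, and the nesting of the `http_basic` plugin under `authentication` is assumed from the pluggable-authentication description above:

```yml
log-pipeline:
  source:
    http:
      port: 2021
      request_timeout: 10000
      authentication:
        http_basic:
          username: my-user      # placeholder credentials
          password: my-password  # placeholder credentials
  sink:
    - stdout:
```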
### file

Source for flat file input.

-Option | Required | Description
-:--- | :--- | :---
-path | Yes | String, path to the input file (e.g. `logs/my-log.log`).
-format | No | String, format of each line in the file. Valid options are `json` or `plain`. Default is `plain`.
-record_type | No | String, the record type that will be stored. Valid options are `string` or `event`. Default is `string`. If you would like to use the file source for log analytics use cases like grok, change this to `event`.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+path | Yes | String | Path to the input file (e.g. `logs/my-log.log`).
+format | No | String | Format of each line in the file. Valid options are `json` or `plain`. Default is `plain`.
+record_type | No | String | The record type that will be stored. Valid options are `string` or `event`. Default is `string`. If you would like to use the file source for log analytics use cases like grok, set this option to `event`.

### pipeline

Source for reading from another pipeline.

-Option | Required | Description
-:--- | :--- | :---
-name | Yes | String, name of the pipeline to read from.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+name | Yes | String | Name of the pipeline to read from.

### stdin

@@ -100,10 +100,10 @@ Buffers store data as it passes through the pipeline. If you implement a custom

The default buffer. Memory-based.

-Option | Required | Description
-:--- | :--- | :---
-buffer_size | No | Integer, default 512. The maximum number of records the buffer accepts.
-batch_size | No | Integer, default 8. The maximum number of records the buffer drains after each read.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+buffer_size | No | Integer | The maximum number of records the buffer accepts. Default is 512.
+batch_size | No | Integer | The maximum number of records the buffer drains after each read. Default is 8.

## Preppers

@@ -115,115 +115,115 @@ Preppers perform some action on your data: filter, transform, enrich, etc.

Converts OpenTelemetry data to OpenSearch-compatible JSON documents.

-Option | Required | Description
-:--- | :--- | :---
-root_span_flush_delay | No | Integer, representing the time interval in seconds to flush all the root spans in the prepper together with their descendants. Defaults to 30.
-trace_flush_interval | No | Integer, representing the time interval in seconds to flush all the descendant spans without any root span. Defaults to 180.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+root_span_flush_delay | No | Integer | Represents the time interval in seconds to flush all the root spans in the prepper together with their descendants. Default is 30.
+trace_flush_interval | No | Integer | Represents the time interval in seconds to flush all the descendant spans without any root span. Default is 180.
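For illustration, both flush options can be tuned where the prepper is declared, as in the `raw-pipeline` example later in this patch (the values shown are simply the defaults):

```yml
raw-pipeline:
  source:
    pipeline:
      name: entry-pipeline
  prepper:
    - otel_trace_raw_prepper:
        root_span_flush_delay: 30   # seconds; default
        trace_flush_interval: 180   # seconds; default
  sink:
    - stdout:
```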
### service_map_stateful

Uses OpenTelemetry data to create a distributed service map for visualization in OpenSearch Dashboards.

-Option | Required | Description
-:--- | :--- | :---
-window_duration | No | Integer, representing the fixed time window in seconds to evaluate service-map relationships. Defaults to 180.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+window_duration | No | Integer | Represents the fixed time window in seconds to evaluate service-map relationships. Default is 180.

### peer_forwarder

Forwards ExportTraceServiceRequests via gRPC to other Data Prepper instances. Required for operating Data Prepper in a clustered deployment.

-Option | Required | Description
-:--- | :--- | :---
-time_out | No | Integer, forwarded request timeout in seconds. Defaults to 3 seconds.
-span_agg_count | No | Integer, batch size for number of spans per request. Defaults to 48.
-target_port | No | Integer, the destination port to forward requests to. Defaults to `21890`.
-discovery_mode | No | String, peer discovery mode to be used. Allowable values are `static`, `dns`, and `aws_cloud_map`. Defaults to `static`.
-static_endpoints | No | List, containing string endpoints of all Data Prepper instances.
-domain_name | No | String, single domain name to query DNS against. Typically used by creating multiple DNS A Records for the same domain.
-ssl | No | Boolean, indicating whether TLS should be used. Default is true.
-awsCloudMapNamespaceName | Conditionally | String, name of your CloudMap Namespace. Required if `discovery_mode` is set to `aws_cloud_map`.
-awsCloudMapServiceName | Conditionally | String, service name within your CloudMap Namespace. Required if `discovery_mode` is set to `aws_cloud_map`.
-sslKeyCertChainFile | Conditionally | String, represents the SSL certificate chain file path or AWS S3 path. S3 path example `s3://<bucket-name>/<certificate-file>`. Required if `ssl` is set to `true`.
-useAcmCertForSSL | No | Boolean, enables TLS/SSL using certificate and private key from AWS Certificate Manager (ACM). Default is `false`.
-awsRegion | Conditionally | String, represents the AWS region to use ACM, S3, or CloudMap. Required if `useAcmCertForSSL` is set to `true` or `sslKeyCertChainFile` and `sslKeyFile` are AWS S3 paths.
-acmCertificateArn | Conditionally | String represents the ACM certificate ARN. ACM certificate take preference over S3 or local file system certificate. Required if `useAcmCertForSSL` is set to `true`.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+time_out | No | Integer | Forwarded request timeout in seconds. Defaults to 3 seconds.
+span_agg_count | No | Integer | Batch size for number of spans per request. Defaults to 48.
+target_port | No | Integer | The destination port to forward requests to. Defaults to `21890`.
+discovery_mode | No | String | Peer discovery mode to be used. Allowable values are `static`, `dns`, and `aws_cloud_map`. Defaults to `static`.
+static_endpoints | No | List | List containing string endpoints of all Data Prepper instances.
+domain_name | No | String | Single domain name to query DNS against. Typically used by creating multiple DNS A Records for the same domain.
+ssl | No | Boolean | Indicates whether TLS should be used. Default is true.
+awsCloudMapNamespaceName | Conditionally | String | Name of your CloudMap Namespace. Required if `discovery_mode` is set to `aws_cloud_map`.
+awsCloudMapServiceName | Conditionally | String | Service name within your CloudMap Namespace. Required if `discovery_mode` is set to `aws_cloud_map`.
+sslKeyCertChainFile | Conditionally | String | Represents the SSL certificate chain file path or AWS S3 path. S3 path example `s3://<bucket-name>/<certificate-file>`. Required if `ssl` is set to `true`.
+useAcmCertForSSL | No | Boolean | Enables TLS/SSL using certificate and private key from AWS Certificate Manager (ACM). Default is `false`.
+awsRegion | Conditionally | String | Represents the AWS region to use ACM, S3, or CloudMap. Required if `useAcmCertForSSL` is set to `true` or `sslKeyCertChainFile` and `sslKeyFile` are AWS S3 paths.
+acmCertificateArn | Conditionally | String | Represents the ACM certificate ARN. The ACM certificate takes precedence over S3 or local file system certificates. Required if `useAcmCertForSSL` is set to `true`.
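A hedged sketch of this prepper inside a pipeline's `prepper` list, using static discovery (the endpoint addresses and certificate path are hypothetical; in `dns` mode you would set `domain_name` instead):

```yml
prepper:
  - peer_forwarder:
      discovery_mode: static
      static_endpoints: ["172.31.0.10", "172.31.0.11"]     # hypothetical instance addresses
      time_out: 3
      ssl: true
      sslKeyCertChainFile: "config/demo-data-prepper.crt"  # placeholder certificate
  - otel_trace_raw_prepper:
```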
### string_converter

Converts strings to uppercase or lowercase. Mostly useful as an example if you want to develop your own prepper.

-Option | Required | Description
-:--- | :--- | :---
-upper_case | No | Boolean, whether to convert to uppercase (`true`) or lowercase (`false`).
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+upper_case | No | Boolean | Whether to convert to uppercase (`true`) or lowercase (`false`).

### grok_prepper

-Takes unstructured data and utilizes pattern matching to structure and extract important keys and make data more structured and queryable.
+Takes unstructured data and utilizes pattern matching to extract important keys, making data more structured and queryable.

-Option | Required | Description
-:--- | :--- | :---
-match | No | Map, specifies which keys to match specific patterns against. Default is an empty body.
-keep_empty_captures | No | Boolean, enables preserving `null` captures. Default value is `false`.
-named_captures_only | No | Boolean, enables whether to keep only named captures. Default value is `true`.
-break_on_match | No | Boolean, specifies wether to match all patterns or stop once the first successful match is found. Default is `true`.
-keys_to_overwrite | No | List, specifies which existing keys are to be overwritten if there is a capture with the same key value. Default is `[]`.
-pattern_definitions | No | Map, that allows for custom pattern use inline. Default value is `{}`.
-patterns_directories | No | List, specifies the path of directories that contain customer pattern files. Default value is `[]`.
-pattern_files_glob | No | String, specifies which pattern files to use from the directories specified for `pattern_directories`. Default is `*`.
-target_key | No | String, specifies a parent level key to store all captures. Default value is `null`.
-timeout_millis | No | Integer, the maximum amount of time that matching will be performed. Setting to `0` will disable the timeout. Default value is `30,000`.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+match | No | Map | Specifies which keys to match specific patterns against. Default is an empty body.
+keep_empty_captures | No | Boolean | Enables preserving `null` captures. Default value is `false`.
+named_captures_only | No | Boolean | Specifies whether to keep only named captures. Default value is `true`.
+break_on_match | No | Boolean | Specifies whether to match all patterns or stop once the first successful match is found. Default is `true`.
+keys_to_overwrite | No | List | Specifies which existing keys are to be overwritten if there is a capture with the same key value. Default is `[]`.
+pattern_definitions | No | Map | Allows for custom pattern use inline. Default value is an empty body.
+patterns_directories | No | List | Specifies the path of directories that contain custom pattern files. Default value is an empty list.
+pattern_files_glob | No | String | Specifies which pattern files to use from the directories specified for `patterns_directories`. Default is `*`.
+target_key | No | String | Specifies a parent-level key to store all captures. Default value is `null`.
+timeout_millis | No | Integer | The maximum amount of time that matching is allowed to take. Setting to `0` disables the timeout. Default value is `30,000`.
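The log-analytics example later in this patch matches with the built-in `%{COMMONAPACHELOG}` pattern; a sketch using an inline custom pattern via `pattern_definitions` (the pattern name and regex are hypothetical) could look like this:

```yml
prepper:
  - grok:
      match:
        log: ["%{CUSTOM_SEVERITY:severity} %{GREEDYDATA:message}"]
      pattern_definitions:
        CUSTOM_SEVERITY: "INFO|WARN|ERROR"  # hypothetical inline pattern
      named_captures_only: true
      timeout_millis: 30000
```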
Examples: "example.com:8100", "http://example.com:8100", "112.112.112.112:8100". Note: port number cannot be omitted. -trace_analytics_raw | No | Boolean, default false. Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-span-*` index pattern (alias `otel-v1-apm-span`) for use with the Trace Analytics OpenSearch Dashboards plugin. -trace_analytics_service_map | No | Boolean, default false. Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-service-map` index for use with the service map component of the Trace Analytics OpenSearch Dashboards plugin. -index | No | String, name of the index to export to. Only required if you don't use the `trace-analytics-raw` or `trace-analytics-service-map` presets. In other words, this parameter is applicable and required only if index_type is explicitly `custom` or defaults to `custom`. -index_type | No | String, default `custom`. This index type instructs the Sink plugin what type of data it is handling. Valid values: `custom`, `trace-analytics-raw`, `trace-analytics-service-map`. -template_file | No | String, the path to a JSON [index template]({{site.url}}{{site.baseurl}}/opensearch/index-templates/) file (e.g. `/your/local/template-file.json` if you do not use the `trace_analytics_raw` or `trace_analytics_service_map`. See [otel-v1-apm-span-index-template.json](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/resources/otel-v1-apm-span-index-template.json) for an example. -document_id_field | No | String, the field from the source data to use for the OpenSearch document ID (e.g. `"my-field"`) if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets. -dlq_file | No | String, the path to your preferred dead letter queue file (e.g. `/your/local/dlq-file`). Data Prepper writes to this file when it fails to index a document on the OpenSearch cluster. -bulk_size | No | Integer (long), default 5. The maximum size (in MiB) of bulk requests to the OpenSearch cluster. Values below 0 indicate an unlimited size. If a single document exceeds the maximum bulk request size, Data Prepper sends it individually. -ism_policy_file | No | String, the absolute file path for an ISM (Index State Management) policy JSON file. This policy file is effective only when there is no built-in policy file for the index type. For example, `custom` index type is currently the only one without a built-in policy file, thus it would use the policy file here if it's provided through this parameter. For more information, see [ISM policies](https://opensearch.org/docs/latest/im-plugin/ism/policies/). -number_of_shards | No | Integer, the number of primary shards that an index should have on the destination OpenSearch server. This parameter is effective only when `template_file` is either explicitly provided in Sink configuration or built-in. If this parameter is set, it would override the value in index template file. For more information, see [create index](https://opensearch.org/docs/latest/opensearch/rest-api/index-apis/create-index/). -number_of_replicas | No | Integer, the number of replica shards each primary shard should have on the destination OpenSearch server. For example, if you have 4 primary shards and set number_of_replicas to 3, the index has 12 replica shards. This parameter is effective only when `template_file` is either explicitly provided in Sink configuration or built-in. 
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+hosts | Yes | List | List of OpenSearch hosts to write to (e.g. `["https://localhost:9200", "https://remote-cluster:9200"]`).
+cert | No | String | Path to the security certificate (e.g. `"config/root-ca.pem"`) if the cluster uses the OpenSearch security plugin.
+username | No | String | Username for HTTP basic authentication.
+password | No | String | Password for HTTP basic authentication.
+aws_sigv4 | No | Boolean | Whether to use IAM signing to connect to an Amazon OpenSearch Service domain. For your access key, secret key, and optional session token, Data Prepper uses the default credential chain (environment variables, Java system properties, `~/.aws/credential`, etc.). Default is false.
+aws_region | No | String | AWS region (e.g. `"us-east-1"`) for the domain if you are connecting to Amazon OpenSearch Service.
+aws_sts_role_arn | No | String | IAM role that the sink plugin assumes to sign requests to Amazon OpenSearch Service. If not provided, the plugin uses the default credentials.
+socket_timeout | No | Integer | The timeout in milliseconds for waiting for data (or, put differently, a maximum period of inactivity between two consecutive data packets). A timeout value of zero is interpreted as an infinite timeout. If this timeout value is either negative or not set, the underlying Apache HttpClient relies on operating system settings for managing socket timeouts.
+connect_timeout | No | Integer | The timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. If this timeout value is either negative or not set, the underlying Apache HttpClient relies on operating system settings for managing connection timeouts.
+insecure | No | Boolean | Whether to verify SSL certificates. If set to true, CA certificate verification is disabled and insecure HTTP requests are sent instead. Default is false.
+proxy | No | String | The address of a [forward HTTP proxy server](https://en.wikipedia.org/wiki/Proxy_server). The format is "<host name or IP>:<port>". Examples: "example.com:8100", "http://example.com:8100", "112.112.112.112:8100". Port number cannot be omitted.
+trace_analytics_raw | No | Boolean | Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-span-*` index pattern (alias `otel-v1-apm-span`) for use with the Trace Analytics OpenSearch Dashboards plugin. Default is false.
+trace_analytics_service_map | No | Boolean | Deprecated in favor of `index_type`. Whether to export as trace data to the `otel-v1-apm-service-map` index for use with the service map component of the Trace Analytics OpenSearch Dashboards plugin. Default is false.
+index | No | String | Name of the index to export to. Only required if you don't use the `trace-analytics-raw` or `trace-analytics-service-map` presets. In other words, this parameter is applicable and required only if `index_type` is explicitly `custom` or defaults to `custom`.
+index_type | No | String | This index type instructs the Sink plugin what type of data it is handling. Valid values: `custom`, `trace-analytics-raw`, `trace-analytics-service-map`. Default is `custom`.
+template_file | No | String | Path to a JSON [index template]({{site.url}}{{site.baseurl}}/opensearch/index-templates/) file (e.g. `/your/local/template-file.json`) if you do not use the `trace_analytics_raw` or `trace_analytics_service_map` presets. See [otel-v1-apm-span-index-template.json](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/resources/otel-v1-apm-span-index-template.json) for an example.
+document_id_field | No | String | The field from the source data to use for the OpenSearch document ID (e.g. `"my-field"`) if you don't use the `trace_analytics_raw` or `trace_analytics_service_map` presets.
+dlq_file | No | String | The path to your preferred dead letter queue file (e.g. `/your/local/dlq-file`). Data Prepper writes to this file when it fails to index a document on the OpenSearch cluster.
+bulk_size | No | Integer (long) | The maximum size (in MiB) of bulk requests to the OpenSearch cluster. Values below 0 indicate an unlimited size. If a single document exceeds the maximum bulk request size, Data Prepper sends it individually. Default is 5.
+ism_policy_file | No | String | The absolute file path for an ISM (Index State Management) policy JSON file. This policy file is effective only when there is no built-in policy file for the index type. For example, `custom` is currently the only index type without a built-in policy file, so it uses the policy file provided through this parameter, if present. For more information, see [ISM policies]({{site.url}}{{site.baseurl}}/im-plugin/ism/policies/).
+number_of_shards | No | Integer | The number of primary shards that an index should have on the destination OpenSearch server. This parameter is effective only when `template_file` is either explicitly provided in Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. For more information, see [create index]({{site.url}}{{site.baseurl}}/opensearch/rest-api/index-apis/create-index/).
+number_of_replicas | No | Integer | The number of replica shards each primary shard should have on the destination OpenSearch server. For example, if you have 4 primary shards and set number_of_replicas to 3, the index has 12 replica shards. This parameter is effective only when `template_file` is either explicitly provided in Sink configuration or built-in. If this parameter is set, it overrides the value in the index template file. For more information, see [create index]({{site.url}}{{site.baseurl}}/opensearch/rest-api/index-apis/create-index/).
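Tying the main options together, a hedged sink sketch for a local cluster (the credentials and certificate path are the same placeholders used elsewhere in these docs):

```yml
sink:
  - opensearch:
      hosts: ["https://localhost:9200"]
      cert: "config/root-ca.pem"       # only if the cluster uses the security plugin
      username: admin                  # placeholder credentials
      password: admin
      index_type: trace-analytics-raw  # or `custom` with an explicit `index`
      bulk_size: 5
```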
### file

Sink for flat file output.

-Option | Required | Description
-:--- | :--- | :---
-path | Yes | String, path for the output file (e.g. `logs/my-transformed-log.log`).
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+path | Yes | String | Path for the output file (e.g. `logs/my-transformed-log.log`).

### pipeline

Sink for writing to another pipeline.

-Option | Required | Description
-:--- | :--- | :---
-name | Yes | String, name of the pipeline to write to.
+Option | Required | Type | Description
+:--- | :--- | :--- | :---
+name | Yes | String | Name of the pipeline to write to.

### stdout

diff --git a/_observability/data-prepper/get-started.md b/_observability/data-prepper/get-started.md
index 75cf17ce..4b6a1363 100644
--- a/_observability/data-prepper/get-started.md
+++ b/_observability/data-prepper/get-started.md
@@ -19,9 +19,9 @@ docker pull opensearchproject/data-prepper:latest

## 2. Define a pipeline

-Create a Data Prepper pipeline file, `pipelines.yaml`, with:
+Create a Data Prepper pipeline file, `pipelines.yaml`, with the following configuration:

-```yaml
+```yml
simple-sample-pipeline:
  workers: 2
  delay: "5000"
  source:
    random:
  sink:
    - stdout:
```

docker run --name data-prepper \
 opensearchproject/opensearch-data-prepper:latest
```

-You should see log output and after a few seconds some UUIDs in the output. It should look something like the following:
+The sample pipeline configuration above demonstrates a simple pipeline with a source (`random`) sending data to a sink (`stdout`). For more examples and details on more advanced pipeline configurations, see [Pipelines]({{site.url}}{{site.baseurl}}/observability/data-prepper/pipelines).

+After starting Data Prepper, you should see log output and some UUIDs after a few seconds:

-```yaml
+```yml
2021-09-30T20:19:44,147 [main] INFO  com.amazon.dataprepper.pipeline.server.DataPrepperServer - Data Prepper server running at :4900
2021-09-30T20:19:44,681 [random-source-pool-0] INFO  com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer
2021-09-30T20:19:45,183 [random-source-pool-0] INFO  com.amazon.dataprepper.plugins.source.RandomStringSource - Writing to buffer
e51e700e-5cab-4f6d-879a-1c3235a77d18
b4ed2d7e-cf9c-4e9d-967c-b18e8af35c90
```
-
-This sample pipeline configuration above demonstrates a simple pipeline with a source (`random`) sending data to a sink (`stdout`). For more examples and details on more advanced pipeline configurations see the [Pipelines]({{site.url}}{{site.baseurl}}/observability/data-prepper/pipelines) guide.
diff --git a/_observability/data-prepper/index.md b/_observability/data-prepper/index.md
index a1c688c9..f2d0644e 100644
--- a/_observability/data-prepper/index.md
+++ b/_observability/data-prepper/index.md
@@ -10,6 +10,6 @@ has_toc: false

Data Prepper is a server side data collector capable of filtering, enriching, transforming, normalizing and aggregating data for downstream analytics and visualization.

-Data Prepper allows users to build custom pipelines to improve their operational view of their own applications. Two common uses for Data Prepper are trace and log analytics. [Trace Analytics]({{site.url}}{{site.baseurl}}/observability/trace/index/) can help you visualize the flow of events and identify performance problems. [Log Analytics]({{site.url}}{{site.baseurl}}/observability/log-analytics/) can improve searching, analyzing and provide insights into your application.
+Data Prepper lets users build custom pipelines to improve the operational view of applications. Two common uses for Data Prepper are trace and log analytics. [Trace analytics]({{site.url}}{{site.baseurl}}/observability/trace/index/) can help you visualize the flow of events and identify performance problems, and [log analytics]({{site.url}}{{site.baseurl}}/observability/log-analytics/) can improve searching and analyzing, and provide insights into your application.

To get started building your own custom pipelines with Data Prepper, see the [Get Started]({{site.url}}{{site.baseurl}}/observability/data-prepper/get-started/) guide.
diff --git a/_observability/data-prepper/pipelines.md b/_observability/data-prepper/pipelines.md
index b1b9bf4e..32f5e59a 100644
--- a/_observability/data-prepper/pipelines.md
+++ b/_observability/data-prepper/pipelines.md
@@ -13,7 +13,7 @@ To use Data Prepper, you define pipelines in a configuration YAML file. Each pip

```yml
simple-sample-pipeline:
-  workers: 2 # the number of workers
+  workers: 2 # the number of workers
  delay: 5000 # in milliseconds, how long workers wait between read attempts
  source:
    random:
@@ -50,7 +50,7 @@ The Data Prepper repository has several [sample applications](https://github.com

The following example demonstrates how to use HTTP source and Grok prepper plugins to process unstructured log data.

-```yaml
+```yml
log-pipeline:
  source:
    http:
@@ -68,7 +68,8 @@ log-pipeline:
      index: apache_logs
```

-Note: This example uses weak security. We strongly recommend securing all plugins which open external ports in production environments.
+This example uses weak security. We strongly recommend securing all plugins which open external ports in production environments.
+{: .note}

### Trace Analytics pipeline

@@ -93,7 +94,7 @@ raw-pipeline:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
-      hosts: ["https://localhost:9200" ]
+      hosts: ["https://localhost:9200"]
      insecure: true
      username: admin
      password: admin
@@ -132,7 +133,8 @@ This feature is limited by feature parity of Data Prepper. As of Data Prepper 1.
- Amazon Elasticsearch Output plugin

## Configure the Data Prepper server
-Data Prepper itself provides administrative HTTP endpoints such as `/list` to list pipelines and `/metrics/prometheus` to provide Prometheus-compatible metrics data. The port which serves these endpoints has a TLS configuration and is specified by a separate YAML file. Data Prepper docker images secures these endpoints by default. We strongly recommend providing your own configuration file for securing production environments. Here is an example `data-prepper-config.yaml`:
+
+Data Prepper itself provides administrative HTTP endpoints such as `/list` to list pipelines and `/metrics/prometheus` to provide Prometheus-compatible metrics data. The port that serves these endpoints has a TLS configuration and is specified by a separate YAML file. By default, these endpoints are secured by Data Prepper docker images. We strongly recommend providing your own configuration file for securing production environments. Here is an example `data-prepper-config.yaml`:

```yml
ssl: true
@@ -144,8 +146,8 @@ serverPort: 1234

To configure the Data Prepper server, run Data Prepper with the additional YAML file.

-```yaml
+```bash
docker run --name data-prepper -v /full/path/to/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \
 /full/path/to/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml \
 opensearchproject/opensearch-data-prepper:latest
-```
\ No newline at end of file
+```