From adb4abd72e89426da180f49f50ad903263c06b7a Mon Sep 17 00:00:00 2001
From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Date: Fri, 17 Feb 2023 13:31:57 -0600
Subject: [PATCH] Fix dead links and Observing TOC (#2947)

Signed-off-by: Naarcha-AWS
---
 _dashboards/reporting.md                      |  2 +-
 _dashboards/visualize/area.md                 |  2 +-
 _dashboards/visualize/maptiles.md             |  2 +-
 _dashboards/visualize/selfhost-maps-server.md |  2 ++
 .../common-use-cases/trace-analytics.md       | 16 ++++++++--------
 .../upgrade-opensearch/index.md               |  2 +-
 _observing-your-data/notebooks.md             |  2 +-
 _observing-your-data/notifications/index.md   |  2 +-
 _opensearch/query-dsl/full-text/index.md      |  2 ++
 9 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/_dashboards/reporting.md b/_dashboards/reporting.md
index 77ab9b3a..f4222c23 100644
--- a/_dashboards/reporting.md
+++ b/_dashboards/reporting.md
@@ -22,7 +22,7 @@ To generate a report from the interface:
 Reports generate asynchronously in the background and might take a few minutes, depending on the size of the report. A notification appears when your report is ready to download.
 {: .label}
 
-3. To create a schedule-based report, choose **Create report definition**. Then proceed to [Create reports using a definition](#create-reports-using-a-definition). This option pre-fills many of the fields for you based on the visualization, dashboard, or data you were viewing.
+3. To create a schedule-based report, choose **Create report definition**. Then proceed to [Create reports using a definition](#creating-reports-using-a-definition). This option pre-fills many of the fields for you based on the visualization, dashboard, or data you were viewing.
 
 ## Creating reports using a definition
 
diff --git a/_dashboards/visualize/area.md b/_dashboards/visualize/area.md
index e315c939..0f3b7863 100644
--- a/_dashboards/visualize/area.md
+++ b/_dashboards/visualize/area.md
@@ -55,6 +55,6 @@ You've now created the following aggregation-based area chart.
 
 # Related links
 - [Visualize]({{site.url}}{{site.baseurl}}/dashboards/visualize/viz-index/)
-- [Visualization types in OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/dashboards/visualize/viz-types/)
+- [Visualization types in OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/dashboards/visualize/viz-index/)
 - [Install and configure OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/install-and-configure/install-dashboards/index/)
 - [Aggregations]({{site.url}}{{site.baseurl}}/opensearch/aggregations/)
\ No newline at end of file
diff --git a/_dashboards/visualize/maptiles.md b/_dashboards/visualize/maptiles.md
index 2b55cb75..0252f056 100644
--- a/_dashboards/visualize/maptiles.md
+++ b/_dashboards/visualize/maptiles.md
@@ -5,7 +5,7 @@ grand_parent: Building data visualizations
 parent: Using coordinate and region maps
 nav_order: 5
 redirect_from:
-  - /docs/opensearch-dashboards/maptiles/
+  - /dashboards/maptiles/
 ---
 
 {%- comment -%}The `/docs/opensearch-dashboards/maptiles/` redirect is specifically to support the UI links in OpenSearch Dashboards 1.0.0.{%- endcomment -%}
diff --git a/_dashboards/visualize/selfhost-maps-server.md b/_dashboards/visualize/selfhost-maps-server.md
index a1e36095..eead0b68 100644
--- a/_dashboards/visualize/selfhost-maps-server.md
+++ b/_dashboards/visualize/selfhost-maps-server.md
@@ -4,6 +4,8 @@ title: Using the self-host maps server
 grand_parent: Building data visualizations
 parent: Using coordinate and region maps
 nav_order: 10
+redirect_from:
+  - /dashboards/selfhost-maps-server/
 ---
 
 # Using the self-host maps server
diff --git a/_data-prepper/common-use-cases/trace-analytics.md b/_data-prepper/common-use-cases/trace-analytics.md
index 857a4407..32358f72 100644
--- a/_data-prepper/common-use-cases/trace-analytics.md
+++ b/_data-prepper/common-use-cases/trace-analytics.md
@@ -32,31 +32,31 @@ To monitor trace analytics in Data Prepper, we provide three pipelines: `entry-p
 
 ### OpenTelemetry trace source
 
-The [OpenTelemetry source]({{site.url}}{{site.baseurl}}/data-prepper/configuration/processors/otel-trace-raw/) accepts trace data from the OpenTelemetry Collector. The source follows the [OpenTelemetry Protocol](https://github.com/open-telemetry/opentelemetry-specification/tree/master/specification/protocol) and officially supports transport over gRPC and the use of industry-standard encryption (TLS/HTTPS).
+The [OpenTelemetry source]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/processors/otel-trace-raw/) accepts trace data from the OpenTelemetry Collector. The source follows the [OpenTelemetry Protocol](https://github.com/open-telemetry/opentelemetry-specification/tree/master/specification/protocol) and officially supports transport over gRPC and the use of industry-standard encryption (TLS/HTTPS).
 
 ### Processor
 
 There are three processors for the trace analytics feature:
 
-* *otel_trace_raw* - The *otel_trace_raw* processor receives a collection of [span](https://github.com/opensearch-project/data-prepper/blob/fa65e9efb3f8d6a404a1ab1875f21ce85e5c5a6d/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/trace/Span.java) records from [*otel-trace-source*]({{site.url}}{{site.baseurl}}/data-prepper/configuration/sources/otel-trace/), and performs stateful processing, extraction, and completion of trace-group-related fields.
+* *otel_trace_raw* - The *otel_trace_raw* processor receives a collection of [span](https://github.com/opensearch-project/data-prepper/blob/fa65e9efb3f8d6a404a1ab1875f21ce85e5c5a6d/data-prepper-api/src/main/java/org/opensearch/dataprepper/model/trace/Span.java) records from [*otel-trace-source*]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/sources/otel-trace/), and performs stateful processing, extraction, and completion of trace-group-related fields.
 * *otel_trace_group* - The *otel_trace_group* processor fills in the missing trace-group-related fields in the collection of [span](https://github.com/opensearch-project/data-prepper/blob/fa65e9efb3f8d6a404a1ab1875f21ce85e5c5a6d/data-prepper-api/src/main/java/com/amazon/dataprepper/model/trace/Span.java) records by looking up the OpenSearch backend.
 * *service_map_stateful* – The *service_map_stateful* processor performs the required preprocessing for trace data and builds metadata to display the `service-map` dashboards.
 
 ### OpenSearch sink
 
-OpenSearch provides a generic sink that writes data to OpenSearch as the destination. The [OpenSearch sink]({{site.url}}{{site.baseurl}}/data-prepper/configuration/sinks/opensearch/) has configuration options related to the OpenSearch cluster, such as endpoint, SSL, username/password, index name, index template, and index state management.
+OpenSearch provides a generic sink that writes data to OpenSearch as the destination. The [OpenSearch sink]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/sinks/opensearch/) has configuration options related to the OpenSearch cluster, such as endpoint, SSL, username/password, index name, index template, and index state management.
 
 The sink provides specific configurations for the trace analytics feature. These configurations allow the sink to use indexes and index templates specific to trace analytics. The following OpenSearch indexes are specific to trace analytics:
 
-* *otel-v1-apm-span* – The *otel-v1-apm-span* index stores the output from the [otel_trace_raw]({{site.url}}{{site.baseurl}}/data-prepper/configuration/processors/otel-trace-raw/) processor.
-* *otel-v1-apm-service-map* – The *otel-v1-apm-service-map* index stores the output from the [service_map_stateful]({{site.url}}{{site.baseurl}}/data-prepper/configuration/processors/service-map-stateful/) processor.
+* *otel-v1-apm-span* – The *otel-v1-apm-span* index stores the output from the [otel_trace_raw]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/processors/otel-trace-raw/) processor.
+* *otel-v1-apm-service-map* – The *otel-v1-apm-service-map* index stores the output from the [service_map_stateful]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/processors/service-map-stateful/) processor.
 
 ## Trace tuning
 
 Starting with version 0.8.x, Data Prepper supports both vertical and horizontal scaling for trace analytics. You can adjust the size of a single Data Prepper instance to meet your workload's demands and scale vertically.
 
-You can scale horizontally by using the core [peer forwarder]({{site.url}}{{site.baseurl}}/data-prepper/peer_forwarder/) to deploy multiple Data Prepper instances to form a cluster. This enables Data Prepper instances to communicate with instances in the cluster and is required for horizontally scaling deployments.
+You can scale horizontally by using the core [peer forwarder]({{site.url}}{{site.baseurl}}/data-prepper/managing-data-prepper/peer-forwarder/) to deploy multiple Data Prepper instances to form a cluster. This enables Data Prepper instances to communicate with instances in the cluster and is required for horizontally scaling deployments.
 
 ### Scaling recommendations
 
@@ -80,7 +80,7 @@ The `workers` setting determines the number of threads that are used by Data Pre
 
 Configure the Data Prepper heap by setting the `JVM_OPTS` environment variable. We recommend that you set the heap value to a minimum value of `4` * `batch_size` * `otel_send_batch_size` * `maximum size of indvidual span`.
 
-As mentioned in the [setup guide]({{site.url}}{{site.baseurl}}/data-prepper/trace_analytics/#opentelemetry-collector), set `otel_send_batch_size` to a value of `50` in your OpenTelemetry Collector configuration.
+As mentioned in the [OpenTelemetry Collector](#opentelemetry-collector) section, set `otel_send_batch_size` to a value of `50` in your OpenTelemetry Collector configuration.
 
 #### Local disk
 
@@ -318,7 +318,7 @@ You must make the following changes:
 * `aws_sigv4` – If you are using Amazon OpenSearch Service with AWS signing, set this value to `true`. It will sign requests with the default AWS credentials provider.
 * `aws_region` – If you are using Amazon OpenSearch Service with AWS signing, set this value to your AWS Region.
 
-For other configurations available for OpenSearch sinks, see [Data Prepper OpenSearch sink]({{site.url}}{{site.baseurl}}/data-prepper/configuration/sinks/opensearch/).
+For other configurations available for OpenSearch sinks, see [Data Prepper OpenSearch sink]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/configuration/sinks/opensearch/).
 
 ## OpenTelemetry Collector
 
diff --git a/_install-and-configure/upgrade-opensearch/index.md b/_install-and-configure/upgrade-opensearch/index.md
index f41aaf12..917cd348 100644
--- a/_install-and-configure/upgrade-opensearch/index.md
+++ b/_install-and-configure/upgrade-opensearch/index.md
@@ -69,7 +69,7 @@ Choose an appropriate method for upgrading your cluster to a new version of Open
 - A [rolling upgrade](#rolling-upgrade) upgrades nodes one at a time without stopping the cluster.
 - A [cluster restart upgrade](#cluster-restart-upgrade) upgrades services while the cluster is stopped.
 
-Upgrades spanning more than a single major version of OpenSearch will require additional effort due to the need for reindexing. For more information, refer to the [Reindex]({{site.url}}{{site.baseurl}}/api-reference/document-apis/reindex/) API. See the [Lucene version reference](#lucene-version-reference) table included later in this guide for help planning your data migration.
+Upgrades spanning more than a single major version of OpenSearch will require additional effort due to the need for reindexing. For more information, refer to the [Reindex]({{site.url}}{{site.baseurl}}/api-reference/document-apis/reindex/) API. See the [Index compatibility reference](#index-compatibility-reference) table included later in this guide for help planning your data migration.
 
 ### Rolling upgrade
 
diff --git a/_observing-your-data/notebooks.md b/_observing-your-data/notebooks.md
index e85750ff..8d4c059c 100644
--- a/_observing-your-data/notebooks.md
+++ b/_observing-your-data/notebooks.md
@@ -120,7 +120,7 @@ You can use notebooks to create PNG and PDF reports:
 
 Reports generate asynchronously in the background and might take a few minutes, depending on the size of the report. A notification appears when your report is ready to download.
 
-1. To create a schedule-based report, choose **Create report definition**. For steps to create a report definition, see [Create reports using a definition]({{site.url}}{{site.baseurl}}/dashboards/reporting#create-reports-using-a-definition).
+1. To create a schedule-based report, choose **Create report definition**. For steps to create a report definition, see [Create reports using a definition]({{site.url}}{{site.baseurl}}/dashboards/reporting#creating-reports-using-a-definition).
 1. To see all your reports, choose **View all reports**.
 
 ![Report notebooks]({{site.url}}{{site.baseurl}}/images/report_notebooks.gif)
diff --git a/_observing-your-data/notifications/index.md b/_observing-your-data/notifications/index.md
index c04d9787..714dfce6 100644
--- a/_observing-your-data/notifications/index.md
+++ b/_observing-your-data/notifications/index.md
@@ -2,7 +2,7 @@
 layout: default
 title: Notifications
 nav_order: 80
-has_children: false
+has_children: true
 redirect_from:
   - /notifications-plugin/
   - /notifications-plugin/index/
diff --git a/_opensearch/query-dsl/full-text/index.md b/_opensearch/query-dsl/full-text/index.md
index 5421ffeb..6f609631 100644
--- a/_opensearch/query-dsl/full-text/index.md
+++ b/_opensearch/query-dsl/full-text/index.md
@@ -4,6 +4,8 @@ title: Full-text queries
 parent: Query DSL
 has_children: true
 nav_order: 30
+redirect_from:
+  - /opensearch/query-dsl/full-text/
 ---
 
 # Full-text queries