Fix links in Alerting and other documentation for the 2.9 release (#4606)

* fix#4056 fix links for 2.9

Signed-off-by: cwillum <cwmmoore@amazon.com>

* fix#4056 fix links for 2.9

Signed-off-by: cwillum <cwmmoore@amazon.com>

---------

Signed-off-by: cwillum <cwmmoore@amazon.com>
Chris Moore 2023-07-24 10:56:07 -07:00 committed by GitHub
parent f9aa7f3d59
commit 837a0f5c89
5 changed files with 7 additions and 7 deletions


@@ -17,7 +17,7 @@ To add an action:
1. In the **Triggers** panel, select **Add action**.
1. Enter the action details, including action name, notification channel, and notification message body, in the **Notification** section.
- You can add variables to your messages using [Mustache templates](https://mustache.github.io/mustache.5.html/). You have access to `ctx.action.name`, the name of the current action, and all [actions variables](#actions-variables).
+ You can add variables to your messages using [Mustache templates](https://mustache.github.io/mustache.5.html). You have access to `ctx.action.name`, the name of the current action, and all [actions variables](#actions-variables).
If your notification channel is a custom webhook that expects a particular data format, include JSON (or XML) directly in the message body:
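For example, a minimal sketch of a webhook message body that mixes JSON with Mustache variables might look like the following; the `text` field is an assumption about what the receiving webhook expects, and `ctx.monitor.name` and `ctx.trigger.name` are taken from the actions variables list:

```json
{
  "text": "Action {{ctx.action.name}} fired for monitor {{ctx.monitor.name}}: trigger {{ctx.trigger.name}} is active."
}
```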


@@ -57,7 +57,7 @@ You create composite monitors by combining individual monitors in a workflow tha
1. Because the composite monitor's trigger conditions require that the first and second monitors both generate audit alerts, the composite monitor then triggers a chained alert.
1. If notifications are configured in the composite monitor's definition, users receive a notification about the chained alert. They do not, however, receive the individual audit alerts generated by the two delegate monitors.
- In this simple example, the first monitor could be a per document monitor configured to analyze a data source using three different queries, while the second monitor could be a a per bucket monitor that aggregates data by client IP. By combining the requirements of each delegate monitor, the composite monitor focuses the criteria that decide whether an alert is generated or not. This can improve the meaningfulness of the alert while removing extraneous alerts that provide no deterministic value.
+ In this simple example, the first monitor could be a per document monitor configured to analyze a data source using three different queries, while the second monitor could be a per bucket monitor that aggregates data by client IP. By combining the requirements of each delegate monitor, the composite monitor focuses the criteria that decide whether an alert is generated or not. This can improve the meaningfulness of the alert while removing extraneous alerts that provide no deterministic value.
## Managing composite monitors with the API
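As a hedged sketch of how the two delegate monitors described above might be chained through the API (the endpoint is inferred from the workflow calls shown below, and field names such as `composite_input`, `sequence`, and `delegates` are assumptions to verify against the Alerting API reference):

```json
POST _plugins/_alerting/workflows
{
  "name": "sample_composite_monitor",
  "workflow_type": "composite",
  "inputs": [
    {
      "composite_input": {
        "sequence": {
          "delegates": [
            { "order": 1, "monitor_id": "<per_document_monitor_id>" },
            { "order": 2, "monitor_id": "<per_bucket_monitor_id>" }
          ]
        }
      }
    }
  ]
}
```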
@@ -536,7 +536,7 @@ GET /_plugins/_alerting/workflows/alerts?workflowIds=<workflow_ids>&getAssociate
### Acknowledge Chained Alerts
- [After getting your alerts](#get-chained-alerts-api), you can acknowledge multiple active alerts in one call. If the alert is already in an ERROR, COMPLETED, or ACKNOWLEDGED state, it appears in the failed array.
+ [After getting your alerts](#get-chained-alerts), you can acknowledge multiple active alerts in one call. If the alert is already in an ERROR, COMPLETED, or ACKNOWLEDGED state, it appears in the failed array.
```json
POST _plugins/_alerting/workflows/<workflow_id>/_acknowledge/alerts


@@ -64,12 +64,12 @@ You query each trigger using up to 10 tags, adding the tag as a single trigger c
The Alerting plugin also creates a list of document findings that contains metadata about which document matches each query. Security analytics can use the document findings data to keep track of and analyze the query data separately from the alert processes.
- The Alerting API provides a document-level monitor that programmatically accomplishes the same function as the per document monitor in the OpenSearch Dashboards. To learn more, see [Document-level monitors]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/api/#document-level-monitors).
+ The Alerting API provides a document-level monitor that programmatically accomplishes the same function as the per document monitor in the OpenSearch Dashboards. To learn more, see [Document-level monitors]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/api/#document-level-monitors).
{: .note}
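A hedged sketch of a per document monitor definition with a single tagged query is shown below; the `doc_level_input` structure and its field names are assumptions based on the Alerting API and should be verified against the API reference:

```json
POST _plugins/_alerting/monitors
{
  "type": "monitor",
  "monitor_type": "doc_level_monitor",
  "name": "sample_document_level_monitor",
  "enabled": true,
  "schedule": {
    "period": { "interval": 1, "unit": "MINUTES" }
  },
  "inputs": [
    {
      "doc_level_input": {
        "description": "Example query with a tag",
        "indices": ["logs-*"],
        "queries": [
          {
            "id": "query-1",
            "name": "error-status",
            "query": "status:error",
            "tags": ["severity-high"]
          }
        ]
      }
    }
  ],
  "triggers": []
}
```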
### Document findings
- When a per document monitor executes a query that matches a document in an index, a finding is created. OpenSearch provides a Findings index: `.opensearch-alerting-finding*` that contains findings data for all per document monitor queries. You can search the findings index with the Alerting API search operation. To learn more, see [Search for monitor findings]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/api/#search-for-monitor-findings).
+ When a per document monitor executes a query that matches a document in an index, a finding is created. OpenSearch provides a Findings index: `.opensearch-alerting-finding*` that contains findings data for all per document monitor queries. You can search the findings index with the Alerting API search operation. To learn more, see [Search the findings index]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/api/#search-the-findings-index).
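For example, a hedged sketch of a findings search call might look like the following; the query parameters shown are assumptions, so check the Alerting API reference for the supported options:

```json
GET /_plugins/_alerting/findings/_search?searchString=error-status&size=10
```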
The following metadata is provided for each document finding entry:


@@ -128,4 +128,4 @@ The following sample shows the RBAC roles specified by the RBAC parameter:
}
```
- To see a full request sample, see [Create a monitor]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/api/#query-level-monitors).
+ To see a full request sample, see [Create a monitor]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/api/#create-a-query-level-monitor).


@@ -58,7 +58,7 @@ If you choose to perform manual field mapping, you should be familiar with the f
Security Analytics takes advantage of prepackaged Sigma rules for security event detection. Therefore, the field names are derived from a Sigma rule field standard. To make them easier to identify, however, we have created aliases for the Sigma rule fields based on the open-source Elastic Common Schema (ECS) specification. These alias rule field names are the field names used in these steps. They appear in the **Detector field name** column of the mapping tables.
- Although the ECS rule field names are largely self-explanatory, you can find predefined mappings of the Sigma rule field names to ECS rule field names, for all supported log types, in the GitHub Security Analytics repository. Navigate to the [OSMappings](https://github.com/opensearch-project/security-analytics/tree/main/src/main/resources/OSMapping) folder, choose the folder named for the log type, and open the `fieldmappings.yml` file. For example, to see the Sigma rule fields that correspond to ECS rule fields for the Windows log type, open the [fieldmappings.yml file](https://github.com/opensearch-project/security-analytics/blob/main/src/main/resources/OSMapping/windows/fieldmappings.yml) in the **windows** folder.
+ Although the ECS rule field names are largely self-explanatory, you can find predefined mappings of the Sigma rule field names to ECS rule field names, for all supported log types, in the GitHub Security Analytics repository. Navigate to the [OSMappings](https://github.com/opensearch-project/security-analytics/tree/main/src/main/resources/OSMapping) folder and select the file for the specific log type. For example, to see the Sigma rule fields that correspond to ECS rule fields for the Windows log type, select the [`windows_logtype.json` file](https://github.com/opensearch-project/security-analytics/blob/main/src/main/resources/OSMapping/windows_logtype.json). The `raw_field` value in the file represents the Sigma rule field name in the mapping.
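For reference, a single mapping entry in one of these log type files might look similar to the following sketch; only the `raw_field` key is confirmed above, and the `ecs` key and the specific field values are illustrative assumptions:

```json
{
  "raw_field": "EventID",
  "ecs": "event.code"
}
```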
#### Amazon Security Lake logs