diff --git a/_observing-your-data/alerting/api.md b/_observing-your-data/alerting/api.md index 5b08e7a1..846098a4 100644 --- a/_observing-your-data/alerting/api.md +++ b/_observing-your-data/alerting/api.md @@ -9,15 +9,28 @@ redirect_from: # Alerting API -Use the Alerting API to programmatically create, update, and manage monitors and alerts. +Use the Alerting API to programmatically create, update, and manage monitors and alerts. For APIs that support the composite monitor specifically, see [Managing composite monitors with the API]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/composite-monitors/#managing-composite-monitors-with-the-api). -## Query-level monitors +--- + +
+<details open markdown="block">
+  <summary>
+    Table of contents
+  </summary>
+  {: .text-delta }
+- TOC
+{:toc}
+</details>
+ +--- + +## Create a query-level monitor Introduced 1.0 {: .label .label-purple } Query-level monitors run the query and check whether or not the results should trigger an alert. Query-level monitors can only trigger one alert at a time. For more information about query-level monitors and bucket-level monitors, see [Creating monitors]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/monitors/). -#### Sample Request +#### Example request ```json POST _plugins/_alerting/monitors @@ -247,7 +260,7 @@ If you want to specify a timezone, you can do so by including a [cron expression The following example creates a monitor that runs at 12:10 PM Pacific Time on the 1st day of every month. -#### Request +#### Example request ```json { @@ -311,7 +324,7 @@ The following example creates a monitor that runs at 12:10 PM Pacific Time on th } ``` -For a full list of timezone names, refer to [Wikipedia](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). The alerting plugin uses the Java [TimeZone](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/TimeZone.html) class to convert a [`ZoneId`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/ZoneId.html) to a valid timezone. +For a full list of time zone names, refer to [Wikipedia](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). The alerting plugin uses the Java [TimeZone](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/TimeZone.html) class to convert a [`ZoneId`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/ZoneId.html) to a valid time zone. --- @@ -624,9 +637,9 @@ Tag | Creates alerts for documents that match a multiple query with this tag app Query by name | Creates alerts for documents matched or returned by the named query. | `query[name=]` Query by ID | Creates alerts for documents that were returned by the identified query. 
| `query[id=]` -#### Sample Request +#### Example request -The following sample shows how to create a document-level monitor: +The following example shows how to create a document-level monitor: ```json POST _plugins/_alerting/monitors diff --git a/_observing-your-data/alerting/composite-monitors.md b/_observing-your-data/alerting/composite-monitors.md new file mode 100644 index 00000000..26fd71c6 --- /dev/null +++ b/_observing-your-data/alerting/composite-monitors.md @@ -0,0 +1,662 @@ +--- +layout: default +title: Composite monitors +nav_order: 3 +parent: Alerting +has_children: false +redirect_from: +--- + +# Composite monitors + +--- + +
+<details open markdown="block">
+  <summary>
+    Table of contents
+  </summary>
+  {: .text-delta }
+- TOC
+{:toc}
+</details>
+ +--- + +## About composite monitors + +Basic [monitor types]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/monitors/#monitor-types) for the Alerting plugin are designed to define a single trigger type. For example, a per document monitor can trigger an alert based on a query's match with documents, while a per bucket monitor can trigger an alert based on queries aimed at aggregated values in a data source. The composite monitor combines multiple monitors in a sequence to analyze a data source based on multiple criteria and then uses their individual alerts to generate a single chained alert. This allows you to derive more granular information about a data source, and it doesn't require you to manually coordinate the scheduling of the separate monitors. + +Composite monitors remove the limitations of basic monitors in the following ways: + +* Composite monitors give you the ability to create complex queries through a combination of triggers generated by multiple types of monitors. +* They have the capacity to define a pipeline of rules and queries that are run as a single execution. +* They deliver a single alert to users instead of multiple alerts from the individual monitors in their workflows. +* They provide a more complete view of a given data source by running multiple monitors and multiple types of monitors in sequence, creating more focused results and reducing noise in the results. + + +## Key terms + +The key terms in the following table describe the basic concepts of composite monitors. For additional terms common to all types of monitors, see [Key terms]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/monitors/#key-terms) for basic monitors. + +| Term | Definition | +| :--- | :--- | +| Composite monitor | A composite monitor is a type of monitor that supports the execution of multiple monitors in a sequential workflow. It supports configuring triggers to create chained alerts. 
| +| Delegate monitor | Delegate monitors are executed sequentially according to their order in a composite monitor's definition. When a delegate monitor's trigger conditions are met, it generates an audit alert. This audit alert then becomes a condition for the composite monitor's trigger. The composite monitor supports per query, per bucket, and per document monitors as delegate monitors. | +| workflow ID | The workflow ID provides an identifier for the entire workflow of all delegate monitors. It is synonymous with a composite monitor's monitor ID. | +| Chained alert | Chained alerts are generated from composite monitor triggers when delegate monitors generate audit alerts. The chained alert trigger condition supports the use of the logical operators AND, OR, and NOT so you can combine multiple functions into a single expression. | +| Audit alert | Delegate monitors generate alerts in an **audit** state. Users are not notified about each individual audit alert and don't need to acknowledge them. Audit alerts are used to evaluate chained alert trigger conditions in composite monitors. | +| Execution | A single run of all delegate monitors in the sequence defined in the composite monitor's configuration. | + + +## Basic workflow + +You create composite monitors by combining individual monitors in a workflow that executes each monitor in a defined sequence. When individual audit alerts from the delegate monitors meet the trigger conditions for a composite monitor, the composite monitor generates its own chained alert. Consider the following sequence of events to understand how a simple composite monitor configured with two delegate monitors executes its workflow. In this example, the trigger condition for the composite monitor is met when the first monitor and the second monitor both generate an alert. + +1. The composite monitor starts the execution and delegates it to the first monitor. 
The trigger conditions for the first monitor are met, and it generates an audit alert.
+1. The composite monitor then delegates the execution to the second monitor. The second monitor's trigger conditions are also met, and it generates its own audit alert.
+1. Because the composite monitor's trigger conditions require that the first and second monitors both generate audit alerts, the composite monitor then triggers a chained alert.
+1. If notifications are configured in the composite monitor's definition, users receive a notification about the chained alert. They do not, however, receive the individual audit alerts generated by the two delegate monitors.
+
+In this simple example, the first monitor could be a per document monitor configured to analyze a data source using three different queries, while the second monitor could be a per bucket monitor that aggregates data by client IP. By combining the requirements of each delegate monitor, the composite monitor narrows the criteria that determine whether an alert is generated. This can improve the meaningfulness of the alert while reducing the number of extraneous alerts that provide little value.
+
+
+## Managing composite monitors with the API
+
+You can manage composite monitors using the REST API or OpenSearch Dashboards. This section covers the API functionality for composite monitors.
+
+
+### Create Composite Monitor
+
+This API allows you to create a composite monitor.
+
+```json
+POST _plugins/_alerting/workflows
+```
+{% include copy-curl.html %}
+
+#### Request fields
+
+| Field | Type | Description |
+| :--- | :--- | :--- |
+| `schedule` | Object | The schedule that determines how often the execution runs. |
+| `schedule.period.interval` | Number | A numerical value that sets how often the execution runs. |
+| `schedule.period.unit` | Object | The time unit of measure for the interval: `SECONDS`, `MINUTES`, `HOURS`, or `DAYS`.
|
+| `inputs` | Object | The inputs that define the delegate monitors, including each delegate monitor and its order in the execution sequence. |
+| `inputs.composite_input.sequence.delegates` | Object | The settings for the individual monitors that underlie the composite monitor. |
+| `inputs.composite_input.sequence.delegates.order` | Number | Designates the order in which the monitor runs in the execution. |
+| `inputs.composite_input.sequence.delegates.monitor_id` | String | The unique identifier for the monitor. |
+| `enabled_time` | Number | The time at which the monitor was enabled, expressed in epoch time. |
+| `enabled` | Boolean | Determines whether the composite monitor is enabled. Setting it to `true` enables the composite monitor. Default is `true`. |
+| `workflow_type` | String | Set to `composite` for composite monitors. |
+| `triggers` | Object | Details for the individual alert triggers. |
+| `triggers.chained_alert_trigger` | Object | Details for each individual alert trigger. Each monitor's alert trigger requires its own configuration settings. |
+| `triggers.chained_alert_trigger.id` | String | The unique identifier for the alert trigger. |
+| `triggers.chained_alert_trigger.name` | String | The name of the alert trigger. |
+| `triggers.chained_alert_trigger.severity` | Number | The alert severity: 1 = highest, 2 = high, 3 = medium, 4 = low, 5 = lowest. |
+| `triggers.chained_alert_trigger.condition.script` | Object | The script details that determine the conditions for triggering an alert. |
+| `triggers.chained_alert_trigger.condition.script.source` | String | The Painless script that defines the conditions for triggering an alert. |
+| `triggers.chained_alert_trigger.condition.script.lang` | String | Enter `painless` for the Painless scripting language. |
+| `actions` | Object | Provides fields for configuring an alert notification.
| + +#### Example request + +```json +POST _plugins/_alerting/workflows +{ + "last_update_time": 1679468231835, + "owner": "alerting", + "type": "workflow", + "schedule": { + "period": { + "interval": 1, + "unit": "MINUTES" + } + }, + "inputs": [{ + "composite_input": { + "sequence": { + "delegates": [{ + "order": 1, + "monitor_id": "grsbCIcBvEHfkjWFeCqb" + }, + { + "order": 2, + "monitor_id": "agasbCIcBvEHfkjWFeCqa" + } + ] + } + } + }], + "enabled_time": 1679468231835, + "enabled": true, + "workflow_type": "composite", + "name": "scale_up", + "triggers": [{ + "chained_alert_trigger": { + "id": "m1ANDm2", + "name": "jnkjn", + "severity": "1", + "condition": { + "script": { + "source": "(monitor[id=grsbCIcBvEHfkjWFeCqb] && monitor[id=agasbCIcBvEHfkjWFeCqa])", + "lang": "painless" + } + } + }, + "actions": [{ + "name": "test-action", + "destination_id": "ld7912sBlQ5JUWWFThoW", + "message_template": { + "source": "This is my message body." + }, + "throttle_enabled": true, + "throttle": { + "value": 27, + "unit": "MINUTES" + }, + "subject_template": { + "source": "TheSubject" + } + }] + }, + { + "chained_alert_trigger": { + "id": "m1ORm2", + "name": "jnkjn", + "severity": "1", + "condition": { + "script": { + "source": "(monitor[id=grsbCIcBvEHfkjWFeCqb] || monitor[id=agasbCIcBvEHfkjWFeCqa])", + "lang": "painless" + } + } + } + } + ] +} +``` +{% include copy-curl.html %} + +#### Using Painless scripting language to define chained alert trigger conditions + +Composite monitor configurations employ the Painless scripting language to define the conditions for generating chained alerts. Conditions are applied for each execution of the composite monitor. You define the alert trigger conditions in the `triggers.chained_alert_triggers.condition.script.source` field of the request. 
Using Painless syntax, you can apply logic to the links between monitors with the basic Boolean operators and parentheses to control precedence:
+
+* AND = `&&`
+* OR = `||`
+* NOT = `!`
+* Grouping (precedence) = `()`
+
+See the following examples to understand how each is used in a monitor definition.
+
+* **Example 1**
+
+  `monitor[id=1] && monitor[id=2]`
+
+  This condition triggers the composite monitor to produce a chained alert when both monitor #1 AND monitor #2 generate an alert.
+
+* **Example 2**
+
+  `monitor[id=1] || !monitor[id=2]`
+
+  This condition triggers the composite monitor to produce a chained alert when either monitor #1 generates an alert OR monitor #2 does NOT generate an alert.
+
+* **Example 3**
+
+  `monitor[id=1] && (monitor[id=2] || monitor[id=3])`
+
+  This condition triggers the composite monitor to produce a chained alert when monitor #1 generates an alert AND either monitor #2 OR monitor #3 generates an alert.
+
+The order of monitor IDs in the Painless script does not define the execution sequence for the monitors. The monitor execution sequence is defined in the `inputs.composite_input.sequence.delegates.order` field in the request.
+{: .note }
+
+
+### Get Composite Monitor
+
+This API retrieves information about the specified composite monitor.
+
+```json
+GET _plugins/_alerting/workflows/<workflow_id>
+```
+{% include copy-curl.html %}
+
+#### Path parameters
+
+| Field | Type | Description |
+| :--- | :--- | :--- |
+| `workflow_id` | String | The composite monitor's [workflow ID](#key-terms). |
+
+
+### Update Composite Monitor
+
+This API updates a composite monitor's details. See [Create Composite Monitor](#create-composite-monitor) for descriptions of the request fields.
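+As with the get and delete calls, the update request is addressed to a specific workflow. The `<workflow_id>` path segment shown here is a placeholder for the composite monitor's workflow ID:
+
+```json
+PUT _plugins/_alerting/workflows/<workflow_id>
+```
+{% include copy-curl.html %}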
+
+#### Example request
+
+```json
+PUT _plugins/_alerting/workflows/<workflow_id>
+{
+  "owner": "security_analytics",
+  "type": "workflow",
+  "schedule": {
+    "period": {
+      "interval": 1,
+      "unit": "MINUTES"
+    }
+  },
+  "inputs": [
+    {
+      "composite_input": {
+        "sequence": {
+          "delegates": [
+            {
+              "order": 1,
+              "monitor_id": "grsbCIcBvEHfkjWFeCqb"
+            },
+            {
+              "order": 2,
+              "monitor_id": "agasbCIcBvEHfkjWFeCqa"
+            }
+          ]
+        }
+      }
+    }
+  ],
+  "enabled_time": 1679468231835,
+  "enabled": true,
+  "workflow_type": "composite",
+  "name": "NTxdwApKbv"
+}
+```
+{% include copy-curl.html %}
+
+
+### Delete Composite Monitor
+
+This API deletes the specified composite monitor.
+
+```json
+DELETE _plugins/_alerting/workflows/<workflow_id>
+```
+{% include copy-curl.html %}
+
+
+### Execute Composite Monitor
+
+This API begins the workflow execution for a composite monitor:
+
+```json
+POST /_plugins/_alerting/workflows/<workflow_id>/_execute
+```
+{% include copy-curl.html %}
+
+#### Example response
+
+```json
+{
+  "execution_id": "I0GXeIgBYKBG2nHoiHCL_2023-06-01T20:18:48.511884_a9c1d055-9b70-49c2-b32a-716cff1f562e",
+  "workflow_name": "scale_up",
+  "workflow_id": "I0GXeIgBYKBG2nHoiHCL",
+  "trigger_results": {
+    "m1ANDm2": {
+      "name": "jnkjn",
+      "triggered": true,
+      "action_results": {},
+      "error": null
+    },
+    "m1ORm2": {
+      "name": "jnkjn",
+      "triggered": true,
+      "action_results": {},
+      "error": null
+    }
+  },
+  "monitor_run_results": [{
+    "monitor_name": "test triggers",
+    "period_start": 1685650668501,
+    "period_end": 1685650728501,
+    "error": null,
+    "input_results": {
+      "results": [{
+        "bhjh": [
+          "OkGceIgBYKBG2nHoyHAn|test1",
+          "O0GceIgBYKBG2nHozHCW|test1"
+        ],
+        "nkjkj": [
+          "OkGceIgBYKBG2nHoyHAn|test1",
+          "O0GceIgBYKBG2nHozHCW|test1"
+        ],
+        "jknkjn": [
+          "OkGceIgBYKBG2nHoyHAn|test1",
+          "O0GceIgBYKBG2nHozHCW|test1"
+        ]
+      }],
+      "error": null
+    },
+    "trigger_results": {
+      "NC3Dd4cBCDCIfBYtViLI": {
+        "name": "njkkj",
+        "triggeredDocs": [
+          "OkGceIgBYKBG2nHoyHAn|test1",
+          "O0GceIgBYKBG2nHozHCW|test1"
+        ],
+        "action_results": {},
+        "error": null
+      }
+    }
+  },
+  {
+    "monitor_name": "test triggers 2",
+    "period_start": 1685650668501,
+    "period_end": 1685650728501,
+    "error": null,
+    "input_results": {
+      "results": [{
+        "bhjh": [
+          "PEGceIgBYKBG2nHo1HCw|test",
+          "PUGceIgBYKBG2nHo3HA8|test"
+        ],
+        "nkjkj": [
+          "PEGceIgBYKBG2nHo1HCw|test",
+          "PUGceIgBYKBG2nHo3HA8|test"
+        ],
+        "jknkjn": [
+          "PEGceIgBYKBG2nHo1HCw|test",
+          "PUGceIgBYKBG2nHo3HA8|test"
+        ]
+      }],
+      "error": null
+    },
+    "trigger_results": {
+      "NC3Dd4cBCDCIfBYtViLI": {
+        "name": "njkkj",
+        "triggeredDocs": [
+          "PEGceIgBYKBG2nHo1HCw|test",
+          "PUGceIgBYKBG2nHo3HA8|test"
+        ],
+        "action_results": {},
+        "error": null
+      }
+    }
+  }
+  ],
+  "execution_start_time": "2023-06-01T20:18:48.511874Z",
+  "execution_end_time": "2023-06-01T20:18:53.682405Z",
+  "error": null
+}
+```
+
+
+### Get Chained Alerts
+
+This API returns an array of chained alerts generated by composite monitor workflows:
+
+```json
+GET /_plugins/_alerting/workflows/alerts?workflowIds=<workflow_ids>&getAssociatedAlerts=true
+```
+
+#### Query parameters
+
+| Field | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| `workflowIds` | Array | No | When this parameter is used, the response returns only alerts created by the specified workflows. |
+| `getAssociatedAlerts` | Boolean | No | When `true`, the response returns the audit alerts that the composite monitor used to create a chained alert. Default is `false`.
|
+
+
+#### Example response
+
+```json
+{
+  "alerts": [
+    {
+      "id": "PbQoZokBfd2ci_FqMGi6",
+      "version": 1,
+      "monitor_id": "",
+      "workflow_id": "G7QoZokBfd2ci_FqD2iZ",
+      "workflow_name": "scale_up",
+      "associated_alert_ids": [
+        "4e8256c5-529a-484c-bf7b-d3980c03e9a4",
+        "513a8cb3-44bc-4eee-8aac-131be10b399e"
+      ],
+      "schema_version": -1,
+      "monitor_version": -1,
+      "monitor_name": "",
+      "execution_id": "G7QoZokBfd2ci_FqD2iZ_2023-07-17T23:20:55.244970_edd977d2-c02b-4cbe-8a79-2aa7991c4191",
+      "trigger_id": "m1ANDm2",
+      "trigger_name": "jnkjn",
+      "finding_ids": [],
+      "related_doc_ids": [],
+      "state": "ACTIVE",
+      "error_message": null,
+      "alert_history": [],
+      "severity": "1",
+      "action_execution_results": [],
+      "start_time": 1689636057269,
+      "last_notification_time": 1689636057270,
+      "end_time": null,
+      "acknowledged_time": null
+    },
+    {
+      "id": "PrQoZokBfd2ci_FqMGj8",
+      "version": 1,
+      "monitor_id": "",
+      "workflow_id": "G7QoZokBfd2ci_FqD2iZ",
+      "workflow_name": "scale_up",
+      "associated_alert_ids": [
+        "4e8256c5-529a-484c-bf7b-d3980c03e9a4",
+        "513a8cb3-44bc-4eee-8aac-131be10b399e"
+      ],
+      "schema_version": -1,
+      "monitor_version": -1,
+      "monitor_name": "",
+      "execution_id": "G7QoZokBfd2ci_FqD2iZ_2023-07-17T23:20:55.244970_edd977d2-c02b-4cbe-8a79-2aa7991c4191",
+      "trigger_id": "m1ORm2",
+      "trigger_name": "jnkjn",
+      "finding_ids": [],
+      "related_doc_ids": [],
+      "state": "ACTIVE",
+      "error_message": null,
+      "alert_history": [],
+      "severity": "1",
+      "action_execution_results": [],
+      "start_time": 1689636057340,
+      "last_notification_time": 1689636057340,
+      "end_time": null,
+      "acknowledged_time": null
+    }
+  ],
+  "associatedAlerts": [
+    {
+      "id": "4e8256c5-529a-484c-bf7b-d3980c03e9a4",
+      "version": -1,
+      "monitor_id": "DrQoZokBfd2ci_FqCWh8",
+      "workflow_id": "G7QoZokBfd2ci_FqD2iZ",
+      "workflow_name": "",
+      "associated_alert_ids": [],
+      "schema_version": 5,
+      "monitor_version": 1,
+      "monitor_name": "test triggers",
+      "execution_id": "G7QoZokBfd2ci_FqD2iZ_2023-07-17T23:20:55.244970_edd977d2-c02b-4cbe-8a79-2aa7991c4191",
+      "trigger_id": "NC3Dd4cBCDCIfBYtViLI",
+      "trigger_name": "njkkj",
+      "finding_ids": [
+        "277afca7-d5aa-46ed-8023-5449ece65d36"
+      ],
+      "related_doc_ids": [
+        "H7QoZokBfd2ci_FqFmii|test1"
+      ],
+      "state": "AUDIT",
+      "error_message": null,
+      "alert_history": [],
+      "severity": "1",
+      "action_execution_results": [],
+      "start_time": 1689636056410,
+      "last_notification_time": 1689636056410,
+      "end_time": null,
+      "acknowledged_time": null
+    },
+    {
+      "id": "513a8cb3-44bc-4eee-8aac-131be10b399e",
+      "version": -1,
+      "monitor_id": "EbQoZokBfd2ci_FqCmiR",
+      "workflow_id": "G7QoZokBfd2ci_FqD2iZ",
+      "workflow_name": "",
+      "associated_alert_ids": [],
+      "schema_version": 5,
+      "monitor_version": 1,
+      "monitor_name": "test triggers 2",
+      "execution_id": "G7QoZokBfd2ci_FqD2iZ_2023-07-17T23:20:55.244970_edd977d2-c02b-4cbe-8a79-2aa7991c4191",
+      "trigger_id": "NC3Dd4cBCDCIfBYtViLI",
+      "trigger_name": "njkkj",
+      "finding_ids": [
+        "6d185585-a077-4dde-8e43-b4c01b9f3102"
+      ],
+      "related_doc_ids": [
+        "ILQoZokBfd2ci_FqGmhb|test"
+      ],
+      "state": "AUDIT",
+      "error_message": null,
+      "alert_history": [],
+      "severity": "1",
+      "action_execution_results": [],
+      "start_time": 1689636056943,
+      "last_notification_time": 1689636056943,
+      "end_time": null,
+      "acknowledged_time": null
+    }
+  ],
+  "totalAlerts": 2
+}
+```
+
+#### Response fields
+
+| Field | Type | Description |
+| :--- | :--- | :--- |
+| `alerts` | Array | A list of chained alerts generated by the composite monitor. |
+| `associatedAlerts` | Array | A list of audit alerts generated by the delegate monitors. |
+
+
+### Acknowledge Chained Alerts
+
+[After getting your alerts](#get-chained-alerts), you can acknowledge multiple active alerts in one call. If an alert is already in an `ERROR`, `COMPLETED`, or `ACKNOWLEDGED` state, it appears in the `failed` array of the response.
+
+```json
+POST _plugins/_alerting/workflows/<workflow_id>/_acknowledge/alerts
+{
+  "alerts": ["eQURa3gBKo1jAh6qUo49"]
+}
+```
+{% include copy-curl.html %}
+
+#### Request fields
+
+| Field | Type | Description |
+| :--- | :--- | :--- |
+| `alerts` | Array | A list of alert IDs. The results include alerts that are acknowledged by the system as well as alerts not recognized by the system. |
+
+#### Example response
+
+```json
+{
+  "success": [
+    "eQURa3gBKo1jAh6qUo49"
+  ],
+  "failed": []
+}
+```
+
+
+## Creating composite monitors in OpenSearch Dashboards
+
+Begin by navigating to the **Create monitor** page in OpenSearch Dashboards: go to **Alerting** > **Monitors** and select **Create monitor**. Give the monitor a name and then select **Composite monitor** as the monitor type. The steps for creating a composite monitor workflow and trigger conditions vary depending on whether you use the **Visual editor** or the **Extraction query editor**. The first provides basic UI selectors for defining the composite monitor, while the second allows you to build the workflow and trigger conditions using a script. After deciding which method to use, refer to the corresponding section.
+
+### Visual editor
+
+To use the visual editor for defining a workflow and trigger conditions, select the **Visual editor** radio button in the **Monitor defining method** section. This is shown in the following image.
+
+Selecting the Visual editor
+
+To finish creating a composite monitor in the visual editor, follow these steps:
+
+1. In the **Frequency** dropdown list, select **By interval**, **Daily**, **Weekly**, **Monthly**, or **Custom cron expression**:
+    * **By interval** — Allows you to run the schedule repeatedly based on the number of minutes, hours, or days you specify.
+    * **Daily** — Specify a time of day and a time zone.
+    * **Weekly** — Specify a day of the week, a time of day, and a time zone.
+    * **Monthly** — Specify a day of the month, a time of day, and a time zone.
+ * **Custom cron expression** — Create a custom cron expression for the schedule. Use the **cron expressions** link for help with creating these expressions, or see the [Cron expression reference]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/cron/). + +1. In the **Delegate monitors** section, enter the individual monitors you want to include in the workflow by selecting them in the dropdown lists. In the **Visual editor**, the order in which you select the monitors determines their order in the workflow. + + Select **Add another monitor** to add another dropdown list. A minimum of two delegate monitors are required, and a maximum of 10 are allowed in total. Keep in mind that composite monitors support per query, per bucket, and per document monitors as delegate monitors. + + Beside each dropdown list, you can select the view monitor icon ({::nomarkdown}view monitor icon{:/}) to open the monitor's details window and review information about it. + +1. Define a trigger or triggers for the composite monitor. In the **Triggers** section, select **Add trigger**. Add a trigger name, then define the trigger conditions. + * Use the **Select delegate monitor** label to open the pop-up window shown in the following image. + + This pop-up window shows options for selecting a delegate monitor and trigger condition operator + + * Use the **Select delegate monitor** dropdown list to select a delegate monitor from those defined in the previous step. For the first delegate monitor, you can select NOT as the operator if you prefer. After the monitor is populated in the field, you can use the trash can icon ({::nomarkdown}trash can icon{:/}) to the right of the list to remove the monitor if needed. + * Select the plus sign ({::nomarkdown}plus sign{:/}) to the right of the first monitor to select a second delegate monitor. After selecting a second monitor, select one of the operators `AND`, `OR`, `AND NOT`, or `OR NOT` to apply the condition between the two monitors. 
After the operator is applied, you can select the operator to open the pop-up window again and change the selection.
+    * Select the severity level for the alert. The options include **1 (Highest)**, **2 (High)**, **3 (Medium)**, **4 (Low)**, and **5 (Lowest)**.
+    * In the **Notifications** section, select a notification channel from the dropdown list. If no channels exist, select the **Manage channels** label to the right of the dropdown list to set up a notification channel. For more information about notifications, see the [Notifications]({{site.url}}{{site.baseurl}}/observing-your-data/notifications/index/) documentation. You can also select **Add notification** to specify additional notifications for the alert trigger.
+
+      Notifications are optional for all monitor types.
+      {: .note }
+
+    * To define an additional trigger, select **Add another trigger**. You can have a maximum of 10 triggers in total. Select **Remove trigger** to the right of the screen to remove a trigger.
+
+1. After completing the monitor workflow and defining triggers, select **Create** in the lower-right corner of the screen. The composite monitor is created, and the monitor's details window opens.
+
+### Extraction query editor
+
+To use the extraction query editor for defining a workflow and triggers, select the **Extraction query editor** radio button in the **Monitor defining method** section. This is shown in the following image.
+
+Selecting the Extraction query editor
+
+The extraction query editor follows the same general steps as the visual editor, but it allows you to build the composite monitor workflow and alert triggers using extractions from the API query. This gives you the ability to create more advanced configurations that are not supported by the visual editor. The following sections provide example content for each of the two fields used. All other steps for composite monitor creation are the same as those for the visual editor.
+
+* **Define workflow**
+
+  In the **Define workflow** field, enter a sequence that defines the delegate monitors and their order in the workflow. The following example shows the delegate monitors included in the workflow, along with their order in the sequence:
+
+  ```json
+  {
+    "sequence": {
+      "delegates": [
+        {
+          "order": 1,
+          "monitor_id": "0TgBZokB2ZtsLaRvXz70"
+        },
+        {
+          "order": 2,
+          "monitor_id": "8jgBZokB2ZtsLaRv6z4N"
+        }
+      ]
+    }
+  }
+  ```
+
+  All delegate monitors included in the workflow require a `monitor_id` and a value for `order`.
+
+* **Trigger condition**
+
+  In the **Trigger condition** field, enter the monitors and the operators used to define the conditions between them. This field requires that trigger conditions be formatted in the Painless scripting language. To see how these scripts are formed for trigger conditions, see [Using Painless scripting language to define chained alert trigger conditions](#using-painless-scripting-language-to-define-chained-alert-trigger-conditions).
+
+  The following example shows a trigger condition requiring either the first monitor OR the second monitor to generate an audit alert before the composite monitor can generate a chained alert:
+
+  ```painless
+  (monitor[id=8d36S4kB0DWOHH7wpkET] || monitor[id=4t36S4kB0DWOHH7wL0Hk])
+  ```
+
+
+### Viewing monitor details
+
+After a composite monitor is created, it appears in the list of monitors on the **Monitors** tab. The **Type** column indicates the type of each monitor, including the composite monitor type. The **Associations with composite monitors** column provides a count of the composite monitors in which a basic monitor is used as a delegate monitor. Select a monitor in the **Monitor name** column to open its details window.
+
+For composite monitors, the **Alerts** section of the details window includes the **Actions** column, which contains the view details icon ({::nomarkdown}view monitor icon{:/}).
The following image shows the **Actions** column as the last column to the right. + +Alerts section of the monitor details window + +Select this icon to open the **Alert details** window. This window shows you all of the audit alerts that were part of the execution that generated the chained alert and includes the delegate monitor that generated the audit alert. Select the **X** in the upper-right corner of the window to close **Alert details**. + +After returning to the **Alerts** section of the monitor's details window, you can select the check box to the left of the **Alert start time** to highlight the alert. After the alert is highlighted, you can select **Acknowledge** in the upper-right portion of this section. The alert is acknowledged and the status in the **State** column changes from Active to Acknowledged. + diff --git a/_observing-your-data/alerting/monitors.md b/_observing-your-data/alerting/monitors.md index ecfb2391..4309b6c6 100644 --- a/_observing-your-data/alerting/monitors.md +++ b/_observing-your-data/alerting/monitors.md @@ -10,37 +10,348 @@ redirect_from: # Monitors -Proactively monitor your data in OpenSearch with alerting and anomaly detection. Set up alerts to receive notifications when your data exceeds certain thresholds. Anomaly detection uses machine learning (ML) to automatically detect any outliers in your streaming data. You can pair anomaly detection with alerting to ensure that you're notified as soon as an anomaly is detected. +--- -See [Creating monitors](#creating-monitors), [Triggers]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/triggers/), [Actions]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/actions/), and [Notifications]({{site.url}}{{site.baseurl}}/observing-your-data/notifications/index/) to learn more about the use of these alerting features in OpenSearch. +
+<details open markdown="block">
+  <summary>
+    Table of contents
+  </summary>
+  {: .text-delta }
+- TOC
+{:toc}
+</details>
-The Alerting plugin provides four monitor types:
+---
-1. **per query**: Runs a query and generates alert notifications based on the matching criteria.
-1. **per bucket**: Runs a query that evaluates trigger criteria based on aggregated values in the dataset.
-1. **per cluster metrics**: Runs API requests on the cluster to monitor its health.
-1. **per document**: Runs a query (or multiple queries combined by a tag) that returns individual documents that match the alert notification trigger condition.
+## Monitor types
-![Monitor types in OpenSearch]({{site.url}}{{site.baseurl}}/images/monitors.png)
+The OpenSearch Dashboards Alerting plugin provides four basic monitor types as well as a composite monitor that can integrate the functionality of multiple monitors into a single workflow:
+* **per query** – This monitor runs a query and generates alert notifications based on matching criteria.
+* **per bucket** – This monitor runs a query that evaluates trigger criteria based on aggregated values in the dataset.
+* **per cluster metrics** – This monitor runs API requests on the cluster to monitor its health.
+* **per document** – This monitor runs a query (or multiple queries combined by a tag) that returns individual documents that match the alert notification trigger condition.
+* **composite monitor** – The composite monitor allows you to run multiple monitors in a single workflow and generate a single alert based on multiple trigger conditions. See [Composite monitors]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/composite-monitors/) for information about creating and using these types of monitors.
-## Creating monitors
+## Key terms
-To create a monitor:
+Term | Definition
+:--- | :---
+Monitor | A job that runs on a defined schedule and queries OpenSearch indexes. The results of these queries are then used as input for one or more *triggers*.
+Trigger | Conditions that, if met, generate *alerts*.
+Tag | A label that can be applied to multiple queries to combine them with the logical OR operation in a per document monitor. You cannot use tags with other monitor types. +Alert | An event associated with a trigger. When an alert is created, the trigger performs *actions*, which can include sending a notification. +Action | The information that you want the monitor to send out after being triggered. Actions have a *destination*, a message subject, and a message body. +Destination | A reusable location for an action. Supported locations are Amazon Chime, Email, Slack, or custom webhook. +Finding | An entry for an individual document found by a per document monitor query that contains the document ID, index name, and timestamp. Findings are stored in the Findings index: `.opensearch-alerting-finding*`. +Channel | A notification channel to use in an action. See [Notifications]({{site.url}}{{site.baseurl}}/notifications-plugin/index) for more information. -1. In the **OpenSearch Plugins** main menu, choose **Alerting**. -1. Choose **Create monitor**. -1. Enter the **Monitor details**, including monitor type, method, and schedule. -1. Select a data source from the dropdown list. -1. Define the metrics in the Query section. -1. Add a [trigger]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/triggers/). -1. Select **Create**. +## Per document monitors -The maximum number of monitors you can create is 1,000. You can change the default maximum number of alerts for your cluster by updating the `plugins.alerting.monitor.max_monitors` setting using the [cluster settings API]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/settings/). -{: .note} +Introduced 2.0 +{: .label .label-purple } -## Monitor variables +Per document monitors allow you to define up to 10 queries that compare the selected field with your desired value. 
You can define supported field data types using the following operators:

-The following table lists the variables you can use to customize your monitors.
+- `is`
+- `is not`
+- `is greater than`
+- `is greater than equal`
+- `is less than`
+- `is less than equal`
+
+You query each trigger using up to 10 tags, adding the tag as a single trigger condition instead of specifying a single query. The Alerting plugin processes the trigger conditions from all queries as a logical `OR` operation, so if any of the query conditions are met, it triggers an alert. Next, the Alerting plugin tells the Notifications plugin to send the notification to a channel.
+
+The Alerting plugin also creates a list of document findings that contains metadata about which document matches each query. Security Analytics can use the document findings data to keep track of and analyze the query data separately from the alert processes.
+
+
+The Alerting API provides a document-level monitor that programmatically accomplishes the same function as the per document monitor in OpenSearch Dashboards. To learn more, see [Document-level monitors]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/api/#document-level-monitors).
+{: .note}
+
+### Document findings
+
+When a per document monitor executes a query that matches a document in an index, a finding is created. OpenSearch provides a findings index, `.opensearch-alerting-finding*`, that contains findings data for all per document monitor queries. You can search the findings index with the Alerting API search operation. To learn more, see [Search for monitor findings]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/api/#search-for-monitor-findings).
+
+The following metadata is provided for each document finding entry:
+
+* **Document** – The document ID and index name. For example: `Re5akdirhj3fl | test-logs-index`.
+* **Query** – The query name that matched the document.
+* **Time found** – The timestamp that indicates when the document was found during the runtime.
+
+You can configure an alert notification for each finding. However, to prevent a large volume of findings in a high-ingestion cluster, we don't recommend this unless your rules are well defined.
+
+
+## Create destinations
+
+1. Choose **Alerting**, **Destinations**, **Add destination**.
+1. Specify a name for the destination so that you can identify it later.
+1. For **Type**, choose Slack, Amazon Chime, custom webhook, or [email](#email-as-a-destination).
+
+For Email, refer to the [Email as a destination](#email-as-a-destination) section below. For all other types, specify the webhook URL. See the documentation for [Slack](https://api.slack.com/incoming-webhooks) and [Amazon Chime](https://docs.aws.amazon.com/chime/latest/ug/webhooks.html) to learn more about webhooks.
+
+If you're using custom webhooks, you must specify more information: parameters and headers. For example, if your endpoint requires basic authentication, you might need to add a header with a key of `Authorization` and a value of `Basic `. You might also need to change `Content-Type` to whatever your webhook requires. Popular values are `application/json`, `application/xml`, and `text/plain`.
+
+This information is stored in plain text in the OpenSearch cluster. We will improve this design in the future, but for now, the encoded credentials (which are neither encrypted nor hashed) might be visible to other OpenSearch users.
+
+
+### Email as a destination
+
+To send or receive an alert notification as an email, choose **Email** as the destination type. Next, add at least one sender and recipient. We recommend adding email groups if you want to notify more than a few people of an alert. You can configure senders and recipients using **Manage senders** and **Manage email groups**.
+
+#### Manage senders
+
+You need to specify an email account from which the Alerting plugin can send notifications.
+
+To configure a sender email, do the following:
+
+1. After you choose **Email** as the destination type, choose **Manage senders**.
+1. Choose **Add sender**, **New sender**, and enter a unique name.
+1. Enter the email address, SMTP host (e.g. `smtp.gmail.com` for a Gmail account), and the port.
+1. Choose an encryption method, or use the default value of **None**. However, most email providers require SSL or TLS, which require a username and password in the OpenSearch keystore. Refer to [Authenticate sender account](#authenticate-sender-account) to learn more.
+1. Choose **Save** to save the configuration and create the sender. You can create a sender even before you add your credentials to the OpenSearch keystore. However, you must [authenticate each sender account](#authenticate-sender-account) before you use the destination to send your alert.
+
+You can reuse senders across many different destinations, but each destination only supports one sender.
+
+
+#### Manage email groups or recipients
+
+Use email groups to create and manage reusable lists of email addresses. For example, one alert might email the DevOps team, whereas another might email the executive team and the engineering team.
+
+You can enter individual email addresses or an email group in the **Recipients** field.
+
+1. After you choose **Email** as the destination type, choose **Manage email groups**. Then choose **Add email group**, **New email group**.
+1. Enter a unique name.
+1. For recipient emails, enter any number of email addresses.
+1. Choose **Save**.
+
+
+#### Authenticate sender account
+
+If your email provider requires SSL or TLS, you must authenticate each sender account before you can send an email. Enter these credentials in the OpenSearch keystore using the CLI. Run the following commands (in your OpenSearch directory) to enter your username and password. The `` is the name you entered for **Sender** earlier.
+ +```bash +./bin/opensearch-keystore add plugins.alerting.destination.email..username +./bin/opensearch-keystore add plugins.alerting.destination.email..password +``` + +Note: Keystore settings are node-specific. You must run these commands on each node. +{: .note} + +To change or update your credentials (after you've added them to the keystore on every node), call the reload API to automatically update those credentials without restarting OpenSearch: + +```json +POST _nodes/reload_secure_settings +{ + "secure_settings_password": "1234" +} +``` + + +## Create a monitor + +1. Choose **Alerting**, **Monitors**, **Create monitor**. +1. Specify a name for the monitor. +1. Choose either **Per query monitor**, **Per bucket monitor**, **Per cluster metrics monitor**, or **Per document monitor**. + +OpenSearch supports the following types of monitors: + +- **Per query monitors** run your specified query and then check whether the query's results trigger any alerts. Per query monitors can only trigger one alert at a time. +- **Per bucket monitors** let you create buckets based on selected fields and then categorize your results into those buckets. The Alerting plugin runs each bucket's unique results against a script you define later, so you have finer control over which results should trigger alerts. Furthermore, each bucket can trigger an alert. + +The maximum number of monitors you can create is 1,000. You can change the default maximum number of alerts for your cluster by calling the cluster settings API `plugins.alerting.monitor.max_monitors`. + +1. Decide how you want to define your query and triggers. You can use any of the following methods: visual editor, query editor, or anomaly detector. + + - Visual definition works well for monitors that you can define as "some value is above or below some threshold for some amount of time." 
+ + - Query definition gives you flexibility in terms of what you query for (using [OpenSearch query DSL]({{site.url}}{{site.baseurl}}/opensearch/query-dsl/full-text/index)) and how you evaluate the results of that query (Painless scripting). + + This example averages the `cpu_usage` field: + + ```json + { + "size": 0, + "query": { + "match_all": {} + }, + "aggs": { + "avg_cpu": { + "avg": { + "field": "cpu_usage" + } + } + } + } + ``` + + You can even filter query results using `{% raw %}{{period_start}}{% endraw %}` and `{% raw %}{{period_end}}{% endraw %}`: + + ```json + { + "size": 0, + "query": { + "bool": { + "filter": [{ + "range": { + "timestamp": { + "from": "{% raw %}{{period_end}}{% endraw %}||-1h", + "to": "{% raw %}{{period_end}}{% endraw %}", + "include_lower": true, + "include_upper": true, + "format": "epoch_millis", + "boost": 1 + } + } + }], + "adjust_pure_negative": true, + "boost": 1 + } + }, + "aggregations": {} + } + ``` + + "Start" and "end" refer to the interval at which the monitor runs. See [Available variables](#available-variables). + + To define a monitor visually, choose **Visual editor**. Then choose a source index, a timeframe, an aggregation (for example, `count()` or `average()`), a data filter if you want to monitor a subset of your source index, and a group-by field if you want to include an aggregation field in your query. At least one group-by field is required if you're defining a bucket-level monitor. Visual definition works well for most monitors. + + If you use the Security plugin, you can only choose indexes that you have permission to access. For details, see [Alerting security]({{site.url}}{{site.baseurl}}/security/). + + To use a query, choose **Extraction query editor**, add your query (using [OpenSearch query DSL]({{site.url}}{{site.baseurl}}/opensearch/query-dsl/full-text/index)), and test it using the **Run** button. 
+
+   The monitor makes this query to OpenSearch as often as the schedule dictates; check the **Query Performance** section and make sure you're comfortable with the performance implications.
+
+   To use an anomaly detector, choose **Anomaly detector** and select your **Detector**.
+
+   The anomaly detection option is for pairing with the Anomaly Detection plugin. See [Anomaly Detection]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/).
+   For an anomaly detector, choose an appropriate schedule for the monitor based on the detector interval. Otherwise, the alerting monitor might miss reading the results.
+
+   For example, assume you set the monitor interval and the detector interval as 5 minutes, and you start the detector at 12:00. If an anomaly is detected at 12:05, it might be available at 12:06 because of the delay between writing the anomaly and it being available for queries. The monitor reads the anomaly results between 12:00 and 12:05, so it does not get the anomaly results available at 12:06.
+
+   To avoid this issue, make sure the alerting monitor interval is at least twice the detector interval.
+   When you create a monitor using OpenSearch Dashboards, the Anomaly Detection plugin generates a default monitor schedule that's twice the detector interval.
+
+   Whenever you update a detector's interval, make sure to update the associated monitor interval as well, as the Anomaly Detection plugin does not do this automatically.
+
+   **Note**: Anomaly detection is available only if you are defining a per query monitor.
+   {: .note}
+
+1. Choose how frequently to run your monitor. You can run it either by time intervals (minutes, hours, or days) or on a schedule. If you run it on a daily, weekly, or monthly schedule or according to a [custom cron expression]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/cron/), then you also need to provide the time zone.

+1. Add a trigger to your monitor.
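+
+The schedule you choose maps directly to the monitor's `schedule` field in the API. As a brief sketch grounded in the monthly cron example earlier on this page (the expression and time zone values shown are illustrative), a monitor that runs at 12:10 PM Pacific Time on the 1st day of every month uses the following fragment:
+
+```json
+"schedule": {
+  "cron": {
+    "expression": "10 12 1 * *",
+    "timezone": "America/Los_Angeles"
+  }
+}
+```
+
+The cron fields are minute, hour, day of month, month, and day of week, and the time zone must be a valid tz database name, as described in the earlier section on cron schedules.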
+
+
+## Create triggers
+
+Steps to create a trigger differ depending on whether you chose **Visual editor**, **Extraction query editor**, or **Anomaly detector** when you created the monitor.
+
+You begin by specifying a name and severity level for the trigger. Severity levels help you manage alerts. A trigger with a high severity level (e.g. 1) might page a specific individual, whereas a trigger with a low severity level might message a chat room.
+
+Remember that query-level monitors run your trigger's script just once against the query's results, but bucket-level monitors execute your trigger's script on each bucket, so you should create a trigger that best fits the monitor you chose. If you want to execute multiple scripts, you must create multiple triggers.
+
+### Visual editor
+
+For a query-level monitor's **Trigger condition**, specify a threshold for the aggregation and timeframe you chose earlier, such as "is below 1,000" or "is exactly 10."
+
+The line moves up and down as you increase and decrease the threshold. Once this line is crossed, the trigger evaluates to true.
+
+Bucket-level monitors also require you to specify a threshold and value for your aggregation and timeframe, but you can use a maximum of five conditions to better refine your trigger. Optionally, you can also use a keyword filter to filter for a specific field in your index.
+
+Document-level monitors provide the added option to use tags that represent multiple queries connected by the logical OR operator.
+
+To create a multiple query combination trigger, follow these steps:
+
+1. Create a per document monitor with more than one query.
+2. Create the first query with a field, an operator, and a value. For example, set the query to search for the `region` field with either operator: "is" or "is not", and set the value "us-west-2".
+3. Select **Add Tag** and give the tag a name.
+4. Create the second query and add the same tag to it.
+5. 
Now you can create the trigger condition and specify the tag name. This creates a combination trigger that checks two queries that both contain the same tag. The monitor checks both queries with a logical OR operation and if either query's conditions are met, then it will generate the alert notification. + +### Extraction query + +If you're using a query-level monitor, specify a Painless script that returns true or false. Painless is the default OpenSearch scripting language and has a syntax similar to Groovy. + +Trigger condition scripts revolve around the `ctx.results[0]` variable, which corresponds to the extraction query response. For example, your script might reference `ctx.results[0].hits.total.value` or `ctx.results[0].hits.hits[i]._source.error_code`. + +A return value of true means the trigger condition has been met, and the trigger should execute its actions. Test your script using the **Run** button. + +The **Info** link next to **Trigger condition** contains a useful summary of the variables and results available to your query. +{: .tip } + +Bucket-level monitors require you to specify more information in your trigger condition. At a minimum, you must have the following fields: + +- `buckets_path`, which maps variable names to metrics to use in your script. +- `parent_bucket_path`, which is a path to a multi-bucket aggregation. The path can include single-bucket aggregations, but the last aggregation must be multi-bucket. For example, if you have a pipeline such as `agg1>agg2>agg3`, `agg1` and `agg2` are single-bucket aggregations, but `agg3` must be a multi-bucket aggregation. +- `script`, which is the script that OpenSearch runs to evaluate whether to trigger any alerts. 
+
+For example, you might have a script that looks like the following:
+
+```json
+{
+  "buckets_path": {
+    "count_var": "_count"
+  },
+  "parent_bucket_path": "composite_agg",
+  "script": {
+    "source": "params.count_var > 5"
+  }
+}
+```
+
+After mapping the `count_var` variable to the `_count` metric, you can use `count_var` in your script and reference `_count` data. Finally, `composite_agg` is a path to a multi-bucket aggregation.
+
+### Anomaly detector
+
+For **Trigger type**, choose **Anomaly detector grade and confidence**.
+
+Specify the **Anomaly grade condition** for the aggregation and timeframe you chose earlier, such as "IS ABOVE 0.7" or "IS EXACTLY 0.5." The *anomaly grade* is a number between 0 and 1 that indicates how anomalous a data point is.
+
+Specify the **Anomaly confidence condition** for the aggregation and timeframe you chose earlier, such as "IS ABOVE 0.7" or "IS EXACTLY 0.5." The *anomaly confidence* is an estimate of the probability that the reported anomaly grade matches the expected anomaly grade.
+
+The line moves up and down as you increase and decrease the threshold. Once this line is crossed, the trigger evaluates to true.
+
+
+#### Sample scripts
+
+{::comment}
+These scripts are Painless, not Groovy, but calling them Groovy in Jekyll gets us syntax highlighting in the generated HTML.
+{:/comment} + +```groovy +// Evaluates to true if the query returned any documents +ctx.results[0].hits.total.value > 0 +``` + +```groovy +// Returns true if the avg_cpu aggregation exceeds 90 +if (ctx.results[0].aggregations.avg_cpu.value > 90) { + return true; +} +``` + +```groovy +// Performs some crude custom scoring and returns true if that score exceeds a certain value +int score = 0; +for (int i = 0; i < ctx.results[0].hits.hits.length; i++) { + // Weighs 500 errors 10 times as heavily as 503 errors + if (ctx.results[0].hits.hits[i]._source.http_status_code == "500") { + score += 10; + } else if (ctx.results[0].hits.hits[i]._source.http_status_code == "503") { + score += 1; + } +} +if (score > 99) { + return true; +} else { + return false; +} +``` + + +### Available variables + +You can include the following variables in your message using Mustache templates to see more information about your monitors. + +#### Monitor variables Variable | Data type | Description :--- | :--- | :--- @@ -55,81 +366,160 @@ Variable | Data type | Description `ctx.monitor.inputs.search.indices` | Array | An array that contains the indexes the monitor observes. `ctx.monitor.inputs.search.query` | N/A | The definition used to define the monitor. -## Creating per document monitors -Introduced 2.0 -{: .label .label-purple } +#### Trigger variables -Per document monitors allow you to define up to 10 queries that compare a selected field with a desired value. You can define supported field data types using the following operators: +Variable | Data type | Description +:--- | :--- | : --- +`ctx.trigger.id` | String | The trigger's ID. +`ctx.trigger.name` | String | The trigger's name. +`ctx.trigger.severity` | String | The trigger's severity. +`ctx.trigger.condition`| Object | Contains the Painless script used when creating the monitor. +`ctx.trigger.condition.script.source` | String | The language used to define the script. Must be painless. 
+`ctx.trigger.condition.script.lang` | String | The scripting language used to define the script. Must be `painless`.
+`ctx.trigger.actions`| Array | An array with one element that contains information about the action the monitor needs to trigger.

-- `is`
-- `is not`
-- `is greater than`
-- `is greater than equal`
-- `is less than`
-- `is less than equal`
+#### Action variables

-You can query each trigger using up to 10 tags, adding the tag as a single trigger condition instead of specifying a single query. The Alerting plugin processes the trigger conditions from all queries as a logical `OR` operation, so if any of the query conditions are met, it triggers an alert. The Alerting plugin then tells the Notifications plugin to send the alert notification to a channel.
+Variable | Data type | Description
+:--- | :--- | :---
+`ctx.trigger.actions.id` | String | The action's ID.
+`ctx.trigger.actions.name` | String | The action's name.
+`ctx.trigger.actions.message_template.source` | String | The message to send in the alert.
+`ctx.trigger.actions.message_template.lang` | String | The scripting language used to define the message. Must be `mustache`.
+`ctx.trigger.actions.throttle_enabled` | Boolean | Whether throttling is enabled for this trigger. See [adding actions](#add-actions) for more information about throttling.
+`ctx.trigger.actions.subject_template.source` | String | The message's subject in the alert.
+`ctx.trigger.actions.subject_template.lang` | String | The scripting language used to define the subject. Must be `mustache`.

-The Alerting plugin also creates a list of document findings that contain metadata about which document matches each query. Security Analytics can use the document findings data to keep track of and analyze the query data separately from the alert processes.
+#### Other variables

-The Alerting API provides a _document-level monitor_ that programmatically accomplishes the same function as the _per document monitor_ in OpenSearch Dashboards.
See [Document-level monitors]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/api/#document-level-monitors) to learn more.
-{: .note}
+Variable | Data type | Description
+:--- | :--- | :---
+`ctx.results` | Array | An array with one element (i.e. `ctx.results[0]`). Contains the query results. This variable is empty if the trigger was unable to retrieve results. See `ctx.error`.
+`ctx.last_update_time` | Milliseconds | Unix epoch time of when the monitor was last updated.
+`ctx.periodStart` | String | Unix timestamp for the beginning of the period during which the alert triggered. For example, if a monitor runs every ten minutes, a period might begin at 10:40 and end at 10:50.
+`ctx.periodEnd` | String | The end of the period during which the alert triggered.
+`ctx.error` | String | The error message if the trigger was unable to retrieve results or unable to evaluate the trigger, typically due to a compile error or null pointer exception. Null otherwise.
+`ctx.alert` | Object | The current, active alert (if it exists). Includes `ctx.alert.id`, `ctx.alert.version`, and `ctx.alert.isAcknowledged`. Null if no alert is active. Only available with query-level monitors.
+`ctx.dedupedAlerts` | Object | Alerts that have already been triggered. OpenSearch keeps the existing alert to prevent the plugin from creating endless copies of the same alert. Only available with bucket-level monitors.
+`ctx.newAlerts` | Object | Newly created alerts. Only available with bucket-level monitors.
+`ctx.completedAlerts` | Object | Alerts that are no longer ongoing. Only available with bucket-level monitors.
+`bucket_keys` | String | Comma-separated list of the monitor's bucket key values. Available only for `ctx.dedupedAlerts`, `ctx.newAlerts`, and `ctx.completedAlerts`. Accessed through `ctx.dedupedAlerts[0].bucket_keys`.
+`parent_bucket_path` | String | The parent bucket path of the bucket that triggered the alert. Accessed through `ctx.dedupedAlerts[0].parent_bucket_path`.
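+
+These variables can be combined in an extraction query trigger condition. For example, the following sketch (written in Painless, like the sample scripts above, and assuming the same `ctx.results` response shape) avoids evaluating results when the monitor query failed:
+
+```groovy
+// Hedged sketch: if the query could not retrieve results, don't alert
+if (ctx.error != null) {
+  return false;
+}
+// Otherwise, alert when the query matched at least one document
+return ctx.results[0].hits.total.value > 0;
+```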
-### Searching document findings -When a per document monitor runs a query that matches a document in an index, a finding is created. OpenSearch provides a findings index, `.opensearch-alerting-finding*`, that contains findings data for all per document monitor queries. You can search the findings index with the Alerting API search operation. See [Search the findings index]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/api/#search-the-findings-index) for more information. +## Add actions -The following metadata is provided for each document findings entry: +The final step in creating a monitor is to add one or more actions. Actions send notifications when trigger conditions are met. See the [Notifications plugin]({{site.url}}{{site.baseurl}}/notifications-plugin/index) to see what communication channels are supported. -* **Document** – The document ID and index name. For example: `Re5akdirhj3fl | test-logs-index`. -* **Query** – The query name that matched the document. -* **Time found** – The timestamp that indicates when the document was found during the runtime. +If you don't want to receive notifications for alerts, you don't have to add actions to your triggers. Instead, you can periodically check OpenSearch Dashboards. +{: .tip } -To prevent a large volume of findings in a high-ingestion cluster, configuring alert notifications for each finding is not recommended unless rules are well defined. -{: .important} +1. Specify a name for the action. +1. Choose a [notification channel]({{site.url}}{{site.baseurl}}/notifications-plugin/index). +1. Add a subject and body for the message. -## Creating cluster metrics monitors + You can add variables to your messages using [Mustache templates](https://mustache.github.io/mustache.5.html). You have access to `ctx.action.name`, the name of the current action, as well as all [trigger variables](#available-variables). 
+ + If your destination is a custom webhook that expects a particular data format, you might need to include JSON (or even XML) directly in the message body: + + ```json + {% raw %}{ "text": "Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue. - Trigger: {{ctx.trigger.name}} - Severity: {{ctx.trigger.severity}} - Period start: {{ctx.periodStart}} - Period end: {{ctx.periodEnd}}" }{% endraw %} + ``` + + In this case, the message content must conform to the `Content-Type` header in the [custom webhook]({{site.url}}{{site.baseurl}}/notifications-plugin/index). +1. If you're using a bucket-level monitor, you can choose whether the monitor should perform an action for each execution or for each alert. + +1. (Optional) Use action throttling to limit the number of notifications you receive within a given span of time. + + For example, if a monitor checks a trigger condition every minute, you could receive one notification per minute. If you set action throttling to 60 minutes, you receive no more than one notification per hour, even if the trigger condition is met dozens of times in that hour. + +1. Choose **Create**. + +After an action sends a message, the content of that message has left the purview of the Security plugin. Securing access to the message (e.g. access to the Slack channel) is your responsibility. + + +#### Sample message + +```mustache +{% raw %}Monitor {{ctx.monitor.name}} just entered an alert state. Please investigate the issue. +- Trigger: {{ctx.trigger.name}} +- Severity: {{ctx.trigger.severity}} +- Period start: {{ctx.periodStart}} +- Period end: {{ctx.periodEnd}}{% endraw %} +``` + +If you want to use the `ctx.results` variable in a message, use `{% raw %}{{ctx.results.0}}{% endraw %}` rather than `{% raw %}{{ctx.results[0]}}{% endraw %}`. This difference is due to how Mustache handles bracket notation. +{: .note } + +### Questions about destinations + +Q: What plugins do I need installed besides Alerting? 
+
+A: To continue using the notification action in the Alerting plugin, you need to install the backend plugins `notifications-core` and `notifications`. You can also install the Notifications Dashboards plugin to manage notification channels via OpenSearch Dashboards.
+
+Q: Can I still create destinations?
+A: No, destinations have been deprecated and can no longer be created or edited.
+
+Q: Will I need to move my destinations to the Notifications plugin?
+A: No. For users who upgrade, a background process will automatically move destinations to notification channels. These channels will have the same IDs as the destinations, and monitor execution will choose the correct ID, so you don't have to make any changes to the monitor's definition. The migrated destinations will be deleted.
+
+Q: What happens if any destinations fail to migrate?
+A: If a destination fails to migrate, the monitor will continue using it until the monitor is migrated to a notification channel. You don't need to do anything in this case.
+
+Q: Do I need to install the Notifications plugins if monitors can still use destinations?
+A: Yes. The fallback to destinations only prevents failures in sending messages if migration fails; the Notifications plugin is what actually sends the message. Not having the Notifications plugin installed will lead to the action failing.
+
+
+## Work with alerts
+
+Alerts persist until you resolve the root cause and have the following states:
+
+State | Description
+:--- | :---
+Active | The alert is ongoing and unacknowledged. Alerts remain in this state until you acknowledge them, delete the trigger associated with the alert, or delete the monitor entirely.
+Acknowledged | Someone has acknowledged the alert, but not fixed the root cause.
+Completed | The alert is no longer ongoing. Alerts enter this state after the corresponding trigger evaluates to false.
+Error | An error occurred while executing the trigger, usually the result of a bad trigger or destination.
+Deleted | Someone deleted the monitor or trigger associated with this alert while the alert was ongoing.
+
+
+## Create a cluster metrics monitor
 
 In addition to monitoring conditions for indexes, the Alerting plugin allows monitoring conditions for clusters.
 Alerts can be set by cluster metrics to watch for the following conditions:
 
-- The cluster health status is yellow or red.
-- Cluster-level metrics, such as CPU usage and JVM memory usage, reach a specified threshold.
-- Node-level metrics, such as available disk space, JVM memory usage, and CPU usage, reach a specified threshold.
-- The total number of documents stores reaches a specified threshold.
+- The health of your cluster reaches a status of yellow or red
+- Cluster-level metrics, such as CPU usage and JVM memory usage, reach specified thresholds
+- Node-level metrics, such as available disk space, JVM memory usage, and CPU usage, reach a specified threshold
+- The total number of documents stored reaches a specified amount
 
 To create a cluster metrics monitor:
 
-1. In the **OpenSearch Plugins** main menu, select **Alerting**.
-1. Select **Monitors**, then **Create monitor**.
-1. Select **Per cluster metrics monitor**.
-1. In the Query section, select **Request type** from the dropdown.
-1. To filter the API response to use only certain path parameters, enter those parameters in the **Query parameters** field. Most APIs that can be used to monitor cluster status support path parameters, as described in their respective documentation (for example, comma-separated lists of index names).
-1. In the Triggers section, define the conditions that will trigger an alert. The trigger condition auto-populates a Painless `ctx` variable.
For example, a cluster monitor watching for cluster stats uses the trigger condition `ctx.results[0].indices.count <= 0`, which triggers an alert based on the number of indexes returned by the query. For more specificity, add any additional Painless conditions supported by the API. To preview the condition response, select **Preview condition response**.
-1. In the Actions section, indicate how users are to be notified when a trigger condition is met.
-1. Select **Create**. The new monitor is listed under **Monitors**.
+1. Select **Alerting** > **Monitors** > **Create monitor**.
+2. Select the **Per cluster metrics monitor** option.
+3. In the Query section, pick the **Request type** from the dropdown.
+4. (Optional) If you want to filter the API response to use only certain path parameters, enter those parameters under **Query parameters**. Most APIs that can be used to monitor cluster status support path parameters as described in their documentation (e.g., comma-separated lists of index names).
+5. In the Triggers section, indicate what conditions trigger an alert. The trigger condition auto-populates a Painless `ctx` variable. For example, a cluster monitor watching for cluster stats uses the trigger condition `ctx.results[0].indices.count <= 0`, which triggers an alert based on the number of indexes returned by the query. For more specificity, add any additional Painless conditions supported by the API. To see an example of the condition response, select **Preview condition response**.
+6. In the Actions section, indicate how you want your users to be notified when a trigger condition is met.
+7. Select **Create**. Your new monitor appears in the **Monitors** list.
 
 ### Supported APIs
+Trigger conditions use responses from the following API endpoints. Most APIs that can be used to monitor cluster status support path parameters, as described in their documentation (for example, comma-separated lists of index names). However, they do not support query parameters.

-- [_cluster/health]({{site.url}}{{site.baseurl}}/api-reference/cluster-health/)
-- [_cluster/stats]({{site.url}}{{site.baseurl}}/api-reference/cluster-stats/)
-- [_cluster/settings]({{site.url}}{{site.baseurl}}/api-reference/cluster-settings/)
-- [_nodes/stats]({{site.url}}{{site.baseurl}}/opensearch/popular-api/#get-node-statistics)
-- [_cat/indices]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-indices/)
-- [_cat/pending_tasks]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-pending-tasks/)
-- [_cat/recovery]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-recovery/)
-- [_cat/shards]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-shards/)
-- [_cat/snapshots]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-snapshots/)
-- [_cat/tasks]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-tasks/)
+- [_cluster/health]({{site.url}}{{site.baseurl}}/api-reference/cluster-health/)
+- [_cluster/stats]({{site.url}}{{site.baseurl}}/api-reference/cluster-stats/)
+- [_cluster/settings]({{site.url}}{{site.baseurl}}/api-reference/cluster-settings/)
+- [_nodes/stats]({{site.url}}{{site.baseurl}}/opensearch/popular-api/#get-node-statistics)
+- [_cat/pending_tasks]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-pending-tasks/)
+- [_cat/recovery]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-recovery/)
+- [_cat/snapshots]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-snapshots/)
+- 
[_cat/tasks]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-tasks/)

-### Restricting API fields
+### Restrict API fields

-To hide fields from being exposed in the API response, reconfigure the [supported_json_payloads.json](https://github.com/opensearch-project/alerting/blob/main/alerting/src/main/resources/org/opensearch/alerting/settings/supported_json_payloads.json) file inside the Alerting plugin. The file functions as an allow list for the API fields you want to use in an alert. By default, all APIs and their parameters can be used for monitors and trigger conditions.
+To hide API response fields that you do not want exposed for alerting, reconfigure the [supported_json_payloads.json](https://github.com/opensearch-project/alerting/blob/main/alerting/src/main/resources/org/opensearch/alerting/settings/supported_json_payloads.json) file inside the Alerting plugin. The file functions as an allow list for the API fields you want to use in an alert. By default, all APIs and their parameters can be used for monitors and trigger conditions.

-You can modify the file so that cluster metric monitors can only be created for referenced APIs. Only fields referenced in the supported files can create trigger conditions. The `supported_json_payloads.json` file allows for a cluster metrics monitor to be created for the `_cluster/stats` API and triggers conditions for the `indices.shards.total` and `indices.shards.index.shards.min` fields.
-
-#### Example
+However, you can modify the file so that cluster metrics monitors can be created only for the referenced APIs. Furthermore, only fields referenced in the supported files can be used in trigger conditions. For example, the following `supported_json_payloads.json` file allows a cluster metrics monitor to be created for the `_cluster/stats` API and trigger conditions to be set for the `indices.shards.total` and `indices.shards.index.shards.min` fields.
```json
"/_cluster/stats": {
@@ -142,11 +532,11 @@ You can modify the file so that cluster metric monitors can only be created for

### Painless triggers

-Painless scripts define triggers for cluster metrics monitors, similar to query- or bucket-level monitors that are defined using the extraction query definition option. Painless scripts comprise at least one statement and any additional functions you want to run. The cluster metrics monitor supports up to **10** triggers.
+Painless scripts define triggers for cluster metrics monitors, similar to query- or bucket-level monitors that are defined using the extraction query definition option. Painless scripts consist of at least one statement and any additional functions you want to run.

-In the following example, a JSON object defines a trigger that sends an alert when the cluster health is yellow. `script` points the `source` to the Painless script `ctx.results[0].status == \"yellow\`.
+The cluster metrics monitor supports up to **10** triggers.

-#### Example
+In the following example, a JSON object defines a trigger that sends an alert when the cluster health is yellow. `script` points the `source` to the Painless script `ctx.results[0].status == \"yellow\"`.

```json
{
@@ -189,17 +579,13 @@ In the following example, a JSON object defines a trigger that sends an alert wh
}
```

+For more Painless `ctx` options, see [trigger variables](#trigger-variables).
+
### Limitations

-The cluster metrics monitor has the following limitations:
+Currently, the cluster metrics monitor has the following limitations:

-- Monitors cannot be created for remote clusters.
-- The OpenSearch cluster must be in a state where an index's conditions can be monitored and actions can be run against the index.
-- Removing resource permissions from a user does not prevent that user’s preexisting monitors for that resource from running.
-- Users with permissions to create monitors are not blocked from creating monitors for resources for which they do not have permissions. While the monitors will run, they will not be able to run the API calls, and a permissions alert will be generated, for example, `no permissions for [cluster:monitor/health]`. - -## Next steps - -- Learn about [Triggers]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/triggers/). -- Learn about [Actions]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/actions/). -- Learn about [Notifications]({{site.url}}{{site.baseurl}}/observing-your-data/notifications/index/). +- You cannot create monitors for remote clusters. +- The OpenSearch cluster must be in a state where an index's conditions can be monitored and actions can be executed against the index. +- Removing resource permissions from a user will not prevent that user’s preexisting monitors for that resource from executing. +- Users with permissions to create monitors are not blocked from creating monitors for resources for which they do not have permissions; however, those monitors will not execute. 
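+The pieces described in this section come together in a single monitor definition. The following is a hedged sketch of creating a cluster metrics monitor through the same `_plugins/_alerting/monitors` endpoint used for query-level monitors; the exact field names (`monitor_type`, `api_type`, and the `uri` input) follow the cluster metrics monitor request format as I understand it and should be verified against your OpenSearch version. The trigger reuses a cluster health condition like the one shown earlier.
+
+```json
+POST _plugins/_alerting/monitors
+{
+  "type": "monitor",
+  "monitor_type": "cluster_metrics_monitor",
+  "name": "cluster-health-monitor",
+  "enabled": true,
+  "schedule": {
+    "period": { "interval": 1, "unit": "MINUTES" }
+  },
+  "inputs": [{
+    "uri": {
+      "api_type": "CLUSTER_HEALTH",
+      "path": "_cluster/health/",
+      "path_params": "",
+      "url": "http://localhost:9200/_cluster/health/"
+    }
+  }],
+  "triggers": [{
+    "query_level_trigger": {
+      "name": "yellow-status-trigger",
+      "severity": "1",
+      "condition": {
+        "script": {
+          "source": "ctx.results[0].status == \"yellow\"",
+          "lang": "painless"
+        }
+      },
+      "actions": []
+    }
+  }]
+}
+```
+
+Because `_cluster/health` is one of the supported APIs listed above, the monitor's trigger condition can reference any field of its response through `ctx.results[0]`, subject to the allow list in `supported_json_payloads.json`.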
diff --git a/images/alerting/comp-details-alerts.png b/images/alerting/comp-details-alerts.png new file mode 100644 index 00000000..b94c9de1 Binary files /dev/null and b/images/alerting/comp-details-alerts.png differ diff --git a/images/alerting/extract-q-editor.png b/images/alerting/extract-q-editor.png new file mode 100644 index 00000000..5806d9af Binary files /dev/null and b/images/alerting/extract-q-editor.png differ diff --git a/images/alerting/plus-sign-icon.png b/images/alerting/plus-sign-icon.png new file mode 100644 index 00000000..c7350aa7 Binary files /dev/null and b/images/alerting/plus-sign-icon.png differ diff --git a/images/alerting/trash-can-icon.png b/images/alerting/trash-can-icon.png new file mode 100644 index 00000000..186439fd Binary files /dev/null and b/images/alerting/trash-can-icon.png differ diff --git a/images/alerting/trigger1.png b/images/alerting/trigger1.png new file mode 100644 index 00000000..185d1aef Binary files /dev/null and b/images/alerting/trigger1.png differ diff --git a/images/alerting/vis-editor.png b/images/alerting/vis-editor.png new file mode 100644 index 00000000..5ac73dc0 Binary files /dev/null and b/images/alerting/vis-editor.png differ diff --git a/images/dashboards/view-monitor-icon.png b/images/dashboards/view-monitor-icon.png new file mode 100644 index 00000000..8af4e123 Binary files /dev/null and b/images/dashboards/view-monitor-icon.png differ