@@ -2733,8 +2733,6 @@ GET _plugins/_anomaly_detection/detectors/<detectorId>/_profile?_all=true
```json
{
  "category_field": "host",
  "value": "i-00f28ec1eb8997686",
  "is_active": true,
  "last_active_timestamp": 1604026394879,
  "last_sample_timestamp": 1604026394879,
@@ -17,12 +17,10 @@ Anomaly detection automatically detects anomalies in your OpenSearch data in ne
You can pair the anomaly detection plugin with the [alerting plugin]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/) to notify you as soon as an anomaly is detected.
## Get started with Anomaly Detection
To get started, choose **Anomaly Detection** in OpenSearch Dashboards.
To first test with sample streaming data, you can try out one of the preconfigured detectors with one of the sample datasets.
-### Step 1: Define a detector
+## Step 1: Define a detector
A detector is an individual anomaly detection task. You can define multiple detectors, and all the detectors can run simultaneously, with each analyzing data from different sources.
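If you prefer to work with the REST API instead of OpenSearch Dashboards, you can define the same detector with a create request. The following is a minimal sketch; the `server-metrics` index, the `timestamp` and `cpu_usage` fields, and the interval values are placeholders for your own data:

```json
POST _plugins/_anomaly_detection/detectors
{
  "name": "cpu-usage-detector",
  "description": "Detects anomalies in average CPU usage",
  "time_field": "timestamp",
  "indices": ["server-metrics"],
  "feature_attributes": [
    {
      "feature_name": "avg_cpu_usage",
      "feature_enabled": true,
      "aggregation_query": {
        "avg_cpu_usage": {
          "avg": { "field": "cpu_usage" }
        }
      }
    }
  ],
  "detection_interval": {
    "period": { "interval": 10, "unit": "Minutes" }
  },
  "window_delay": {
    "period": { "interval": 1, "unit": "Minutes" }
  }
}
```

The `detection_interval` and `window_delay` values correspond to the detector interval and window delay settings described in this step.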
@@ -45,7 +43,7 @@ Setting the window delay to 1 minute shifts the interval window to 1:49 - 1:59,
After you define the detector, the next step is to configure the model.
-### Step 2: Configure the model
+## Step 2: Configure the model
#### Add features to your detector
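Each feature maps to an aggregation that runs against a field in your source index. As a rough sketch, a feature that tracks maximum request latency (the `latency_ms` field is a placeholder) is one entry in the detector's `feature_attributes` array:

```json
{
  "feature_name": "max_latency",
  "feature_enabled": true,
  "aggregation_query": {
    "max_latency": {
      "max": { "field": "latency_ms" }
    }
  }
}
```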
@@ -100,7 +98,7 @@ Examine the sample preview and use it to fine-tune your feature configurations (
- If you don't see any sample anomaly result, check the detector interval and make sure you have more than 400 data points for some entities during the preview date range.
1. Choose **Next**.
-### Step 3: Set up detector jobs
+## Step 3: Set up detector jobs
To start a real-time detector to find anomalies in your data in near real-time, check **Start real-time detector automatically (recommended)**.
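If you manage detectors programmatically, starting the real-time detector job is a single API call, where `<detectorId>` is the ID returned when the detector was created:

```json
POST _plugins/_anomaly_detection/detectors/<detectorId>/_start
```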
@@ -110,11 +108,11 @@ Analyzing historical data helps you get familiar with the anomaly detection plug
We recommend experimenting with historical analysis using different feature sets and checking the precision before moving on to real-time detectors.
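To run historical analysis from the API instead of OpenSearch Dashboards, you can pass a date range (in epoch milliseconds) to the start call; the timestamps below are placeholders:

```json
POST _plugins/_anomaly_detection/detectors/<detectorId>/_start
{
  "start_time": 1503168590000,
  "end_time": 1617301236160
}
```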
-### Step 4: Review and create
+## Step 4: Review and create
Review your model configuration and select **Create detector**.
-### Step 5: Observe the results
+## Step 5: Observe the results
Choose the **Real-time results** or **Historical analysis** tab. For real-time results, you need to wait for some time to see the anomaly results. If the detector interval is 10 minutes, the detector might take more than an hour to start, as it's waiting for sufficient data to generate anomalies.
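If you want to inspect raw results outside of the charts, you can query them with the search detector results API. The following sketch returns only the anomalous intervals for one detector; `<detectorId>` is a placeholder:

```json
GET _plugins/_anomaly_detection/detectors/results/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "detector_id": "<detectorId>" } },
        { "range": { "anomaly_grade": { "gt": 0 } } }
      ]
    }
  }
}
```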
@@ -141,14 +139,13 @@ If you set the category field, you see an additional **Heat map** chart. The hea
Choose and drag over the anomaly line chart to zoom in and see a more detailed view of an anomaly.
{: .note }
-### Step 4: Set up alerts
+## Step 6: Set up alerts
Under **Real-time results**, choose **Set up alerts** and configure a monitor to notify you when anomalies are detected. For steps to create a monitor and set up notifications based on your anomaly detector, see [Monitors]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/monitors/).
If you stop or delete a detector, make sure to delete any monitors associated with it.
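If you create the monitor through the alerting API instead of the **Set up alerts** flow, the monitor typically searches the anomaly result indices for the detector. The following is a rough sketch only; the detector ID, anomaly grade threshold, schedule, and trigger are placeholders, and in practice you would add at least one notification action:

```json
POST _plugins/_alerting/monitors
{
  "type": "monitor",
  "name": "cpu-anomaly-monitor",
  "enabled": true,
  "schedule": {
    "period": { "interval": 10, "unit": "MINUTES" }
  },
  "inputs": [
    {
      "search": {
        "indices": [".opendistro-anomaly-results*"],
        "query": {
          "query": {
            "bool": {
              "filter": [
                { "term": { "detector_id": "<detectorId>" } },
                { "range": { "anomaly_grade": { "gt": 0.7 } } }
              ]
            }
          }
        }
      }
    }
  ],
  "triggers": [
    {
      "name": "anomaly-detected",
      "severity": "1",
      "condition": {
        "script": {
          "source": "ctx.results[0].hits.total.value > 0",
          "lang": "painless"
        }
      },
      "actions": []
    }
  ]
}
```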
-### Step 5: Adjust the model
+## Step 7: Adjust the model
To see all the configuration settings for a detector, choose the **Detector configuration** tab.
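The same configuration is also available from the API if you need it outside of OpenSearch Dashboards:

```json
GET _plugins/_anomaly_detection/detectors/<detectorId>
```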
@@ -156,7 +153,7 @@ To see all the configuration settings for a detector, choose the **Detector conf
- You need to stop real-time and historical analysis to change its configuration. Confirm that you want to stop the detector and proceed.
1. To enable or disable features, in the **Features** section, choose **Edit** and adjust the feature settings as needed. After you make your changes, choose **Save and start detector**.
-### Step 8: Manage your detectors
+## Step 8: Manage your detectors
To start, stop, or delete a detector, go to the **Detectors** page.
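The equivalent API calls, with `<detectorId>` as a placeholder, are:

```json
POST _plugins/_anomaly_detection/detectors/<detectorId>/_start
POST _plugins/_anomaly_detection/detectors/<detectorId>/_stop
DELETE _plugins/_anomaly_detection/detectors/<detectorId>
```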
@@ -39,7 +39,7 @@ Setting | Default | Description
`plugins.anomaly_detection.batch_task_piece_size` | 1,000 | The date range for a historical task is split into smaller pieces and the anomaly detection plugin runs the task piece by piece. Each piece contains 1,000 detection intervals by default. For example, if the detector interval is 1 minute and one piece is 1,000 minutes, the feature data is queried every 1,000 minutes. You can change this setting from 1 to 10,000.
`plugins.anomaly_detection.batch_task_piece_interval_seconds` | 5 | Add a time interval between two pieces of the same historical analysis task. This interval prevents the task from consuming too much of the available resources and starving other operations like search and bulk index. You can change this setting from 1 to 600 seconds.
`plugins.anomaly_detection.max_top_entities_for_historical_analysis` | 1,000 | The maximum number of top entities to run in historical analysis for a high cardinality detector. The range is from 1 to 10,000.
-`plugins.anomaly_detection.max_running_entities_per_detector_for_historical_analysis` | 10 | The number of entity tasks that you can run in parallel for a single high cardinality detector historical analysis. The task slots available on your cluster also impact how many entities run in parallel. If a cluster has 3 data nodes, each data node has 10 task slots by default. Say you already have two high cardinality detectors and each of them run 10 entities. If you start a single-entity detector that takes 1 task slot, the number of task slots available is 10 * 3 - 10 * 2 - 1 = 9. If you now start a new high cardinality detector, the detector can only run 9 entities in parallel and not 10. You can tune this value from 1 to 1,000 based on your cluster's capability. If you set a higher value, the anomaly detection plugin runs historical analysis faster but also consumes more resources.
+`plugins.anomaly_detection.max_running_entities_per_detector_for_historical_analysis` | 10 | The number of entity tasks that you can run in parallel for a high cardinality detector analysis. The task slots available on your cluster also impact how many entities run in parallel. If a cluster has 3 data nodes, each data node has 10 task slots by default. Say you already have two high cardinality detectors and each of them runs 10 entities. If you start a single-entity detector that takes 1 task slot, the number of task slots available is 10 * 3 - 10 * 2 - 1 = 9. If you now start a new high cardinality detector, the detector can only run 9 entities in parallel, not 10. You can tune this value from 1 to 1,000 based on your cluster's capability. If you set a higher value, the anomaly detection plugin runs historical analysis faster but also consumes more resources.
`plugins.anomaly_detection.max_cached_deleted_tasks` | 1,000 | You can rerun historical analysis for a single detector as many times as you like. The anomaly detection plugin only keeps a limited number of old tasks, by default 1 old task. If you run historical analysis three times for a detector, the oldest task is deleted. Because historical analysis generates a number of anomaly results in a short span of time, it's necessary to clean up anomaly results for a deleted task. This setting configures the maximum number of deleted tasks that the plugin caches. The plugin cleans up a task's results when the task is deleted. If the plugin fails to do this cleanup, it adds the task's results into a cache and an hourly cron job performs the cleanup. You can use this setting to limit how many old tasks are put into the cache to avoid a DDoS attack. After an hour, if you still find an old task result in the cache, use the [delete detector results API]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/api/#delete-detector-results) to delete the task result manually. You can tune this setting from 1 to 10,000.
`plugins.anomaly_detection.delete_anomaly_result_when_delete_detector` | False | Whether the anomaly detection plugin deletes the anomaly results when you delete a detector. If you want to save some disk space, especially if you have high cardinality detectors generating a lot of results, set this field to true. Alternatively, you can use the [delete detector results API]({{site.url}}{{site.baseurl}}/monitoring-plugins/ad/api/#delete-detector-results) to manually delete the results.
`plugins.anomaly_detection.dedicated_cache_size` | 10 | If the real-time analysis of a high cardinality detector starts successfully, the anomaly detection plugin guarantees keeping 10 (dynamically adjustable via this setting) entities' models in memory per node. If the number of entities exceeds this limit, the plugin puts the extra entities' models in a memory space shared by all detectors. The actual number of entities varies based on the memory that you have available and the frequencies of the entities. If you'd like the plugin to guarantee keeping more entities' models in memory and your cluster has sufficient memory, you can increase this setting value.
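
These are dynamic cluster settings, so you can adjust them at runtime with the cluster settings API. For example, the following request raises `plugins.anomaly_detection.dedicated_cache_size`; the value 20 is only an illustration:

```json
PUT _cluster/settings
{
  "persistent": {
    "plugins.anomaly_detection.dedicated_cache_size": 20
  }
}
```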