Merge branch 'main' into 475-doc-level-alerts

This commit is contained in:
Alice Williams 2022-05-26 13:58:32 -07:00 committed by GitHub
commit 95a614ebb5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
27 changed files with 1293 additions and 43 deletions

View File

@ -12,7 +12,14 @@ redirect_from:
Historically, many popular agents and ingestion tools have worked with Elasticsearch OSS, such as Beats, Logstash, Fluentd, FluentBit, and OpenTelemetry. OpenSearch aims to continue to support a broad set of agents and ingestion tools, but not all have been tested or have explicitly added OpenSearch compatibility.
Previously, an intermediate compatibility solution was available. OpenSearch had a setting that instructed the cluster to return version 7.10.2 rather than its actual version.
The override main response setting `compatibility.override_main_response_version` was deprecated in OpenSearch 1.x and removed in OpenSearch 2.0.0. This setting is no longer supported for compatibility with legacy clients.
{: .note}
<!--
{: .note}
If you use clients that include a version check, such as versions of Logstash OSS or Filebeat OSS between 7.x - 7.12.x, enable the setting:
@ -32,7 +39,7 @@ PUT _cluster/settings
```yml
compatibility.override_main_response_version: true
```
-->
Logstash OSS 8.0 introduces a breaking change where all plugins run in ECS compatibility mode by default. If you use a compatible [OSS client](#compatibility-matrices), you must override the default value to maintain legacy behavior:
```yml
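# Assumed completion of this truncated example: the standard Logstash
# option that disables ECS compatibility mode for all pipelines.
pipeline.ecs_compatibility: disabled
```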

View File

@ -26,6 +26,10 @@ For example, a 1.0.0 client works with an OpenSearch 1.1.0 cluster, but might no
Most clients that work with Elasticsearch OSS 7.10.2 *should* work with OpenSearch, but the latest versions of those clients might include license or version checks that artificially break compatibility. This page includes recommendations around which versions of those clients to use for best compatibility with OpenSearch.
OpenSearch 2.0.0 no longer supports compatibility with legacy clients. Due to breaking changes with REST APIs, some features are not supported when using OpenSearch 1.x clients to connect to OpenSearch 2.0.
{: .note}
Client | Recommended version
:--- | :---
[Java low-level REST client](https://search.maven.org/artifact/org.elasticsearch.client/elasticsearch-rest-client/7.13.4/jar) | 7.13.4

View File

@ -6,7 +6,7 @@ nav_order: 100
# JavaScript client
The OpenSearch JavaScript (JS) client provides a safer and easier way to interact with your OpenSearch cluster. Rather than using OpenSearch from the browser and potentially exposing your data to the public, you can build an OpenSearch client that takes care of sending requests to your cluster.
The client contains a library of APIs that let you perform different operations on your cluster and return a standard response body. The example here demonstrates some basic operations like creating an index, adding documents, and searching your data.
@ -139,3 +139,21 @@ async function search() {
search().catch(console.log);
```
## Circuit Breaker
The `memoryCircuitBreaker` parameter in the [Cluster Settings API]({{site.url}}{{site.baseurl}}/opensearch/rest-api/cluster-settings/) lets you reject large query responses whose size could crash OpenSearch Dashboards. To set the Circuit Breaker, use the `POST _cluster/settings` API operation on your cluster.
`memoryCircuitBreaker` contains two fields:
- `enabled`: A Boolean used to turn the Circuit Breaker on or off. Defaults to `false`.
- `maxPercentage`: The threshold that determines whether the Circuit Breaker engages. The value must be in the range `[0, 1]`; any value outside that range is corrected to `1.0`.
The following example turns on the Circuit Breaker and sets the maximum percentage of a query response to 80% of the cluster's storage. You can customize this example for use in the `POST _cluster/settings` request body.
```json
memoryCircuitBreaker: {
enabled: true,
maxPercentage: 0.8
}
```
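As a sketch only, the snippet might be applied through a full request like the following; nesting the object under `transient` is an assumption about how you might scope the setting:
```json
POST _cluster/settings
{
  "transient": {
    "memoryCircuitBreaker": {
      "enabled": true,
      "maxPercentage": 0.8
    }
  }
}
```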

View File

@ -5,9 +5,9 @@ baseurl: "/docs/latest" # the subpath of your site, e.g. /blog
url: "https://opensearch.org" # the base hostname & protocol for your site, e.g. http://example.com url: "https://opensearch.org" # the base hostname & protocol for your site, e.g. http://example.com
permalink: /:path/ permalink: /:path/
opensearch_version: 2.0.0-rc1 opensearch_version: 2.0.0
opensearch_dashboards_version: 2.0.0-rc1 opensearch_dashboards_version: 2.0.0
opensearch_major_minor_version: 2.0-rc1 opensearch_major_minor_version: 2.0
lucene_version: 9_1_0 lucene_version: 9_1_0
# Build settings # Build settings
@ -58,6 +58,9 @@ collections:
monitoring-plugins:
permalink: /:collection/:path/
output: true
notifications-plugin:
permalink: /:collection/:path/
output: true
clients:
permalink: /:collection/:path/
output: true
@ -103,6 +106,9 @@ just_the_docs:
monitoring-plugins:
name: Monitoring plugins
nav_fold: true
notifications-plugin:
name: Notifications plugin
nav_fold: true
clients:
name: Clients and tools
nav_fold: true

View File

@ -28,6 +28,35 @@ If you don't want to use the all-in-one installation options, you can install th
</tr>
</thead>
<tbody>
<tr>
<td>2.0.0.0</td>
<td>
<pre>alertingDashboards 2.0.0.0
anomalyDetectionDashboards 2.0.0.0
ganttChartDashboards 2.0.0.0
indexManagementDashboards 2.0.0.0
notificationsDashboards 2.0.0.0
observabilityDashboards 2.0.0.0
queryWorkbenchDashboards 2.0.0.0
reportsDashboards 2.0.0.0
securityDashboards 2.0.0.0
</pre>
</td>
</tr>
<tr>
<td>1.3.2</td>
<td>
<pre>alertingDashboards 1.3.2.0
anomalyDetectionDashboards 1.3.2.0
ganttChartDashboards 1.3.2.0
indexManagementDashboards 1.3.2.0
observabilityDashboards 1.3.2.0
queryWorkbenchDashboards 1.3.2.0
reportsDashboards 1.3.2.0
securityDashboards 1.3.2.0
</pre>
</td>
</tr>
<tr>
<td>1.3.1</td>
<td>

View File

@ -0,0 +1,44 @@
---
layout: default
title: Search telemetry
nav_order: 30
---
# About search telemetry
You can use search telemetry to analyze search request performance by success or failure in OpenSearch Dashboards. OpenSearch stores telemetry data in the `.kibana_1` index.
Because OpenSearch Dashboards can issue thousands of concurrent search requests, the resulting traffic can place a significant load on an OpenSearch cluster.
OpenSearch clusters perform better with search telemetry turned off.
{: .tip }
## Turn on search telemetry
Search usage telemetry is turned off by default. To turn it on, you need to set `data.search.usageTelemetry.enabled` to `true` in the `opensearch_dashboards.yml` file.
You can find the [OpenSearch Dashboards YAML file](https://github.com/opensearch-project/OpenSearch-Dashboards/blob/main/config/opensearch_dashboards.yml) in the opensearch-project repository on GitHub.
Turning on telemetry in the `opensearch_dashboards.yml` file overrides the default search telemetry setting of `false` in the [Data plugin configuration file](https://github.com/opensearch-project/OpenSearch-Dashboards/blob/main/src/plugins/data/config.ts).
{: .note }
### Turn search telemetry on or off
The following table shows the `data.search.usageTelemetry.enabled` values you can set in `opensearch_dashboards.yml` to turn search telemetry on or off.
OpenSearch Dashboards YAML value | Search telemetry status: on or off
:--- | :---
`true` | On
`false` | Off
`none` | Off
#### Sample opensearch_dashboards.yml with telemetry enabled
This OpenSearch Dashboards YAML file excerpt shows the telemetry setting set to `true` to turn on search telemetry:
```yml
# Set the value of this setting to false to suppress
# search usage telemetry to reduce the load of the OpenSearch cluster.
data.search.usageTelemetry.enabled: true
```

View File

@ -1 +1 @@
message: "This is a pre-release version of OpenSearch 2.0.0. Feel free to try it out and provide feedback. If you are looking for the most recent production-ready release, see the [1.x line](https://opensearch.org/lines/1x.html)" message: "[OpenSearch 2.0 is live 🍾!](/downloads.html)"

View File

@ -7,5 +7,5 @@
"1.1", "1.1",
"1.0" "1.0"
], ],
"latest": "1.3" "latest": "2.0"
} }

View File

@ -254,14 +254,17 @@ Deletes a managed index.
Rolls an alias over to a new index when the managed index meets one of the rollover conditions.
**Important**: ISM checks the conditions for operations on **every execution of the policy** based on the **set interval**, _not_ continuously. The rollover is performed if the value **has reached** or _exceeded_ the configured limit **when the check is performed**. For example, with `min_size` configured to 100 GiB, ISM might check the index at 99 GiB and not perform the rollover. However, if the index has grown past the limit (for example, 105 GiB) by the next check, the operation is performed.
The index format must match the pattern: `^.*-\d+$`. For example, `(logs-000001)`.
Set `index.plugins.index_state_management.rollover_alias` as the alias to rollover.
Parameter | Description | Type | Example | Required
:--- | :--- |:--- |:--- |
`min_size` | The minimum size of the total primary shard storage (not counting replicas) required to roll over the index. For example, if you set `min_size` to 100 GiB and your index has 5 primary shards and 5 replica shards of 20 GiB each, the total size of all primary shards is 100 GiB, so the rollover occurs. See **Important** note above. | `string` | `20gb` or `5mb` | No
`min_primary_shard_size` | The minimum storage size of a **single primary shard** required to roll over the index. For example, if you set `min_primary_shard_size` to 30 GiB and **one of** the primary shards in the index has a size greater than the condition, the rollover occurs. See **Important** note above. | `string` | `20gb` or `5mb` | No
`min_doc_count` | The minimum number of documents required to roll over the index. See **Important** note above. | `number` | `2000000` | No
`min_index_age` | The minimum age required to roll over the index. Index age is the time between its creation and the present. See **Important** note above. | `string` | `5d` or `7h` | No
```json
{
@ -271,6 +274,14 @@ Parameter | Description | Type | Example | Required
}
```
```json
{
"rollover": {
"min_primary_shard_size": "30gb"
}
}
```
```json
{
"rollover": {
@ -497,7 +508,7 @@ For information on writing cron expressions, see [Cron expression reference]({{s
## Error notifications
The `error_notification` operation sends you a notification if your managed index fails.
It notifies a single destination or [notification channel]({{site.url}}{{site.baseurl}}/notifications-plugin/index) with a custom message.
Set up error notifications at the policy level:
@ -515,7 +526,8 @@ Set up error notifications at the policy level:
Parameter | Description | Type | Required
:--- | :--- |:--- |:--- |
`destination` | The destination URL. | `Slack, Amazon Chime, or webhook URL` | Yes if `channel` isn't specified
`channel` | A notification channel's ID. | `string` | Yes if `destination` isn't specified
`message_template` | The text of the message. You can add variables to your messages using [Mustache templates](https://mustache.github.io/mustache.5.html). | `object` | Yes
The destination system **must** return a response; otherwise, the `error_notification` operation throws an error.
@ -571,6 +583,21 @@ The destination system **must** return a response otherwise the `error_notificat
}
```
#### Example 4: Using a notification channel
```json
{
"error_notification": {
"channel": {
"id": "some-channel-config-id"
},
"message_template": {
"source": "The index {% raw %}{{ctx.index}}{% endraw %} failed during policy execution."
}
}
}
```
You can use the same options for `ctx` variables as the [notification](#notification) operation.
## Sample policy with ISM template for auto rollover
@ -672,7 +699,8 @@ After 30 days, the policy moves this index into a `delete` state. The service se
"actions": [ "actions": [
{ {
"rollover": { "rollover": {
"min_index_age": "1d" "min_index_age": "1d",
"min_primary_shard_size": "30gb"
} }
} }
], ],
@ -720,7 +748,11 @@ After 30 days, the policy moves this index into a `delete` state. The service se
}
]
}
],
"ism_template": {
"index_patterns": ["log*"],
"priority": 100
}
}
}
```

View File

@ -0,0 +1,319 @@
---
layout: default
title: Supported Algorithms
has_children: false
nav_order: 100
---
# Supported Algorithms
ML Commons supports various algorithms to help train and predict machine learning (ML) models or test data-driven predictions without a model. This page outlines the algorithms supported by the ML Commons plugin and the API operations they support.
## Common limitation
Except for the Localization algorithm, all of the following algorithms can retrieve a maximum of 10,000 documents from an index as input.
## K-Means
K-Means is a simple and popular unsupervised clustering ML algorithm built on top of the [Tribuo](https://tribuo.org/) library. K-Means randomly chooses initial centroids and then iteratively optimizes their positions until each observation belongs to the cluster with the nearest mean.
### Parameters
Parameter | Type | Description | Default Value
:--- |:--- | :--- | :---
centroids | integer | The number of clusters in which to group the generated data | `2`
iterations | integer | The number of iterations to perform against the data until a mean generates | `10`
distance_type | enum, such as `EUCLIDEAN`, `COSINE`, or `L1` | The type of measurement from which to measure the distance between centroids | `EUCLIDEAN`
### APIs
* [Train]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/#train-model)
* [Predict]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/#predict)
* [Train and predict]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/#train-and-predict)
### Example
The following example uses the Iris Data index to train K-Means synchronously.
```json
POST /_plugins/_ml/_train/kmeans
{
"parameters": {
"centroids": 3,
"iterations": 10,
"distance_type": "COSINE"
},
"input_query": {
"_source": ["petal_length_in_cm", "petal_width_in_cm"],
"size": 10000
},
"input_index": [
"iris_data"
]
}
```
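As a sketch of the corresponding prediction step, the following request applies a trained K-Means model to the same index; `<model_id>` is a placeholder for the ID returned by the Train API, and the body reuses the same `input_query` and `input_index` fields as the training call:
```json
POST /_plugins/_ml/_predict/kmeans/<model_id>
{
  "input_query": {
    "_source": ["petal_length_in_cm", "petal_width_in_cm"],
    "size": 10000
  },
  "input_index": [
    "iris_data"
  ]
}
```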
### Limitations
The training process supports multiple threads, but the number of threads should be less than half the number of CPUs.
## Linear regression
Linear regression maps the linear relationship between inputs and outputs. In ML Commons, the linear regression algorithm is adopted from the public machine learning library [Tribuo](https://tribuo.org/), which offers multidimensional linear regression models. The model supports linear optimizers in training, including popular approaches like linear decay, SQRT_DECAY, [ADA](https://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf), [ADAM](https://tribuo.org/learn/4.1/javadoc/org/tribuo/math/optimisers/Adam.html), and [RMS_DROP](https://tribuo.org/learn/4.1/javadoc/org/tribuo/math/optimisers/RMSProp.html).
### Parameters
Parameter | Type | Description | Default Value
:--- |:--- | :--- | :---
learningRate | Double | The rate at which the gradient moves during descent | 0.01
momentumFactor | Double | The medium-term from which the regressor rises or falls | 0
epsilon | Double | The criteria used to identify a linear model | 1.00E-06
beta1 | Double | The estimated exponential decay for the moment | 0.9
beta2 | Double | The estimated exponential decay for the moment | 0.99
decayRate | Double | The rate at which the model decays exponentially | 0.9
momentumType | MomentumType | The defined stochastic gradient descent (SGD) momentum type that helps accelerate gradient vectors in the right directions, leading to fast convergence | STANDARD
optimizerType | OptimizerType | The optimizer used in the model | SIMPLE_SGD
### APIs
* [Train]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/#train-model)
* [Predict]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/#predict)
### Example
The following example creates a new prediction based on the previously trained linear regression model.
**Request**
```json
POST _plugins/_ml/_predict/LINEAR_REGRESSION/ROZs-38Br5eVE0lTsoD9
{
"parameters": {
"target": "price"
},
"input_data": {
"column_metas": [
{
"name": "A",
"column_type": "DOUBLE"
},
{
"name": "B",
"column_type": "DOUBLE"
}
],
"rows": [
{
"values": [
{
"column_type": "DOUBLE",
"value": 3
},
{
"column_type": "DOUBLE",
"value": 5
}
]
}
]
}
}
```
**Response**
```json
{
"status": "COMPLETED",
"prediction_result": {
"column_metas": [
{
"name": "price",
"column_type": "DOUBLE"
}
],
"rows": [
{
"values": [
{
"column_type": "DOUBLE",
"value": 17.25701855310131
}
]
}
]
}
}
```
### Limitations
ML Commons only supports the linear stochastic gradient descent trainer or optimizer, which cannot effectively map the non-linear relationships in trained data. When used with complicated datasets, the linear stochastic trainer might cause convergence problems and inaccurate results.
## RCF
[Random Cut Forest](https://github.com/aws/random-cut-forest-by-aws) (RCF) is a probabilistic data structure used primarily for unsupervised anomaly detection. Its use also extends to density estimation and forecasting. OpenSearch leverages RCF for anomaly detection. ML Commons supports two new variants of RCF for different use cases:
* Batch RCF: Detects anomalies in non-time series data.
* Fixed in time (FIT) RCF: Detects anomalies in time series data.
### Parameters
#### Batch RCF
Parameter | Type | Description | Default Value
:--- |:--- | :--- | :---
number_of_trees | integer | The number of trees in the forest | 30
sample_size | integer | The sample size used by the stream samplers in the forest | 256
output_after | integer | The number of points required by stream samplers before results return | 32
training_data_size | integer | The size of your training data | Dataset size
anomaly_score_threshold | double | The threshold of the anomaly score | 1.0
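As an illustration only, a Batch RCF training call might look like the following sketch; the `batch_rcf` path segment and the index name are assumptions, while the parameters come from the preceding table:
```json
POST /_plugins/_ml/_train/batch_rcf
{
  "parameters": {
    "number_of_trees": 30,
    "sample_size": 256,
    "output_after": 32,
    "anomaly_score_threshold": 1.0
  },
  "input_index": [
    "my-non-time-series-data"
  ]
}
```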
#### Fit RCF
All parameters are optional except `time_field`.
Parameter | Type | Description | Default Value
:--- |:--- | :--- | :---
number_of_trees | integer | The number of trees in the forest | 30
shingle_size | integer | A shingle, or a consecutive sequence of the most recent records | 8
sample_size | integer | The sample size used by stream samplers in the forest | 256
output_after | integer | The number of points required by stream samplers before results return | 32
time_decay | double | The decay factor used by stream samplers in the forest | 0.0001
anomaly_rate | double | The anomaly rate | 0.005
time_field | string | (**Required**) The time field for RCF to use as time series data | N/A
date_format | string | The date and time format for the time_field field | "yyyy-MM-ddHH:mm:ss"
time_zone | string | The time zone for the time_field field | "UTC"
### APIs
* [Train]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/#train-model)
* [Predict]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/#predict)
* [Train and predict]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/#train-and-predict)
### Limitations
For FIT RCF, you can train the model with historical data and store the trained model in your index. The model will be deserialized and predict new data points when using the Predict API. However, the model in the index will not be refreshed with new data, because the model is fixed in time.
## Anomaly Localization
The Anomaly Localization algorithm finds subset-level information for aggregate data (for example, aggregated over time) that demonstrates the activity of interest, such as spikes, drops, changes, or anomalies. Localization can be applied in different scenarios, such as data exploration or root cause analysis, to expose the contributors driving the activity of interest in the aggregate data.
### Parameters
All parameters are required except `filter_query` and `anomaly_start`.
Parameter | Type | Description | Default Value
:--- | :--- | :--- | :---
index_name | String | The data collection to analyze | N/A
attribute_field_names | List<String> | The fields for entity keys | N/A
aggregations | List<AggregationBuilder> | The fields and aggregation for values | N/A
time_field_name | String | The timestamp field | null
start_time | Long | The beginning of the time range | 0
end_time | Long | The end of the time range | 0
min_time_interval | Long | The minimum time interval/scale for analysis | 0
num_outputs | integer | The maximum number of values from localization/slicing | 0
filter_query | QueryBuilder | (Optional) Reduces the collection of data for analysis | Optional.empty()
anomaly_start | Long | (Optional) The time after which the data will be analyzed | Optional.empty()
### Example
The following example executes Anomaly Localization against an RCA index.
**Request**
```json
POST /_plugins/_ml/_execute/anomaly_localization
{
"index_name": "rca-index",
"attribute_field_names": [
"attribute"
],
"aggregations": [
{
"sum": {
"sum": {
"field": "value"
}
}
}
],
"time_field_name": "timestamp",
"start_time": 1620630000000,
"end_time": 1621234800000,
"min_time_interval": 86400000,
"num_outputs": 10
}
```
**Response**
The API responds with the sum of the contribution and base values per aggregation, every time the algorithm executes in the specified time interval.
```json
{
"results" : [
{
"name" : "sum",
"result" : {
"buckets" : [
{
"start_time" : 1620630000000,
"end_time" : 1620716400000,
"overall_aggregate_value" : 65.0
},
{
"start_time" : 1620716400000,
"end_time" : 1620802800000,
"overall_aggregate_value" : 75.0,
"entities" : [
{
"key" : [
"attr0"
],
"contribution_value" : 1.0,
"base_value" : 2.0,
"new_value" : 3.0
},
{
"key" : [
"attr1"
],
"contribution_value" : 1.0,
"base_value" : 3.0,
"new_value" : 4.0
},
{
...
},
{
"key" : [
"attr8"
],
"contribution_value" : 6.0,
"base_value" : 10.0,
"new_value" : 16.0
},
{
"key" : [
"attr9"
],
"contribution_value" : 6.0,
"base_value" : 11.0,
"new_value" : 17.0
}
]
}
]
}
}
]
}
```
### Limitations
The Localization algorithm can only be executed directly. Therefore, it cannot be used with the ML Commons Train and Predict APIs.

View File

@ -232,7 +232,7 @@ The API returns the following:
## Predict
ML Commons can predict new data with your trained model either from indexed data or a data frame. To use the Predict API, the `model_id` is required.
```json
POST /_plugins/_ml/_predict/<algorithm_name>/<model_id>

View File

@ -10,7 +10,7 @@ has_toc: false
ML Commons for OpenSearch eases the development of machine learning features by providing a set of common machine learning (ML) algorithms through transport and REST API calls. Those calls choose the right nodes and resources for each ML request and monitor ML tasks to ensure uptime. This allows you to leverage existing open-source ML algorithms and reduce the effort required to develop new ML features.
Interaction with the ML Commons plugin occurs through either the [REST API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api) or [AD]({{site.url}}{{site.baseurl}}/observability-plugin/ppl/commands#ad) and [kmeans]({{site.url}}{{site.baseurl}}/observability-plugin/ppl/commands#kmeans) Piped Processing Language (PPL) commands.
Models [trained]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api#train-model) through the ML Commons plugin support model-based algorithms such as kmeans. After you've trained a model enough so that it meets your precision requirements, you can apply the model to [predict]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api#predict) new data safely.

View File

@ -28,7 +28,7 @@ Term | Definition
:--- | :---
Monitor | A job that runs on a defined schedule and queries OpenSearch indexes. The results of these queries are then used as input for one or more *triggers*.
Trigger | Conditions that, if met, generate *alerts*.
Tag | A label that can be applied to multiple queries to combine them with the logical OR operation in a per document monitor. You cannot use tags with other monitor types.
Alert | An event associated with a trigger. When an alert is created, the trigger performs *actions*, which can include sending a notification.
Action | The information that you want the monitor to send out after being triggered. Actions have a *destination*, a message subject, and a message body.
Destination | A reusable location for an action. Supported locations are Amazon Chime, Email, Slack, or custom webhook.
@ -362,7 +362,6 @@ Variable | Data Type | Description
:--- | :--- | :---
`ctx.trigger.actions.id` | String | The action's ID.
`ctx.trigger.actions.name` | String | The action's name.
`ctx.trigger.actions.message_template.source` | String | The message to send in the alert.
`ctx.trigger.actions.message_template.lang` | String | The scripting language used to define the message. Must be Mustache.
`ctx.trigger.actions.throttle_enabled` | Boolean | Whether throttling is enabled for this trigger. See [adding actions](#add-actions) for more information about throttling.
@ -391,13 +390,13 @@ Variable | Data Type | Description
## Add actions
The final step in creating a monitor is to add one or more actions. Actions send notifications when trigger conditions are met. See the [Notifications plugin]({{site.url}}{{site.baseurl}}/notifications-plugin/index) to see what communication channels are supported.
If you don't want to receive notifications for alerts, you don't have to add actions to your triggers. Instead, you can periodically check OpenSearch Dashboards.
{: .tip }
1. Specify a name for the action.
1. Choose a [notification channel]({{site.url}}{{site.baseurl}}/notifications-plugin/index).
1. Add a subject and body for the message.
You can add variables to your messages using [Mustache templates](https://mustache.github.io/mustache.5.html). You have access to `ctx.action.name`, the name of the current action, as well as all [trigger variables](#available-variables).
@ -408,7 +407,7 @@ If you don't want to receive notifications for alerts, you don't have to add act
{% raw %}{ "text": "Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue. - Trigger: {{ctx.trigger.name}} - Severity: {{ctx.trigger.severity}} - Period start: {{ctx.periodStart}} - Period end: {{ctx.periodEnd}}" }{% endraw %} {% raw %}{ "text": "Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue. - Trigger: {{ctx.trigger.name}} - Severity: {{ctx.trigger.severity}} - Period start: {{ctx.periodStart}} - Period end: {{ctx.periodEnd}}" }{% endraw %}
``` ```
In this case, the message content must conform to the `Content-Type` header in the [custom webhook](#create-destinations). In this case, the message content must conform to the `Content-Type` header in the [custom webhook]({{site.url}}{{site.baseurl}}/notifcations-plugin/index).
1. If you're using a bucket-level monitor, you can choose whether the monitor should perform an action for each execution or for each alert. 1. If you're using a bucket-level monitor, you can choose whether the monitor should perform an action for each execution or for each alert.
1. (Optional) Use action throttling to limit the number of notifications you receive within a given span of time. 1. (Optional) Use action throttling to limit the number of notifications you receive within a given span of time.
@ -433,6 +432,24 @@ After an action sends a message, the content of that message has left the purvie
If you want to use the `ctx.results` variable in a message, use `{% raw %}{{ctx.results.0}}{% endraw %}` rather than `{% raw %}{{ctx.results[0]}}{% endraw %}`. This difference is due to how Mustache handles bracket notation.
{: .note }
### Questions about destinations
Q: What plugins do I need installed besides Alerting?
A: To continue using the notification action in the Alerting plugin, you need to install the backend plugins `notifications-core` and `notifications`. You can also install the Notifications Dashboards plugin to manage Notification channels via OpenSearch Dashboards.
Q: Can I still create destinations?
A: No. Destinations have been deprecated and can no longer be created or edited.
Q: Will I need to move my destinations to the Notifications plugin?
A: No. For users who upgrade, a background process automatically moves destinations to notification channels. These channels have the same IDs as the destinations, and monitor execution chooses the correct ID, so you don't have to make any changes to the monitor's definition. After migration, the destinations are deleted.
Q: What happens if any destinations fail to migrate?
A: If a destination fails to migrate, the monitor continues using it until the monitor is migrated to a notification channel. You don't need to do anything in this case.
Q: Do I need to install the Notifications plugins if monitors can still use destinations?
A: Yes. The fallback on destinations only prevents failures in sending messages if migration fails; the Notifications plugin is what actually sends the message. If the Notifications plugin is not installed, the action fails.
---

View File

@ -0,0 +1,401 @@
---
layout: default
title: API
nav_order: 50
has_children: false
redirect_from:
---
# Notifications API
If you want to programmatically define your notification channels and sources for versioning and reuse, you can use the Notifications REST API to define, configure, and delete notification channels and send test messages.
---
#### Table of contents
1. TOC
{:toc}
---
## List supported channel configurations
To retrieve a list of all supported notification configuration types, send a GET request to the `features` resource.
#### Sample Request
```json
GET /_plugins/_notifications/features
```
#### Sample Response
```json
{
"allowed_config_type_list" : [
"slack",
"chime",
"webhook",
"email",
"sns",
"ses_account",
"smtp_account",
"email_group"
],
"plugin_features" : {
"tooltip_support" : "true"
}
}
```
## List all notification configurations
To retrieve a list of all notification configurations, send a GET request to the `configs` resource.
#### Sample Request
```json
GET _plugins/_notifications/configs
```
#### Sample Response
```json
{
"start_index" : 0,
"total_hits" : 2,
"total_hit_relation" : "eq",
"config_list" : [
{
"config_id" : "sample-id",
"last_updated_time_ms" : 1652760532774,
"created_time_ms" : 1652760532774,
"config" : {
"name" : "Sample Slack Channel",
"description" : "This is a Slack channel",
"config_type" : "slack",
"is_enabled" : true,
"slack" : {
"url" : "https://sample-slack-webhook"
}
}
},
{
"config_id" : "sample-id2",
"last_updated_time_ms" : 1652760735380,
"created_time_ms" : 1652760735380,
"config" : {
"name" : "Test chime channel",
"description" : "A test chime channel",
"config_type" : "chime",
"is_enabled" : true,
"chime" : {
"url" : "https://sample-chime-webhook"
}
}
}
]
}
```
To filter the notification configurations that this request returns, you can refine your query with the following optional query parameters.
Parameter | Description
:--- | :---
config_id | Specifies the channel identifier.
config_id_list | Specifies a comma-separated list of channel IDs.
from_index | The starting index to search from.
max_items | The maximum number of items to return in your request.
sort_order | Specifies the direction to sort results in. Valid options are `asc` and `desc`.
sort_field | The field used to sort results.
last_updated_time_ms | The Unix time in milliseconds of when the channel was last updated.
created_time_ms | The Unix time in milliseconds of when the channel was created.
is_enabled | Indicates whether the channel is enabled.
config_type | The channel type. Valid options are `sns`, `slack`, `chime`, `webhook`, `smtp_account`, `ses_account`, `email_group`, and `email`.
name | The channel's name.
description | The channel's description.
email.email_account_id | The sender email addresses the channel uses.
email.email_group_id_list | The email groups the channel uses.
email.recipient_list | The channel's recipient list.
email_group.recipient_list | The channel's list of email recipient groups.
smtp_account.method | The email encryption method.
slack.url | The Slack channel's URL.
chime.url | The Amazon Chime connection's URL.
webhook.url | The webhook's URL.
smtp_account.host | The domain of the SMTP account.
smtp_account.from_address | The email account's sender address.
smtp_account.method | The SMTP account's encryption method.
sns.topic_arn | The Amazon Simple Notification Service (SNS) topic's ARN.
sns.role_arn | The Amazon SNS topic's role ARN.
ses_account.region | The Amazon Simple Email Service (SES) account's AWS Region.
ses_account.role_arn | The Amazon SES account's role ARN.
ses_account.from_address | The Amazon SES account's sender email address.
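For example, the following hypothetical request combines two of the parameters above to list only Slack channels and cap the number of results:
```json
GET _plugins/_notifications/configs?config_type=slack&max_items=5
```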
## Create channel configuration
To create a notification channel configuration, send a POST request to the `configs` resource.
#### Sample Request
```json
POST /_plugins/_notifications/configs/
{
"config_id": "sample-id",
"name": "sample-name",
"config": {
"name": "Sample Slack Channel",
"description": "This is a Slack channel",
"config_type": "slack",
"is_enabled": true,
"slack": {
"url": "https://sample-slack-webhook"
}
}
}
```
The create channel API operation accepts the following fields in its request body:
Field | Data Type | Description | Required
:--- | :--- | :--- | :---
config_id | String | The configuration's custom ID. | No
config | Object | Contains all relevant information, such as channel name, configuration type, and plugin source. | Yes
name | String | Name of the channel. | Yes
description | String | The channel's description. | No
config_type | String | The destination of your notification. Valid options are `sns`, `slack`, `chime`, `webhook`, `smtp_account`, `ses_account`, `email_group`, and `email`. | Yes
is_enabled | Boolean | Indicates whether the channel is enabled for sending and receiving notifications. Default is true. | No
The create channel operation accepts multiple `config_types` as possible notification destinations, so follow the format for your preferred `config_type`.
```json
"sns": {
"topic_arn": "<arn>",
"role_arn": "<arn>" //optional
}
"slack": {
"url": "https://sample-chime-webhoook"
}
"chime": {
"url": "https://sample-amazon-chime-webhoook"
}
"webhook": {
"url": "https://custom-webhook-test-url.com:8888/test-path?params1=value1&params2=value2"
}
"smtp_account": {
"host": "test-host.com",
"port": 123,
"method": "start_tls",
"from_address": "test@email.com"
}
"ses_account": {
"region": "us-east-1",
"role_arn": "arn:aws:iam::012345678912:role/NotificationsSESRole",
"from_address": "test@email.com"
}
"email_group": { //Email recipient group
"recipient_list": [
{
"recipient": "test-email1@test.com"
},
{
"recipient": "test-email2@test.com"
}
]
}
"email": { //The channel that sends emails
"email_account_id": "<smtp or ses account config id>",
"recipient_list": [
{
"recipient": "custom.email@test.com"
}
],
"email_group_id_list": []
}
```
The following example demonstrates how to create a channel using email as a `config_type`:
```json
POST /_plugins/_notifications/configs/
{
"id": "sample-email-id",
"name": "sample-name",
"config": {
"name": "Sample Email Channel",
"description": "Sample email description",
"config_type": "email",
"is_enabled": true,
"email": {
"email_account_id": "<email_account_id>",
"recipient_list": [
"sample@email.com"
]
}
}
}
```
#### Sample Response
```json
{
"config_id" : "<config_id>"
}
```
## Get channel configuration
To get a channel configuration by `config_id`, send a GET request and specify the `config_id` as a path parameter.
#### Sample Request
```json
GET _plugins/_notifications/configs/<config_id>
```
#### Sample Response
```json
{
"start_index" : 0,
"total_hits" : 1,
"total_hit_relation" : "eq",
"config_list" : [
{
"config_id" : "sample-id",
"last_updated_time_ms" : 1652760532774,
"created_time_ms" : 1652760532774,
"config" : {
"name" : "Sample Slack Channel",
"description" : "This is a Slack channel",
"config_type" : "slack",
"is_enabled" : true,
"slack" : {
"url" : "https://sample-slack-webhook"
}
}
}
]
}
```
## Update channel configuration
To update a channel configuration, send a PUT request to the `configs` resource and specify the channel's `config_id` as a path parameter. Specify the new configuration details in the request body.
#### Sample Request
```json
PUT _plugins/_notifications/configs/<config_id>
{
"config": {
"name": "Slack Channel",
"description": "This is an updated channel configuration",
"config_type": "slack",
"is_enabled": true,
"slack": {
"url": "https://hooks.slack.com/sample-url"
}
}
}
```
#### Sample Response
```json
{
"config_id" : "<config_id>"
}
```
## Delete channel configuration
To delete a channel configuration, send a DELETE request to the `configs` resource and specify the `config_id` as a path parameter.
#### Sample Request
```json
DELETE /_plugins/_notifications/configs/<config_id>
```
#### Sample Response
```json
{
"delete_response_list" : {
"<config_id>" : "OK"
}
}
```
You can also submit a comma-separated list of channel IDs you want to delete, and OpenSearch deletes all of the specified notification channels.
#### Sample Request
```json
DELETE /_plugins/_notifications/configs/?config_id_list=<config_id1>,<config_id2>,<config_id3>...
```
#### Sample Response
```json
{
"delete_response_list" : {
"<config_id1>" : "OK",
"<config_id2>" : "OK",
"<config_id3>" : "OK"
}
}
```
## Send test notification
To send a test notification, send a GET request to `/feature/test/` and specify the channel configuration's `config_id` as a path parameter.
#### Sample Request
```json
GET _plugins/_notifications/feature/test/<config_id>
```
#### Sample Response
```json
{
"event_source" : {
"title" : "Test Message Title-0Jnlh4ABa4TCWn5C5H2G",
"reference_id" : "0Jnlh4ABa4TCWn5C5H2G",
"severity" : "info",
"tags" : [ ]
},
"status_list" : [
{
"config_id" : "0Jnlh4ABa4TCWn5C5H2G",
"config_type" : "slack",
"config_name" : "sample-id",
"email_recipient_status" : [ ],
"delivery_status" : {
"status_code" : "200",
"status_text" : """<!doctype html>
<html>
<head>
</head>
<body>
<div>
<h1>Example Domain</h1>
<p>Sample paragraph.</p>
<p><a href="sample.example.com">TO BE OR NOT TO BE, THAT IS THE QUESTION</a></p>
</div>
</body>
</html>
"""
}
}
]
}
```

View File

@ -0,0 +1,136 @@
---
layout: default
title: Notifications
nav_order: 1
has_children: false
redirect_from:
- /notifications-plugin/
---
# Notifications
The Notifications plugin provides a central location for all of your notifications from OpenSearch plugins. Using the plugin, you can configure which communication service you want to use and view relevant statistics and troubleshooting information. Currently, the Alerting and ISM plugins are integrated with the Notifications plugin.
You can use either OpenSearch Dashboards or the REST API to configure notifications. Dashboards offers a more organized way of selecting a channel type and selecting which OpenSearch plugin sources you want to use, whereas the REST API lets you programmatically define your notification channels for better versioning and reuse later on.
1. Use the Dashboards UI to first create a channel that receives notifications from other plugins. Supported communication channels include Amazon Chime, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Email Service (Amazon SES), email through SMTP, Slack, and custom webhooks. After you've configured your channel and plugin sources, send messages and start tracking your notifications from the Notifications plugin's dashboard.
2. Use the Notifications REST API to configure all of your channel's settings. To use the API, you must have your notification's name, description, channel type, which OpenSearch plugins to use as sources, and other associated URLs or groups.
## Create a channel
In OpenSearch Dashboards, choose **Notifications**, **Channels**, and **Create channel**.
1. In the **Name and description** section, specify a name and optional description for your channel.
2. In the **Configurations** section, select the channel type and enter the necessary information for each type. For more information about configuring a channel that uses Amazon SNS or email, refer to the sections below. If you want to use Amazon Chime or Slack, you need to specify the webhook URL. For more information about using webhooks, see the documentation for [Slack](https://api.slack.com/messaging/webhooks) and [Amazon Chime](https://docs.aws.amazon.com/chime/latest/ug/webhooks.html).
If you want to use custom webhooks, you must specify more information: parameters and headers. For example, if your endpoint requires basic authentication, you might need to add a header with an authorization key and a value of `Basic <Base64-encoded-credential-string>`. You might also need to change `Content-Type` to whatever your webhook requires. Popular values are `application/json`, `application/xml`, and `text/plain`.
This information is stored in plain text in the OpenSearch cluster. We will improve this design in the future, but for now, the encoded credentials (which are neither encrypted nor hashed) might be visible to other OpenSearch users.
3. In the **Availability** section, select the OpenSearch plugins you want to use with the notification channel.
4. Choose **Create**.
### Amazon SNS as a channel type
OpenSearch supports Amazon SNS for notifications. This integration with Amazon SNS means that, in addition to the other channel types, the Notifications plugin can send email messages, text messages, and even run AWS Lambda functions using SNS topics. For more information about Amazon SNS, see the [Amazon Simple Notification Service Developer Guide](https://docs.aws.amazon.com/sns/latest/dg/welcome.html).
The Notifications plugin currently supports two ways to authenticate users:
1. Provide the user with full access to Amazon SNS.
2. Let the user assume an AWS Identity and Access Management (IAM) role that has permissions to access Amazon SNS. Once you configure the notification channel to use the right Amazon SNS permissions, select the OpenSearch plugins that can trigger notifications.
### Provide full Amazon SNS access permissions
If you want to provide full Amazon SNS access to the IAM user, ensure that the user has the following permissions:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"sns:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
```
### Assuming an IAM role with Amazon SNS permissions
If you want to let the user send notifications without directly having full permissions to Amazon SNS, let the user assume a role that does have the necessary permissions.
The IAM user must have the following permissions to assume a role:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:Describe*",
"iam:ListRoles",
"sts:AssumeRole"
],
"Resource": "*"
}
]
}
```
Then attach the following trust policy to the role so that the IAM user can actually assume it:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<arn_number>:user/<iam_username>",
},
"Action": "sts:AssumeRole"
}
]
}
```
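With the role in place, you can create the channel through the Notifications REST API described earlier; the following sketch uses hypothetical ARNs:
```json
POST /_plugins/_notifications/configs/
{
  "config": {
    "name": "Sample SNS Channel",
    "description": "A channel that publishes to an Amazon SNS topic",
    "config_type": "sns",
    "is_enabled": true,
    "sns": {
      "topic_arn": "arn:aws:sns:us-east-1:123456789012:sample-topic",
      "role_arn": "arn:aws:iam::123456789012:role/NotificationsSNSRole"
    }
  }
}
```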
## Email as a channel type
To send or receive notifications with email, choose **Email** as the channel type. Next, select at least one sender and default recipient. To send notifications to more than a few people at a time, specify multiple email addresses or select a recipient group. If the Notifications plugin doesn't currently have the necessary senders or groups, you can add them by first selecting **SMTP sender** and then choosing **Create SMTP sender** or **Create recipient group**. Choose **SES sender** to use Amazon Simple Email Service (Amazon SES).
### Create email sender
1. Specify a unique name to associate with the sender.
2. Enter an email address and, if applicable, its host (for example, smtp.gmail.com) and the port. If you're using Amazon SES, enter the IAM role Amazon Resource Name (ARN) of the AWS account to send notifications from, along with the AWS Region.
3. Choose an encryption method. Most email providers require Secure Sockets Layer (SSL) or Transport Layer Security (TLS), which require a user name and password in the OpenSearch keystore. See [Authenticate sender account](#authenticate-sender-account) to learn more. Selecting an encryption method is only applicable if you're creating an SMTP sender.
4. Choose **Create** to save the configuration and create the sender. You can create a sender before you add your credentials to the OpenSearch keystore; however, you must [authenticate each sender account](#authenticate-sender-account) before you use the sender in your channel configuration.
### Create email recipient group
1. After choosing **Create recipient group**, enter a unique name to associate with the email group and an optional description.
2. Select or enter the email addresses you want to add to the recipient group.
3. Choose **Create**.
### Authenticate sender account
If your email provider requires SSL or TLS, you must authenticate each sender account before you can send an email. Enter the sender account credentials in the OpenSearch keystore using the command line interface (CLI). Run the following commands (in your OpenSearch directory) to enter your username and password. The &lt;sender_name&gt; is the name you entered for **Sender** earlier.
```bash
./bin/opensearch-keystore add opensearch.notifications.core.email.<sender_name>.username
./bin/opensearch-keystore add opensearch.notifications.core.email.<sender_name>.password
```
To change or update your credentials (after you've added them to the keystore on every node), call the reload API to automatically update those credentials without restarting OpenSearch.
```json
POST _nodes/reload_secure_settings
{
"secure_settings_password": "1234"
}
```

View File

@ -74,7 +74,6 @@ You don't mark settings in `opensearch.yml` as persistent or transient, and sett
```yml
cluster.name: my-application
action.auto_create_index: true
```
The demo configuration includes a number of settings for the security plugin that you should modify before using OpenSearch for a production workload. To learn more, see [Security]({{site.url}}{{site.baseurl}}/security-plugin/).

View File

@ -5,15 +5,18 @@ parent: Install OpenSearch
nav_order: 2
---
# Operating system and JVM compatibility # Operating system compatibility
- We recommend installing OpenSearch on RHEL- or Debian-based Linux distributions that use [systemd](https://en.wikipedia.org/wiki/Systemd), such as CentOS, Amazon Linux 2, and Ubuntu (LTS). OpenSearch should work on many Linux distributions, but we only test a handful. We recommend installing OpenSearch on Red Hat Enterprise Linux (RHEL) or Debian-based Linux distributions that use [systemd](https://en.wikipedia.org/wiki/Systemd), such as CentOS, Amazon Linux 2, and Ubuntu Long-Term Support (LTS). OpenSearch should work on most Linux distributions, but we only test a handful. We recommend RHEL 7 or 8, CentOS 7 or 8, Amazon Linux 2, Ubuntu 16.04, 18.04, or 20.04 for any version of OpenSearch.
- The OpenSearch tarball ships with a compatible version of Java in the `jdk` directory. To find its version, run `./jdk/bin/java -version`. For example, the OpenSearch 1.0.0 tarball ships with Java 15.0.1+9 (non-LTS), while OpenSearch 1.3.0 includes Java 11.0.14.1+1 (LTS).
- OpenSearch 1.0 to 1.2.4 is built and tested with Java 15, while OpenSearch 1.3.0 is built and tested with Java 8, 11 and 14.
To use a different Java installation, set the `OPENSEARCH_JAVA_HOME` or `JAVA_HOME` environment variable to the Java install location. We recommend Java 11 (LTS), but OpenSearch also works with Java 8. # Java compatibility
OpenSearch version | Compatible Java versions | Recommended operating systems The OpenSearch distribution for Linux ships with a compatible [Adoptium JDK](https://adoptium.net/) version of Java in the `jdk` directory. To find the JDK version, run `./jdk/bin/java -version`. For example, the OpenSearch 1.0.0 tarball ships with Java 15.0.1+9 (non-LTS), OpenSearch 1.3.0 ships with Java 11.0.14.1+1 (LTS), and OpenSearch 2.0.0 ships with Java 17.0.2+8 (LTS). OpenSearch is tested with all compatible Java versions.
:--- | :--- | :---
1.0 - 1.2.x | 11, 15 | Red Hat Enterprise Linux 7, 8; CentOS 7, 8; Amazon Linux 2; Ubuntu 16.04, 18.04, 20.04 OpenSearch Version | Compatible Java Versions | Bundled Java Version
1.3.x | 8, 11, 14 | Red Hat Enterprise Linux 7, 8; CentOS 7, 8; Amazon Linux 2; Ubuntu 16.04, 18.04, 20.04 :---------- | :-------- | :-----------
1.0 - 1.2.x | 11, 15 | 15.0.1+9
1.3.x | 8, 11, 14 | 11.0.14.1+1
2.0.0 | 11, 17 | 17.0.2+8
To use a different Java installation, set the `OPENSEARCH_JAVA_HOME` or `JAVA_HOME` environment variable to the Java install location.
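For example, a minimal sketch of pointing OpenSearch at a system JDK; the path below is hypothetical, so substitute your actual install location:
```bash
# Hypothetical JDK path; adjust to your installation
export OPENSEARCH_JAVA_HOME=/usr/lib/jvm/java-17-openjdk
"$OPENSEARCH_JAVA_HOME/bin/java" -version  # confirm the version OpenSearch will use
```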

View File

@@ -64,6 +64,27 @@ bin/opensearch-plugin list
</tr>
</thead>
<tbody>
<tr>
<td>2.0.0.0</td>
<td>
<pre>opensearch-alerting 2.0.0.0
opensearch-anomaly-detection 2.0.0.0
opensearch-asynchronous-search 2.0.0.0
opensearch-cross-cluster-replication 2.0.0.0
opensearch-index-management 2.0.0.0
opensearch-job-scheduler 2.0.0.0
opensearch-knn 2.0.0.0
opensearch-ml 2.0.0.0
opensearch-notifications 2.0.0.0
opensearch-observability 2.0.0.0
opensearch-performance-analyzer 2.0.0.0
opensearch-reports-scheduler 2.0.0.0
opensearch-security 2.0.0.0
opensearch-sql 2.0.0.0
</pre>
</td>
</tr>
<tr>
<td>2.0.0.0-rc1</td>
<td>
@@ -82,6 +103,25 @@ opensearch-security 2.0.0.0-rc1
opensearch-sql 2.0.0.0-rc1
</pre>
</td>
</tr>
<tr>
<td>1.3.2</td>
<td>
<pre>opensearch-alerting 1.3.2.0
opensearch-anomaly-detection 1.3.2.0
opensearch-asynchronous-search 1.3.2.0
opensearch-cross-cluster-replication 1.3.2.0
opensearch-index-management 1.3.2.0
opensearch-job-scheduler 1.3.2.0
opensearch-knn 1.3.2.0
opensearch-ml 1.3.2.0
opensearch-observability 1.3.2.0
opensearch-performance-analyzer 1.3.2.0
opensearch-reports-scheduler 1.3.2.0
opensearch-security 1.3.2.0
opensearch-sql 1.3.2.0
</pre>
</td>
</tr>
<tr>
<td>1.3.1</td>

_opensearch/install/rpm.md Normal file
View File

@@ -0,0 +1,157 @@
---
layout: default
title: RPM
parent: Install OpenSearch
nav_order: 51
---
# RPM
The RPM Package Manager (RPM) installation provides everything you need to run OpenSearch inside Red Hat or Red Hat-based Linux distributions.
The RPM packages are supported on CentOS 7 and 8 and Amazon Linux 2.
There are two methods for installing OpenSearch from RPM packages:
## Manual method
1. Download the RPM package directly from the [OpenSearch downloads page](https://opensearch.org/downloads.html){:target='\_blank'}. The RPM package is available for both `x64` and `arm64` architectures.
2. Import the public GPG key. This key verifies that your OpenSearch package is signed.
```bash
sudo rpm --import https://artifacts.opensearch.org/publickeys/opensearch.pgp
```
3. On your host, use `sudo yum install` or `sudo rpm -ivh` to install the package.
**x64**
```bash
sudo yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm
sudo yum install opensearch-dashboards-{{site.opensearch_version}}-linux-x64.rpm
```
```bash
sudo rpm -ivh opensearch-{{site.opensearch_version}}-linux-x64.rpm
sudo rpm -ivh opensearch-dashboards-{{site.opensearch_version}}-linux-x64.rpm
```
**arm64**
```bash
sudo yum install opensearch-{{site.opensearch_version}}-linux-arm64.rpm
sudo yum install opensearch-dashboards-{{site.opensearch_version}}-linux-arm64.rpm
```
```bash
sudo rpm -ivh opensearch-{{site.opensearch_version}}-linux-arm64.rpm
sudo rpm -ivh opensearch-dashboards-{{site.opensearch_version}}-linux-arm64.rpm
```
Once complete, you can run OpenSearch inside your distribution.
## YUM method
YUM, the package management tool for RPM-based distributions, lets you pull the OpenSearch packages directly from a repository.
1. Create a repository file for both OpenSearch and OpenSearch Dashboards:
```bash
sudo curl -SL https://artifacts.opensearch.org/releases/bundle/opensearch/2.x/opensearch-{{site.opensearch_version}}.repo -o /etc/yum.repos.d/opensearch-{{site.opensearch_version}}.repo
```
```bash
sudo curl -SL https://artifacts.opensearch.org/releases/bundle/opensearch-dashboards/{{site.opensearch_version}}/opensearch-dashboards-{{site.opensearch_version}}.repo -o /etc/yum.repos.d/opensearch-dashboards-{{site.opensearch_version}}.repo
```
To verify that the repos appear in your repo list, use `sudo yum repolist`.
2. Clean your YUM cache to ensure a smooth installation:
```bash
sudo yum clean all
```
3. With the repository file downloaded, list all available versions of OpenSearch:
```bash
sudo yum list | grep opensearch
```
4. Choose the version of OpenSearch you want to install:
```bash
sudo yum install opensearch
sudo yum install opensearch-dashboards
```
Unless you specify a version, YUM installs the highest available minor version of OpenSearch.
To install a specific version of OpenSearch:
```bash
sudo yum install 'opensearch-{{site.opensearch_version}}'
```
5. During installation, the installer pauses so that you can confirm that the GPG key belongs to the OpenSearch project. Verify that the `Fingerprint` matches the following:
```bash
Fingerprint: c5b7 4989 65ef d1c2 924b a9d5 39d3 1987 9310 d3fc
```
If correct, enter `yes` or `y`. The OpenSearch installation continues.
Once complete, you can run OpenSearch inside your distribution.
## Run OpenSearch
1. Run OpenSearch and OpenSearch Dashboards using `systemctl`.
```bash
sudo systemctl start opensearch.service
sudo systemctl start opensearch-dashboards.service
```
2. Send requests to the server to verify that OpenSearch is running:
```bash
curl -XGET https://localhost:9200 -u 'admin:admin' --insecure
curl -XGET https://localhost:9200/_cat/plugins?v -u 'admin:admin' --insecure
```
3. To stop running OpenSearch, enter:
```bash
sudo systemctl stop opensearch.service
sudo systemctl stop opensearch-dashboards.service
```
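If you want OpenSearch to start automatically after a reboot, the standard systemd pattern applies. A minimal sketch, assuming the unit names used above:
```bash
# Enable the services so systemd starts them at boot
sudo systemctl enable opensearch.service
sudo systemctl enable opensearch-dashboards.service

# Check the current state of a service at any time
sudo systemctl status opensearch.service
```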
## *(Optional)* Set up Performance Analyzer
When enabled, the Performance Analyzer plugin collects data related to the performance of your OpenSearch instance. To start the Performance Analyzer plugin, enter:
```bash
sudo systemctl start opensearch-performance-analyzer.service
```
To stop the Performance Analyzer, enter:
```bash
sudo systemctl stop opensearch-performance-analyzer.service
```
## Upgrade RPM
You can upgrade your RPM-based OpenSearch instance either manually or through YUM.
### Manual
Download the new version of OpenSearch you want to use, and then use `rpm -Uvh` to upgrade.
### YUM
To upgrade to the latest version of OpenSearch with YUM, use `sudo yum update`. You can also upgrade to a specific OpenSearch version by using `sudo yum update opensearch-<version-number>`.
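A minimal sketch of both upgrade paths; the package file name below is illustrative:
```bash
# Manual: upgrade in place from a downloaded package
sudo rpm -Uvh opensearch-{{site.opensearch_version}}-linux-x64.rpm

# YUM: upgrade to the latest version available in the repository
sudo yum update opensearch
```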

View File

@@ -42,13 +42,11 @@ local | Boolean | Whether to return information from the local node only instead
cluster_manager_timeout | Time | The amount of time to wait for a connection to the cluster manager node. Default is 30 seconds.
timeout | Time | The amount of time to wait for a response. If the timeout expires, the request fails. Default is 30 seconds.
wait_for_active_shards | String | Wait until the specified number of shards is active before returning a response. `all` for all shards. Default is `0`.
wait_for_events | Enum | Wait until all currently queued events with the given priority are processed. Supported values are `immediate`, `urgent`, `high`, `normal`, `low`, and `languid`.
wait_for_no_relocating_shards | Boolean | Whether to wait until there are no relocating shards in the cluster. Default is false.
wait_for_no_initializing_shards | Boolean | Whether to wait until there are no initializing shards in the cluster. Default is false.
wait_for_status | Enum | Wait until the cluster health reaches the specified status or better. Supported values are `green`, `yellow`, and `red`.
<!-- wait_for_nodes | string | Wait until the specified number of nodes is available. Also supports operators <=, >=, <, and >
# Not working properly when tested -->
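For example, the following request (parameter values are illustrative) waits up to 50 seconds for the cluster to reach at least `yellow` status before responding:
```json
GET _cluster/health?wait_for_status=yellow&timeout=50s
```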
## Response
@@ -59,6 +57,7 @@ wait_for_status | Enum | Wait until the cluster is in a specific state or better
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"discovered_master" : true,
"active_primary_shards" : 6,
"active_shards" : 12,
"relocating_shards" : 0,
@@ -71,3 +70,8 @@ wait_for_status | Enum | Wait until the cluster is in a specific state or better
"active_shards_percent_as_number" : 100.0
}
```
## Required permissions
If you use the security plugin, make sure you have the appropriate permissions: `cluster:monitor/health`.
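For example, a minimal role granting this permission might look like the following in the security plugin's `roles.yml` (the role name `health_monitor` is hypothetical):
```yml
# Hypothetical role that allows calling the cluster health API
health_monitor:
  cluster_permissions:
    - "cluster:monitor/health"
```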

View File

@@ -29,7 +29,7 @@ In addition, verify and add the distinguished names (DNs) of each follower clust
First, get the node's DN from each follower cluster:
```bash
curl -XGET -k -u 'admin:admin' 'https://localhost:9200/_opendistro/_security/api/ssl/certs?pretty'
{
"transport_certificates_list": [

View File

@@ -1299,7 +1299,7 @@ Retrieves the cluster's security certificates.
#### Request
```
GET _plugins/_security/api/ssl/certs
```
#### Sample response

View File

@@ -85,7 +85,8 @@ This table lists substitutions.
Term | Replaced with
:--- | :---
`${user.name}` | Username.
`${user.roles}` | A comma-separated, quoted list of user backend roles.
`${user.securityRoles}` | A comma-separated, quoted list of user security roles.
`${attr.<TYPE>.<NAME>}` | An attribute with name `<NAME>` defined for a user. `<TYPE>` is `internal`, `jwt`, `proxy` or `ldap`
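For example, a document-level security query might use the `${user.name}` substitution to restrict results to documents that the current user owns; the `owner` field name is illustrative:
```json
{
  "bool": {
    "must": {
      "match": {
        "owner": "${user.name}"
      }
    }
  }
}
```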

View File

@@ -281,6 +281,7 @@ Name | Description
`opensearch_security.openid.header` | HTTP header name of the JWT token. Optional. Default is `Authorization`.
`opensearch_security.openid.logout_url` | The logout URL of your IdP. Optional. Only necessary if your IdP does not publish the logout URL in its metadata.
`opensearch_security.openid.base_redirect_url` | The base of the redirect URL that will be sent to your IdP. Optional. Only necessary when OpenSearch Dashboards is behind a reverse proxy, in which case it should be different than `server.host` and `server.port` in `opensearch_dashboards.yml`.
`opensearch_security.openid.trust_dynamic_headers` | Compute `base_redirect_url` from the reverse proxy HTTP headers (`X-Forwarded-Host` / `X-Forwarded-Proto`). Optional. Default is `false`.

### Configuration example

View File

@@ -223,25 +223,25 @@ Check [Upgrade paths]({{site.url}}{{site.baseurl}}/upgrade-to/upgrade-to/#upgrad
- `ES_HOME` - Path to the existing Elasticsearch installation home.
```bash
export ES_HOME=/home/workspace/upgrade-demo/node1/elasticsearch-7.10.2
```
- `ES_PATH_CONF` - Path to the existing Elasticsearch config directory.
```bash
export ES_PATH_CONF=/home/workspace/upgrade-demo/node1/os-config
```
- `OPENSEARCH_HOME` - Path to the OpenSearch installation home.
```bash
export OPENSEARCH_HOME=/home/workspace/upgrade-demo/node1/opensearch-1.0.0
```
- `OPENSEARCH_PATH_CONF` - Path to the OpenSearch config directory.
```bash
export OPENSEARCH_PATH_CONF=/home/workspace/upgrade-demo/node1/opensearch-config
```
1. The `opensearch-upgrade` tool is in the `bin` directory of the distribution. Run the following command from the distribution home:

breaking-changes.md Normal file
View File

@@ -0,0 +1,30 @@
---
layout: default
title: Breaking Changes
nav_order: 3
permalink: /breaking-changes/
---
## 2.0.0
### Remove mapping types parameter
The `type` parameter has been removed from all OpenSearch API endpoints. Instead, use separate indexes to categorize documents of different types. For more details, see issue [#1940](https://github.com/opensearch-project/opensearch/issues/1940).
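For example, a request that previously included a type in its path now targets the typeless `_doc` endpoint; the index name and body below are illustrative:
```json
PUT my-index/_doc/1
{
  "title": "Example document"
}
```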
### Deprecate outdated nomenclature
To make OpenSearch naming conventions more inclusive, we've replaced the following terms in our code:
- "Whitelist" is now "Allow list"
- "Blacklist" is now "Deny list"
- "Master" is now "Cluster Manager"
If you are still using the outdated terms in the context of the security APIs or for node management, your calls and automation will continue to work until the terms are removed later in 2022.
### Deprecate compatibility override
The override main response setting `compatibility.override_main_response_version` is deprecated from OpenSearch version 1.x and removed from OpenSearch 2.0.0. This setting is no longer supported for compatibility with legacy clients.
### Add OpenSearch Notifications plugins
In OpenSearch 2.0, the Alerting plugin is now integrated with new plugins for Notifications. If you want to continue to use the notification action in the Alerting plugin, install the new backend plugins `notifications-core` and `notifications`. If you want to manage notifications in OpenSearch Dashboards, use the new `notificationsDashboards` plugin. For more information, see [Questions about destinations]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/monitors#questions-about-destinations) on the Monitors page.

View File

@@ -9,7 +9,9 @@ permalink: /version-history/
OpenSearch version | Release highlights | Release date
:--- | :--- | :---
[2.0.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.0.0.md) | Includes document-level monitors for alerting, OpenSearch Notifications plugins, and Geo Map Tiles in OpenSearch Dashboards. Also adds support for Lucene 9 and bug fixes for all OpenSearch plugins. For a full list of release highlights, see the Release Notes. | 26 May 2022
[2.0.0-rc1](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.0.0-rc1.md) | The Release Candidate for 2.0.0. This version allows you to preview the upcoming 2.0.0 release before the GA release. The preview release adds document level alerting, support for Lucene 9, and the ability to use term lookup queries in document level security. | 3 May 2022
[1.3.2](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.2.md) | Bug fixes for Anomaly Detection and the Security Dashboards Plugin, adds the option to install OpenSearch using RPM, as well as enhancements to the ML Commons execute task, and the removal of the job-scheduler zip in Anomaly Detection. | 5 May 2022
[1.3.1](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.1.md) | Bug fixes when using document-level security, and adjusted ML Commons to use the latest RCF jar and protostuff to RCF model serialization. | 30 March 2022
[1.3.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.0.md) | Adds Model Type Validation to Validate Detector API, continuous transforms, custom actions, applied policy parameter to Explain API, default action retries, and new rollover and transition conditions to Index Management, new ML Commons plugin, parse command to SQL, Application Analytics, Live Tail, Correlation, and Events Flyout to Observability, and auto backport and support for OPENSEARCH_JAVA_HOME to Performance Analyzer. Bug fixes. | 17 March 2022
[1.2.4](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.2.4.md) | Updates Performance Analyzer, SQL, and Security plugins to Log4j 2.17.1, Alerting and Job Scheduler to cron-utils 9.1.6, and gson in Anomaly Detection and SQL. | 18 January 2022