Add ML connector edits (#4636)

* Add ML connector edits

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Change integrators to blueprints

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Add PM feedback

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Change connector names

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Add note about which parameters are relevant to admins.

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Make separation between personas clearer

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Fix typo

Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>

* Add technical feedback

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Break out connectors and blueprints into two pages.

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Fix blueprint links

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Address additional technical feedback

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Apply suggestions from code review

Co-authored-by: Yaliang Wu <ylwu@amazon.com>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Heather Halter <HDHALTER@AMAZON.COM>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>

* Add Doc review

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Apply suggestions from code review

Co-authored-by: Nathan Bower <nbower@amazon.com>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Nathan Bower <nbower@amazon.com>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>

* Apply suggestions from code review

Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>

---------

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Co-authored-by: Yaliang Wu <ylwu@amazon.com>
Co-authored-by: Heather Halter <HDHALTER@AMAZON.COM>
Co-authored-by: Nathan Bower <nbower@amazon.com>
Naarcha-AWS authored 2023-08-10 14:55:37 -05:00; committed by GitHub
commit 62ab7416aa
parent b02dec8211
3 changed files with 342 additions and 164 deletions


@@ -0,0 +1,79 @@
---
layout: default
title: Building blueprints
has_children: false
nav_order: 65
parent: ML extensibility
---
# Building blueprints
All connectors consist of a JSON blueprint created by machine learning (ML) developers. The blueprint allows administrators and data scientists to make connections between OpenSearch and an AI service or model-serving technology.
The following example shows a blueprint that connects to Amazon SageMaker:
```json
POST /_plugins/_ml/connectors/_create
{
"name": "<YOUR CONNECTOR NAME>",
"description": "<YOUR CONNECTOR DESCRIPTION>",
"version": "<YOUR CONNECTOR VERSION>",
"protocol": "aws_sigv4",
"credential": {
"access_key": "<ADD YOUR AWS ACCESS KEY HERE>",
"secret_key": "<ADD YOUR AWS SECRET KEY HERE>",
"session_token": "<ADD YOUR AWS SECURITY TOKEN HERE>"
},
"parameters": {
"region": "<ADD YOUR AWS REGION HERE>",
"service_name": "sagemaker"
},
"actions": [
{
"action_type": "predict",
"method": "POST",
"headers": {
"content-type": "application/json"
},
"url": "<ADD YOUR Sagemaker MODEL ENDPOINT URL>",
"request_body": "<ADD YOUR REQUEST BODY. Example: ${parameters.inputs}>"
}
]
}
```
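If the Create Connector request built from this blueprint succeeds, ML Commons responds with a `connector_id` for the new connector. The following is a representative response; the ID value is illustrative:
```json
{
  "connector_id": "a1eMb4kBJ1eYAeTMAljY"
}
```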
## Example blueprints
You can find blueprints for each connector in the [ML Commons repository](https://github.com/opensearch-project/ml-commons/tree/2.x/docs/remote_inference_blueprints).
## Configuration options
The following configuration options are **required** in order to build a connector blueprint. These settings can be used for both external and local connectors.
| Field | Data type | Description |
| :--- | :--- | :--- |
| `name` | String | The name of the connector. |
| `description` | String | A description of the connector. |
| `version` | Integer | The version of the connector. |
| `protocol` | String | The protocol for the connection. For AWS services such as Amazon SageMaker and Amazon Bedrock, use `aws_sigv4`. For all other services, use `http`. |
| `parameters` | JSON object | The default connector parameters, including `endpoint` and `model`. Any parameters indicated in this field can be overridden by parameters specified in a predict request. |
| `credential` | `Map<string, string>` | Defines any credential variables required to connect to your chosen endpoint. ML Commons uses **AES/GCM/NoPadding** symmetric encryption to encrypt your credentials. When the connection to the cluster first starts, OpenSearch creates a random 32-byte encryption key that persists in OpenSearch's system index. Therefore, you do not need to manually set the encryption key. |
| `actions` | JSON array | Defines what actions can run within the connector. If you're an administrator making a connection, add the [blueprint]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/blueprints/) for your desired connection. |
| `backend_roles` | JSON array | A list of OpenSearch backend roles. For more information about setting up backend roles, see [Assigning backend roles to users]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#assigning-backend-roles-to-users). |
| `access_mode` | String | Sets the access mode for the model, either `public`, `restricted`, or `private`. Default is `private`. For more information about `access_mode`, see [Model groups]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control#model-groups). |
| `add_all_backend_roles` | Boolean | When set to `true`, adds all `backend_roles` to the access list, which only a user with admin permissions can adjust. When set to `false`, non-admins can add `backend_roles`. |
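To illustrate the `parameters` row above, the following is a minimal sketch of a Predict API call that overrides a blueprint default at request time. The model ID is a placeholder, and `temperature` is assumed to be one of the defaults defined in the connector's `parameters`:
```json
POST /_plugins/_ml/models/<YOUR MODEL ID>/_predict
{
  "parameters": {
    "temperature": 0.7
  }
}
```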
Each action in the `actions` array supports the following options.
| Field | Data type | Description |
| :--- | :--- | :--- |
| `action_type` | String | Required. Sets the ML Commons API operation to use upon connection. As of OpenSearch 2.9, only `predict` is supported. |
| `method` | String | Required. Defines the HTTP method for the API call. Supports `POST` and `GET`. |
| `url` | String | Required. Sets the connection endpoint at which the action takes place. This must match the regular expression for the connection used when [adding trusted endpoints]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/index#adding-trusted-endpoints). |
| `headers` | JSON object | Sets the headers used inside the request or response body. Default is `Content-Type: application/json`. If your third-party ML tool requires access control, define the required `credential` parameters in the `headers` parameter. |
| `request_body` | String | Required. Sets the parameters contained inside the request body of the action. The parameters must include `\"inputText\"`, which specifies how users of the connector should construct the request payload for the `action_type`. |
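The following sketch shows how `request_body` templating resolves at request time. Assuming a blueprint whose `request_body` is `"{ \"inputText\": \"${parameters.inputText}\" }"`, a Predict API call such as the following (the model ID is a placeholder) sends the literal payload `{ "inputText": "hello world" }` to the endpoint:
```json
POST /_plugins/_ml/models/<YOUR MODEL ID>/_predict
{
  "parameters": {
    "inputText": "hello world"
  }
}
```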
## Next step
To see how system administrators and data scientists use blueprints for connectors, see [Creating connectors for third-party ML platforms]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/connectors/).


@@ -1,135 +1,43 @@
---
layout: default
title: Creating connectors for third-party ML platforms
has_children: false
nav_order: 61
parent: ML extensibility
---
# Creating connectors for third-party ML platforms
Machine learning (ML) connectors provide the ability to integrate OpenSearch ML capabilities with third-party ML tools and platforms. Through connectors, OpenSearch can invoke these third-party endpoints to enrich query results and data pipelines.
You can provision connectors in two ways:
1. An [external connector](#external-connector), saved in a connector index, which can be reused and shared among multiple remote models but requires access to the model, the connector inside OpenSearch, and the third-party platform accessed by the connector, such as OpenAI or Amazon SageMaker.
2. A [local connector](#local-connector), saved in the model index, which can only be used with one remote model. Unlike an external connector, users only need access to the model itself to use a local connector because the connection is established inside the model.
## Supported connectors
As of OpenSearch 2.9, connectors have been tested for the following ML services, though it is possible to create connectors for other platforms not listed here:
- [Amazon SageMaker](https://aws.amazon.com/sagemaker/) allows you to host and manage the lifecycle of text-embedding models, powering semantic search queries in OpenSearch. When connected, Amazon SageMaker hosts your models and OpenSearch is used to query inferences. This benefits Amazon SageMaker users who value its functionality, such as model monitoring, serverless hosting, and workflow automation for continuous training and deployment.
- [OpenAI ChatGPT](https://openai.com/blog/chatgpt) enables you to invoke an OpenAI chat model from inside an OpenSearch cluster.
- [Cohere](https://cohere.com/) allows you to use data from OpenSearch to power Cohere's large language models.
Additional connectors will be added to this page as they are tested and verified.
All connectors consist of a JSON blueprint created by machine learning (ML) developers. The blueprint allows administrators and data scientists to make connections between OpenSearch and an AI service or model-serving technology.
You can find blueprints for each connector in the [ML Commons repository](https://github.com/opensearch-project/ml-commons/tree/2.x/docs/remote_inference_blueprints).
If you want to build your own blueprint, see [Building blueprints]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/blueprints/).
## External connector
Admins are only required to enter their `credential` settings, such as `"openAI_key"`, for the service they are connecting to. All other parameters are defined within the [blueprint]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/blueprints/).
{: .note}
The connector creation API, `/_plugins/_ml/connectors/_create`, creates connections that allow users to deploy and register external models through OpenSearch. Using the `endpoint` parameter, you can connect ML Commons to any supported ML tool using its specific API endpoint. For example, to connect to a ChatGPT model, you can connect using `api.openai.com`, as shown in the following example:
```json
POST /_plugins/_ml/connectors/_create
@@ -160,7 +68,7 @@ POST /_plugins/_ml/connectors/_create
```
{% include copy-curl.html %}
If successful, the connector API responds with the `connector_id` for the connection:
```json
{
@@ -168,6 +76,93 @@ If successful, the connector API responds with the `connector_id` for
}
```
## Local connector
Admins are only required to enter their `credential` settings, such as `"openAI_key"`, for the service they are connecting to. All other parameters are defined within the [blueprint]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/blueprints/).
{: .note}
To create a local connector, add the `connector` parameter to the Register model API, as shown in the following example:
```json
POST /_plugins/_ml/models/_register
{
"name": "openAI-GPT-3.5: internal connector",
"function_name": "remote",
"model_group_id": "lEFGL4kB4ubqQRzegPo2",
"description": "test model",
"connector": {
"name": "openAI-gpt-3.5-turbo",
"function_name": "remote",
"model_group_id": "lEFGL4kB4ubqQRzegPo2",
"description": "test model",
"connector": {
"name": "OpenAI Connector",
"description": "The connector to public OpenAI model service for GPT 3.5",
"version": 1,
"protocol": "http",
"parameters": {
"endpoint": "api.openai.com",
"max_tokens": 7,
"temperature": 0,
"model": "text-davinci-003"
},
"credential": {
"openAI_key": "..."
},
"actions": [
{
"action_type": "predict",
"method": "POST",
"url": "https://${parameters.endpoint}/v1/completions",
"headers": {
"Authorization": "Bearer ${credential.openAI_key}"
},
"request_body": "{ \"model\": \"${parameters.model}\", \"prompt\": \"${parameters.prompt}\", \"max_tokens\": ${parameters.max_tokens}, \"temperature\": ${parameters.temperature} }"
}
]
}
}
```
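If the registration succeeds, ML Commons returns a `task_id` that you can use to track the registration. The following is a representative response; the ID value is illustrative:
```json
{
  "task_id": "cVeMb4kBJ1eYAeTMFFgj",
  "status": "CREATED"
}
```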
## Registering and deploying a connected model
After a connection has been created, use the `connector_id` from the response to register and deploy a connected model.
To register a model, you have the following options:
@@ -175,7 +170,26 @@ To register a model, you have the following options:
- You can use `model_group_id` to register a model version to an existing model group.
- If you do not use `model_group_id`, ML Commons creates a model with a new model group.
If you want to create a new `model_group`, use the following example:
```json
POST /_plugins/_ml/model_groups/_register
{
"name": "remote_model_group",
"description": "This is an example description"
}
```
ML Commons returns the following response:
```json
{
"model_group_id": "wlcnb4kBJ1eYAeTMHlV6",
"status": "CREATED"
}
```
The following example registers a model named `openAI-gpt-3.5-turbo`:
```json
POST /_plugins/_ml/models/_register
@@ -251,7 +265,7 @@ GET /_plugins/_ml/tasks/vVePb4kBJ1eYAeTM7ljG
**Verify deploy completion response**
```json
{
"model_id": "cleMb4kBJ1eYAeTMFFg4",
"task_type": "DEPLOY_MODEL",
@@ -272,7 +286,6 @@ After a successful deployment, you can test the model using the Predict API set
POST /_plugins/_ml/models/cleMb4kBJ1eYAeTMFFg4/_predict
{
"parameters": {
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
@@ -324,46 +337,6 @@ The Predict API returns inference results for the connected model, as shown in t
}
```
## Examples
@@ -403,10 +376,10 @@ POST /_plugins/_ml/connectors/_create
}
```
After creating the connector, you can retrieve the `task_id` and `connector_id` to register and deploy the model and then use the Predict API, similarly to a standalone connector.
### Amazon SageMaker
The following example creates a standalone Amazon SageMaker connector. The same options can be used for a local connector under the `connector` parameter:
@@ -440,17 +413,46 @@ POST /_plugins/_ml/connectors/_create
}
```
The `credential` parameter contains the following options reserved for `aws_sigv4` authentication:
- `access_key`: Required. Provides the access key for the AWS instance.
- `secret_key`: Required. Provides the secret key for the AWS instance.
- `session_token`: Optional. Provides a temporary set of credentials for the AWS instance.
The `parameters` section requires the following options when using `aws_sigv4` authentication:
- `region`: The AWS Region in which the AWS instance is located.
- `service_name`: The name of the AWS service for the connector.
### Cohere
The following example request creates a standalone Cohere connection:
```json
POST /_plugins/_ml/connectors/_create
{
"name": "Cohere Connector: embedding",
"description": "The connector to cohere embedding model",
"version": 1,
"protocol": "http",
"credential": {
"cohere_key": "..."
},
"actions": [
{
"action_type": "predict",
"method": "POST",
"url": "https://api.cohere.ai/v1/embed",
"headers": {
"Authorization": "Bearer ${credential.cohere_key}"
},
"request_body": "{ \"texts\": ${parameters.prompt}, \"truncate\": \"END\" }"
}
]
}
```
{% include copy-curl.html %}
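Because the `request_body` template above substitutes `${parameters.prompt}` directly into the `texts` field, the `prompt` value sent at prediction time must be a JSON array of strings. The following is a minimal sketch, assuming a model has been registered and deployed from this connector; the model ID is a placeholder:
```json
POST /_plugins/_ml/models/<YOUR MODEL ID>/_predict
{
  "parameters": {
    "prompt": ["What day is it today?", "OpenSearch is a search engine."]
  }
}
```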
## Next steps


@@ -0,0 +1,97 @@
---
layout: default
title: ML extensibility
has_children: true
nav_order: 60
---
# ML extensibility
Machine learning (ML) extensibility enables ML developers to create integrations with other ML services, such as Amazon SageMaker or OpenAI. These integrations give system administrators and data scientists the ability to run ML workloads outside of their OpenSearch cluster.
To get started with ML extensibility, choose from the following options:
- If you're an ML developer wanting to integrate with your specific ML services, see [Building blueprints]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/blueprints/).
- If you're a system administrator or data scientist wanting to create a connection to an ML service, see [Creating connectors for third-party ML platforms]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/connectors/).
## Prerequisites
If you're an admin deploying an ML connector, make sure that the target model of the connector has already been deployed on your chosen platform. Furthermore, make sure that you have permissions to send data to and receive data from the third-party API for your connector.
When access control is enabled on your third-party platform, you can enter your security settings using the `authorization` or `credential` settings inside the connector API.
### Adding trusted endpoints
To configure connectors in OpenSearch, add the trusted endpoints to your cluster settings using the `plugins.ml_commons.trusted_connector_endpoints_regex` setting, which supports Java regex expressions, as shown in the following example:
```json
PUT /_cluster/settings
{
"persistent": {
"plugins.ml_commons.trusted_connector_endpoints_regex": [
"^https://runtime\\.sagemaker\\..*[a-z0-9-]\\.amazonaws\\.com/.*$",
"^https://api\\.openai\\.com/.*$",
"^https://api\\.cohere\\.ai/.*$"
]
}
}
```
{% include copy-curl.html %}
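To confirm that the setting was applied, you can read back the cluster settings. The `flat_settings` flag only flattens the response for readability:
```json
GET /_cluster/settings?flat_settings=true
```
{% include copy-curl.html %}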
### Setting up connector access control
If you plan on using a remote connector, make sure to use an OpenSearch cluster with the Security plugin enabled. Using the Security plugin gives you access to connector access control, which is required when using a remote connector.
{: .warning}
If you require granular access control for your connectors, use the following cluster setting:
```json
PUT /_cluster/settings
{
"persistent": {
"plugins.ml_commons.connector_access_control_enabled": true
}
}
```
{% include copy-curl.html %}
Connector access control requires the [Security plugin]({{site.url}}{{site.baseurl}}/security/index/). When access control is enabled, the `backend_roles`, `add_all_backend_roles`, or `access_mode` options are required in order to use the connector API. If the settings update is successful, OpenSearch returns the following response:
```json
{
"acknowledged": true,
"persistent": {
"plugins": {
"ml_commons": {
"connector_access_control_enabled": "true"
}
}
},
"transient": {}
}
```
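With access control enabled, a connector request can carry the access control fields described in [Building blueprints]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/blueprints/). The following is a minimal sketch based on the Cohere connector example shown earlier; the backend role name is illustrative:
```json
POST /_plugins/_ml/connectors/_create
{
  "name": "Cohere Connector: embedding",
  "description": "The connector to cohere embedding model",
  "version": 1,
  "protocol": "http",
  "backend_roles": ["ml_team"],
  "access_mode": "restricted",
  "credential": {
    "cohere_key": "..."
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://api.cohere.ai/v1/embed",
      "headers": {
        "Authorization": "Bearer ${credential.cohere_key}"
      },
      "request_body": "{ \"texts\": ${parameters.prompt}, \"truncate\": \"END\" }"
    }
  ]
}
```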
### Node settings
Remote models based on external connectors consume fewer resources. Therefore, you can deploy any model from an external connector using data nodes. To make sure that your external connection uses data nodes, set `plugins.ml_commons.only_run_on_ml_node` to `false`, as shown in the following example:
```json
PUT /_cluster/settings
{
"persistent": {
"plugins.ml_commons.only_run_on_ml_node": false
}
}
```
{% include copy-curl.html %}
## Next steps
- For more information about managing ML models in OpenSearch, see [ML Framework]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-serving-framework/).
- For more information about interacting with ML models in OpenSearch, see [Managing ML models in OpenSearch Dashboards]({{site.url}}{{site.baseurl}}/ml-commons-plugin/ml-dashboard/).