ML Commons (#5017)

* Adding ML Node to cluster settings page

Signed-off-by: David Tippett <17506770+dtaivpp@users.noreply.github.com>

* Removed Permissions and Cluster Settings from index; added roles to model access control

Signed-off-by: David Tippett <17506770+dtaivpp@users.noreply.github.com>

* Referenced code sample was for a local connector, not an external one

Signed-off-by: David Tippett <17506770+dtaivpp@users.noreply.github.com>

* Updated ML index page to reference the order in which to get started with ML Commons.

Signed-off-by: David Tippett <17506770+dtaivpp@users.noreply.github.com>

* Fixing style errors.

Signed-off-by: David Tippett <17506770+dtaivpp@users.noreply.github.com>

* Update _ml-commons-plugin/cluster-settings.md

Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>

* Update _ml-commons-plugin/index.md

Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>

* Update _ml-commons-plugin/index.md

Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>

* Update _ml-commons-plugin/index.md

Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>

* Update _ml-commons-plugin/index.md

Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>

* Update _ml-commons-plugin/extensibility/connectors.md

Co-authored-by: Nathan Bower <nbower@amazon.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>

* Update _ml-commons-plugin/index.md

Co-authored-by: Nathan Bower <nbower@amazon.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>

* Update _ml-commons-plugin/index.md

Co-authored-by: Nathan Bower <nbower@amazon.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>

---------

Signed-off-by: David Tippett <17506770+dtaivpp@users.noreply.github.com>
Signed-off-by: David Tippett <Dtaivpp@gmail.com>
Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Co-authored-by: Nathan Bower <nbower@amazon.com>
David Tippett 2023-09-21 14:44:19 -04:00 committed by GitHub
parent 796076bb04
commit 2eb81a3e19
4 changed files with 23 additions and 41 deletions

_ml-commons-plugin/cluster-settings.md

@@ -9,6 +9,13 @@ nav_order: 160
To enhance and customize your OpenSearch cluster for machine learning (ML), you can add and modify several configuration settings for the ML Commons plugin in your `opensearch.yml` file.
## ML node
By default, ML tasks and models only run on ML nodes. When configured without the `data` node role, ML nodes do not store any shards and instead calculate resource requirements at runtime. To use an ML node, configure the node in your `opensearch.yml` file: give it a custom name and define the node role as `ml`:
```yml
node.name: ml-node
node.roles: [ ml ]
```
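After restarting with this configuration, one way to confirm which roles each node picked up is the Nodes Info API with a response filter (a minimal sketch; any equivalent node listing works):
```json
GET _nodes?filter_path=nodes.*.name,nodes.*.roles
```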
## Run tasks and models on ML nodes only

_ml-commons-plugin/extensibility/connectors.md

@@ -16,7 +16,6 @@ You can provision connectors in two ways:
2. A [local connector](#local-connector), saved in the model index, which can only be used with one remote model. Unlike a standalone connector, users only need access to the model itself to use a local connector because the connection is established inside the model.
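As a sketch of the standalone option, a standalone connector can be created through the Connectors API; the body below reuses the OpenAI completion fields shown in the sample further down this page, and the endpoint, credential, and parameter values are illustrative:
```json
POST /_plugins/_ml/connectors/_create
{
  "name": "OpenAI Connector",
  "description": "The connector to public OpenAI model service for GPT 3.5",
  "version": 1,
  "protocol": "http",
  "parameters": {
    "endpoint": "api.openai.com",
    "max_tokens": 7,
    "temperature": 0,
    "model": "text-davinci-003"
  },
  "credential": {
    "openAI_key": "..."
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://${parameters.endpoint}/v1/completions",
      "headers": {
        "Authorization": "Bearer ${credential.openAI_key}"
      },
      "request_body": "{ \"model\": \"${parameters.model}\", \"prompt\": \"${parameters.prompt}\", \"max_tokens\": ${parameters.max_tokens}, \"temperature\": ${parameters.temperature} }"
    }
  ]
}
```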
## Supported connectors
As of OpenSearch 2.9, connectors have been tested for the following ML services, though it is possible to create connectors for other platforms not listed here:
@@ -76,6 +75,8 @@ If successful, the connector API responds with the `connector_id` for the connec
}
```
Using the returned `connector_id`, you can register a model that uses that connector:
```json
POST /_plugins/_ml/models/_register
{
@@ -83,32 +84,7 @@ POST /_plugins/_ml/models/_register
"function_name": "remote",
"model_group_id": "lEFGL4kB4ubqQRzegPo2",
"description": "test model",
"connector": {
"name": "OpenAI Connector",
"description": "The connector to public OpenAI model service for GPT 3.5",
"version": 1,
"protocol": "http",
"parameters": {
"endpoint": "api.openai.com",
"max_tokens": 7,
"temperature": 0,
"model": "text-davinci-003"
},
"credential": {
"openAI_key": "..."
},
"actions": [
{
"action_type": "predict",
"method": "POST",
"url": "https://${parameters.endpoint}/v1/completions",
"headers": {
"Authorization": "Bearer ${credential.openAI_key}"
},
"request_body": "{ \"model\": \"${parameters.model}\", \"prompt\": \"${parameters.prompt}\", \"max_tokens\": ${parameters.max_tokens}, \"temperature\": ${parameters.temperature} }"
}
]
}
"connector_id": "a1eMb4kBJ1eYAeTMAljY"
}
```
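Registering a remote model returns a task ID from which the model ID can be retrieved; a typical follow-up, sketched here with a placeholder model ID, is deploying the model and then invoking it through the Predict API, where the `prompt` parameter fills the `${parameters.prompt}` placeholder defined by the connector:
```json
POST /_plugins/_ml/models/your_model_id/_deploy

POST /_plugins/_ml/models/your_model_id/_predict
{
  "parameters": {
    "prompt": "Say this is a test"
  }
}
```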

_ml-commons-plugin/index.md

@@ -17,18 +17,10 @@ Models [trained]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api#training-the
Should you not want to use a model, you can use the [Train and Predict]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api#train-and-predict) API to test your model without having to evaluate the model's performance.
# Permissions
## Using ML Commons
The ML Commons plugin has two reserved roles:
- `ml_full_access`: Grants full access to all ML features, including starting new ML tasks and reading or deleting models.
- `ml_readonly_access`: Grants read-only access to ML tasks, trained models, and statistics relevant to the model's cluster. Does not grant permissions to start or delete ML tasks or models.
## ML node
To prevent your cluster from failing when running ML tasks, you can configure a node with the `ml` node role. When configured without the `data` node role, ML nodes will not store any shards and will instead calculate resource requirements at runtime. To use an ML node, create a node in your `opensearch.yml` file. Give your node a custom name and define the node role as `ml`:
```yml
node.name: ml-node
node.roles: [ ml ]
```
1. Ensure that you've appropriately set the cluster settings described in [Cluster Settings]({{site.url}}{{site.baseurl}}/ml-commons-plugin/cluster-settings/).
2. Set up model access as described in [Model Access Control]({{site.url}}{{site.baseurl}}/ml-commons-plugin/model-access-control/).
3. Start using models:
- [ML Framework]({{site.url}}{{site.baseurl}}/ml-commons-plugin/ml-framework/) allows you to run models within OpenSearch.
- [ML Extensibility]({{site.url}}{{site.baseurl}}/ml-commons-plugin/extensibility/index/) allows you to access remote models.
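As a concrete illustration of steps 1 and 2, the sketch below uses the Cluster Settings API to turn on model access control before model groups are configured; the setting name comes from the ML Commons cluster settings, and the use of `transient` is illustrative:
```json
PUT _cluster/settings
{
  "transient": {
    "plugins.ml_commons.model_access_control_enabled": true
  }
}
```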

_ml-commons-plugin/model-access-control.md

@@ -11,6 +11,13 @@ You can use the Security plugin with ML Commons to manage access to specific mod
To accomplish this, users are assigned one or more [_backend roles_]({{site.url}}{{site.baseurl}}/security/access-control/index/). Rather than assign individual roles to individual users during user configuration, backend roles provide a way to map a set of users to a role by assigning the backend role to users when they log in. For example, users may be assigned an `IT` backend role that includes the `ml_full_access` role and have full access to all ML Commons features. Alternatively, other users may be assigned an `HR` backend role that includes the `ml_readonly_access` role and be limited to read-only access to machine learning (ML) features. Given this flexibility, backend roles can provide finer-grained access to models and make it easier to assign multiple users to a role rather than mapping a user and role individually.
## ML Commons roles
The ML Commons plugin has two reserved roles:
- `ml_full_access`: Grants full access to all ML features, including starting new ML tasks and reading or deleting models.
- `ml_readonly_access`: Grants read-only access to ML tasks, trained models, and statistics relevant to the model's cluster. Does not grant permissions to start or delete ML tasks or models.
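For example, to grant everyone with the `IT` backend role full ML access, a roles-mapping entry along these lines could be created with the Security API (a sketch; the backend role name is only an example):
```json
PUT _plugins/_security/api/rolesmapping/ml_full_access
{
  "backend_roles": ["IT"]
}
```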
## Model groups
For access control, models are organized into _model groups_---collections of versions of a particular model. Like users, model groups can be assigned one or more backend roles. All versions of the same model share the same model name and have the same backend role or roles.
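To make model groups concrete, the following sketch registers a restricted model group that only users with the `IT` backend role can use; the group name and description are placeholders:
```json
POST /_plugins/_ml/model_groups/_register
{
  "name": "openai_models",
  "description": "Model group for remote OpenAI models",
  "access_mode": "restricted",
  "backend_roles": ["IT"]
}
```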