Add additional ML cluster settings (#2487)
* Add additional ML cluster settings
Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
* Update cluster-settings.md
* Fix other inaccuracies
Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
* Update cluster-settings.md
* Update cluster-settings.md
* Update cluster-settings.md
Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
parent 93433871c4
commit 985d31b8f7

@@ -42,6 +42,35 @@ plugins.ml_commons.task_dispatch_policy: round_robin

- Default value: `round_robin`
- Value range: `round_robin` or `least_load`

## Set number of ML tasks per node

Sets the number of ML tasks that can run on each ML node. When set to `0`, no ML tasks run on any node.

### Setting

```
plugins.ml_commons.max_ml_task_per_node: 10
```

### Values

- Default value: `10`
- Value range: [0, 10,000]
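
If you want to adjust this limit on a running cluster, the following is a minimal sketch that applies the setting through the `_cluster/settings` API. It assumes an unsecured cluster reachable at `http://localhost:9200` and that the setting can be updated dynamically; the host, value, and missing authentication are placeholders, not recommendations.

```python
# Sketch: update plugins.ml_commons.max_ml_task_per_node on a running cluster.
# Placeholder host and no authentication; adjust for your environment.
import requests

response = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"plugins.ml_commons.max_ml_task_per_node": 10}},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # expect "acknowledged": true plus the applied setting
```

Using `persistent` keeps the value across full cluster restarts, while `transient` applies it only until the next restart.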

## Set number of ML models per node

Sets the number of ML models that can be loaded onto each ML node. When set to `0`, no ML models can be loaded on any node.

### Setting

```
plugins.ml_commons.max_model_on_node: 10
```

### Values

- Default value: `10`
- Value range: [0, 10,000]
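
To check which of these limits are actually in effect, you can read the settings back with `include_defaults`. The sketch below assumes the same unsecured local cluster; the host and the filtering loop are illustrative, not part of the documented setting.

```python
# Sketch: list the effective plugins.ml_commons.* settings, including defaults.
import requests

response = requests.get(
    "http://localhost:9200/_cluster/settings",
    params={"include_defaults": "true", "flat_settings": "true"},
    timeout=10,
)
response.raise_for_status()
settings = response.json()
for section in ("persistent", "transient", "defaults"):
    for key, value in settings.get(section, {}).items():
        if key.startswith("plugins.ml_commons."):
            print(f"{section:<10} {key} = {value}")
```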

## Set sync job intervals

@@ -62,7 +91,7 @@ plugins.ml_commons.sync_up_job_interval_in_seconds: 10

## Predict monitoring requests

Controls how many predict requests are monitored on one node. If set to `0`, OpenSearch clears all monitored predict requests from the cache and does not monitor new predict requests.

### Setting

@@ -73,7 +102,7 @@ plugins.ml_commons.monitoring_request_count: 100

### Value range

- Default value: `100`
- Value range: [0, 10,000,000]
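
As noted above, setting the count to `0` turns monitoring off. A minimal sketch of doing this temporarily, again assuming an unsecured local cluster, uses a `transient` setting so the change does not survive a full cluster restart.

```python
# Sketch: temporarily disable predict-request monitoring.
# Transient settings are cleared on a full cluster restart.
import requests

response = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"transient": {"plugins.ml_commons.monitoring_request_count": 0}},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```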

## Upload model tasks per node

@@ -136,7 +165,7 @@ plugins.ml_commons.ml_task_timeout_in_seconds: 600

### Values

- Default value: 600
- Value range: [1, 86,400]

## Set native memory threshold

@@ -153,4 +182,4 @@ plugins.ml_commons.native_memory_threshold: 90

### Values

- Default value: 90
- Value range: [0, 100]

@@ -25,7 +25,7 @@ As of OpenSearch 2.4, the model-serving framework only supports text embedding models

### Model format

To use a model in OpenSearch, you'll need to export the model into a portable format. As of Version 2.5, OpenSearch only supports the [TorchScript](https://pytorch.org/docs/stable/jit.html) and [ONNX](https://onnx.ai/) formats.

Furthermore, files must be saved as zip files before upload. To ensure that ML Commons can upload your model, compress your TorchScript file before uploading it. You can download an example file [here](https://github.com/opensearch-project/ml-commons/blob/2.x/ml-algorithms/src/test/resources/org/opensearch/ml/engine/algorithms/text_embedding/all-MiniLM-L6-v2_torchscript_sentence-transformer.zip).
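
Because ML Commons expects a zip file, the following is a minimal packaging sketch. The file names are placeholders for your exported TorchScript model and any companion files it needs; substitute your own paths.

```python
# Sketch: compress an exported TorchScript model into a zip file before upload.
# "all-MiniLM-L6-v2.pt" is a placeholder for your exported model file.
import zipfile

with zipfile.ZipFile("all-MiniLM-L6-v2_torchscript.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    archive.write("all-MiniLM-L6-v2.pt")
    # archive.write("tokenizer.json")  # add companion files only if your model requires them
```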