Add unsupported warning for local models (#6489)

* Add unsupported warning for local models

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>

* Reword to be more generic

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>

---------

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
kolchfa-aws 2024-02-23 14:45:25 -05:00 committed by GitHub
parent e646c276c1
commit 76c2ed03fc
4 changed files with 11 additions and 1 deletion


@@ -18,6 +18,11 @@ As of OpenSearch 2.6, OpenSearch supports local text embedding models.
As of OpenSearch 2.11, OpenSearch supports local sparse encoding models.
As of OpenSearch 2.12, OpenSearch supports local cross-encoder models.
+Running local models on the CentOS 7 operating system is not supported. Moreover, not all local models can run on all hardware and operating systems.
+{: .important}
## Preparing a model
For both text embedding and sparse encoding models, you must provide a tokenizer JSON file within the model zip file.
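To make the packaging requirement above concrete, here is a minimal sketch (not part of this commit) of registering a local custom model through the ML Commons `_register` API. The endpoint `https://localhost:9200`, the `admin`/`admin` credentials, and the model name, zip URL, hash, and `model_config` values are all placeholder assumptions; the zip file referenced in `url` is expected to bundle the traced model together with its `tokenizer.json`.

```python
# Hedged sketch: register a local custom model whose zip bundles the model
# file and the required tokenizer.json. All literal values are placeholders.
import requests

OPENSEARCH = "https://localhost:9200"          # assumed local cluster endpoint
AUTH = ("admin", "admin")                      # assumed demo credentials

register_body = {
    "name": "my-text-embedding-model",         # hypothetical model name
    "version": "1.0.0",
    "model_format": "TORCH_SCRIPT",            # or "ONNX"
    "model_content_hash_value": "<sha256-of-zip>",
    "model_config": {
        "model_type": "bert",
        "embedding_dimension": 384,
        "framework_type": "sentence_transformers"
    },
    # The zip referenced here must contain the model file and tokenizer.json.
    "url": "https://example.com/my-text-embedding-model.zip"
}

response = requests.post(
    f"{OPENSEARCH}/_plugins/_ml/models/_register",
    json=register_body,
    auth=AUTH,
    verify=False,                              # self-signed demo certs only
)
print(response.json())                         # returns a task_id to poll
```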


@@ -15,7 +15,7 @@ This is an experimental feature and is not recommended for use in a production e
The OpenSearch Assistant Toolkit helps you create AI-powered assistants for OpenSearch Dashboards. The toolkit includes the following elements:
-- [**Agents and tools**]({{site.url}}{{site.baseurl}}/ml-commons-plugin/agents-tools/index/): _Agents_ interface with a large language model (LLM) and execute high-level tasks, such as summarization or generating Piped Processing Language (PPL) from natural language. The agent's high-level tasks consist of low-level tasks called _tools_, which can be reused by multiple agents.
+- [**Agents and tools**]({{site.url}}{{site.baseurl}}/ml-commons-plugin/agents-tools/index/): _Agents_ interface with a large language model (LLM) and execute high-level tasks, such as summarization or generating Piped Processing Language (PPL) queries from natural language. The agent's high-level tasks consist of low-level tasks called _tools_, which can be reused by multiple agents.
- [**Configuration automation**]({{site.url}}{{site.baseurl}}/automating-configurations/index/): Uses templates to set up infrastructure for artificial intelligence and machine learning (AI/ML) applications. For example, you can automate configuring agents to be used for chat or generating PPL queries from natural language.
- [**OpenSearch Assistant for OpenSearch Dashboards**]({{site.url}}{{site.baseurl}}/dashboards/dashboards-assistant/index/): This is the OpenSearch Dashboards UI for the AI-powered assistant. The assistant's workflow is configured with various agents and tools.
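As a follow-up to the agents-and-tools item in the list above, the following is a minimal sketch, under the same assumed host and credentials as the earlier example, of registering a simple flow agent that wraps one built-in tool (`CatIndexTool`); the agent name and description are illustrative, not values from this commit.

```python
# Hedged sketch: register a flow agent that runs a single tool in sequence.
# Host, credentials, and the agent name/description are assumptions.
import requests

OPENSEARCH = "https://localhost:9200"   # assumed local cluster endpoint
AUTH = ("admin", "admin")               # assumed demo credentials

agent_body = {
    "name": "demo_cat_index_agent",      # hypothetical agent name
    "type": "flow",                      # flow agents run their tools in order
    "description": "Lists cluster indexes using CatIndexTool",
    "tools": [
        {"type": "CatIndexTool"}         # built-in tool; reusable by other agents
    ],
}

response = requests.post(
    f"{OPENSEARCH}/_plugins/_ml/agents/_register",
    json=agent_body,
    auth=AUTH,
    verify=False,                        # self-signed demo certs only
)
print(response.json())                   # returns an agent_id for later execution
```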


@@ -256,6 +256,8 @@ To learn how to set up a vector index and use sparse encoding models for search,
OpenSearch supports the following models, categorized by type. Text embedding models are sourced from [Hugging Face](https://huggingface.co/). Sparse encoding models are trained by OpenSearch. Although models with the same type will have similar use cases, each model has a different model size and will perform differently depending on your cluster setup. For a performance comparison of some pretrained models, see the [SBERT documentation](https://www.sbert.net/docs/pretrained_models.html#model-overview).
+Running local models on the CentOS 7 operating system is not supported. Moreover, not all local models can run on all hardware and operating systems.
+{: .important}
### Sentence transformers
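For example, an OpenSearch-provided pretrained sentence transformer can be registered by name, version, and format alone, with no zip URL or hash. This is a minimal sketch under the same assumed host and credentials as above; `all-MiniLM-L6-v2` at version `1.0.1` is used purely for illustration, and the pretrained-model table in the documentation is the authoritative source for names and versions.

```python
# Hedged sketch: register a pretrained sentence transformer by name.
# Host and credentials are assumptions; confirm the model name/version
# against the documentation's pretrained-model table.
import requests

OPENSEARCH = "https://localhost:9200"
AUTH = ("admin", "admin")

register_body = {
    "name": "huggingface/sentence-transformers/all-MiniLM-L6-v2",
    "version": "1.0.1",
    "model_format": "TORCH_SCRIPT",
}

response = requests.post(
    f"{OPENSEARCH}/_plugins/_ml/models/_register",
    json=register_body,
    auth=AUTH,
    verify=False,
)
print(response.json())  # returns a task_id; poll /_plugins/_ml/tasks/<task_id>
```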


@@ -19,6 +19,9 @@ To integrate machine learning (ML) models into your OpenSearch cluster, you can
- **Custom models** such as PyTorch deep learning models: To learn more, see [Custom models]({{site.url}}{{site.baseurl}}/ml-commons-plugin/custom-local-models/).
+Running local models on the CentOS 7 operating system is not supported. Moreover, not all local models can run on all hardware and operating systems.
+{: .important}
## GPU acceleration
For better performance, you can take advantage of GPU acceleration on your ML node. For more information, see [GPU acceleration]({{site.url}}{{site.baseurl}}/ml-commons-plugin/gpu-acceleration/).
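To round out the integration workflow described above, the sketch below deploys a registered model and runs a quick test embedding, which is also a practical way to confirm that a local model actually works on your hardware and operating system. It assumes the same local cluster and credentials as the earlier sketches; `MODEL_ID` and the sample text are placeholders, not values from this commit.

```python
# Hedged sketch: deploy a registered model, then request a test embedding.
# MODEL_ID, host, and credentials are placeholders.
import requests

OPENSEARCH = "https://localhost:9200"
AUTH = ("admin", "admin")
MODEL_ID = "<model_id-from-registration-task>"

# Deploy (load) the model onto eligible ML nodes.
deploy = requests.post(
    f"{OPENSEARCH}/_plugins/_ml/models/{MODEL_ID}/_deploy",
    auth=AUTH,
    verify=False,
)
print(deploy.json())

# Run a small inference to verify the model runs on this hardware/OS.
predict = requests.post(
    f"{OPENSEARCH}/_plugins/_ml/_predict/text_embedding/{MODEL_ID}",
    json={
        "text_docs": ["today is sunny"],
        "return_number": True,
        "target_response": ["sentence_embedding"],
    },
    auth=AUTH,
    verify=False,
)
print(predict.json())  # embedding vector for the sample text
```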