Add redirects from old links to ML documentation (#5707)
* Add redirects from old links to ML documentation
* Rename links

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
parent f999e0a8a8
commit a8bd47e07b
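The fix pattern throughout this commit is the same: in each page's Jekyll front matter, the `redirect_from` entry must list the URL the page *used to* live at, not its current one, so that old bookmarks and inbound links resolve to the moved page. A minimal sketch of the corrected front matter, assuming the `jekyll-redirect-from` plugin (the title and nav values below are taken from the first front matter hunk for illustration):

```yaml
---
layout: default
title: Connector blueprints
nav_order: 65
# redirect_from lists the page's former URL (the "extensibility" path),
# not its new "remote-models" location, so stale links still redirect here.
redirect_from:
  - ml-commons-plugin/extensibility/blueprints/
---
```

The bug being fixed is that several pages listed their *new* path under `redirect_from`, which redirects nothing; this commit swaps those entries for the old paths and drops redirects to the removed `search-methods/` directory.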
@@ -134,6 +134,6 @@ The response confirms that in addition to the `image_description` and `image_bin
 ## Next steps
 
 - To learn how to use the `neural` query for a multimodal search, see [Neural query]({{site.url}}{{site.baseurl}}/query-dsl/specialized/neural/).
-- To learn more about multimodal neural search, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/multimodal-search/).
+- To learn more about multimodal neural search, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
 To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
 - For a comprehensive example, see [Neural search tutorial]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
@@ -266,9 +266,9 @@ The response contains the tokens and weights:
 
 ## Step 5: Use the model for search
 
-To learn how to set up a vector index and use text embedding models for search, see [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/semantic-search/).
+To learn how to set up a vector index and use text embedding models for search, see [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/).
 
-To learn how to set up a vector index and use sparse encoding models for search, see [Sparse search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/sparse-search/).
+To learn how to set up a vector index and use sparse encoding models for search, see [Sparse search]({{site.url}}{{site.baseurl}}/search-plugins/sparse-search/).
 
 ## Supported pretrained models
@@ -6,7 +6,7 @@ nav_order: 65
 parent: Connecting to remote models
 grand_parent: Integrating ML models
 redirect_from:
-- ml-commons-plugin/remote-models/blueprints/
+- ml-commons-plugin/extensibility/blueprints/
 ---
 
 # Connector blueprints
@@ -7,7 +7,7 @@ nav_order: 61
 parent: Connecting to remote models
 grand_parent: Integrating ML models
 redirect_from:
-- ml-commons-plugin/remote-models/connectors/
+- ml-commons-plugin/extensibility/connectors/
 ---
 
 # Creating connectors for third-party ML platforms
@@ -6,7 +6,7 @@ has_children: true
 has_toc: false
 nav_order: 60
 redirect_from:
-- ml-commons-plugin/remote-models/index/
+- ml-commons-plugin/extensibility/index/
 ---
 
 # Connecting to remote models
@@ -6,7 +6,7 @@ has_children: true
 nav_order: 50
 redirect_from:
 - /ml-commons-plugin/model-serving-framework/
-- /ml-commons-plugin/using-ml-models/
+- /ml-commons-plugin/ml-framework/
 ---
 
 # Using ML models within OpenSearch
@@ -5,7 +5,6 @@ has_children: false
 nav_order: 70
 redirect_from:
 - /ml-commons-plugin/conversational-search/
-- /search-plugins/search-methods/conversational-search/
 ---
 
 This is an experimental feature and is not recommended for use in a production environment. For updates on the progress of the feature or if you want to leave feedback, see the associated [GitHub issue](https://forum.opensearch.org/t/feedback-conversational-search-and-retrieval-augmented-generation-using-search-pipeline-experimental-release/16073).
@@ -3,8 +3,6 @@ layout: default
 title: Hybrid search
 has_children: false
 nav_order: 40
-redirect_from:
-- /search-plugins/search-methods/hybrid-search/
 ---
 
 # Hybrid search
@@ -3,8 +3,6 @@ layout: default
 title: Keyword search
 has_children: false
 nav_order: 10
-redirect_from:
-- /search-plugins/search-methods/keyword-search/
 ---
 
 # Keyword search
@@ -5,7 +5,6 @@ nav_order: 60
 has_children: false
 redirect_from:
 - /search-plugins/neural-multimodal-search/
-- /search-plugins/search-methods/multimodal-search/
 ---
 
 # Multimodal search
@@ -39,11 +39,11 @@ Semantic search uses dense retrieval based on text embedding models to search te
 
 ### Hybrid search
 
-Hybrid search combines keyword and neural search to improve search relevance. For detailed setup instructions, see [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/hybrid-search/).
+Hybrid search combines keyword and neural search to improve search relevance. For detailed setup instructions, see [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/).
 
 ### Multimodal search
 
-Multimodal search uses neural search with multimodal embedding models to search text and image data. For detailed setup instructions, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/multimodal-search/).
+Multimodal search uses neural search with multimodal embedding models to search text and image data. For detailed setup instructions, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
 
 ### Sparse search
@@ -51,4 +51,4 @@ Sparse search uses neural search with sparse retrieval based on sparse embedding
 
 ### Conversational search
 
-With conversational search, you can ask questions in natural language, receive a text response, and ask additional clarifying questions. For detailed setup instructions, see [Conversational search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/conversational-search/).
+With conversational search, you can ask questions in natural language, receive a text response, and ask additional clarifying questions. For detailed setup instructions, see [Conversational search]({{site.url}}{{site.baseurl}}/search-plugins/conversational-search/).
@@ -5,7 +5,6 @@ nav_order: 35
 has_children: false
 redirect_from:
 - /search-plugins/neural-text-search/
-- /search-plugins/search-methods/semantic-search/
 ---
 
 # Semantic search
@@ -6,14 +6,13 @@ nav_order: 50
 has_children: false
 redirect_from:
 - /search-plugins/neural-sparse-search/
-- /search-plugins/search-methods/sparse-search/
 ---
 
 # Sparse search
 Introduced 2.11
 {: .label .label-purple }
 
-[Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/semantic-search/) relies on dense retrieval that is based on text embedding models. However, dense methods use k-NN search, which consumes a large amount of memory and CPU resources. An alternative to semantic search, sparse search is implemented using an inverted index and is thus as efficient as BM25. Sparse search is facilitated by sparse embedding models. When you perform a sparse search, it creates a sparse vector (a list of `token: weight` key-value pairs representing an entry and its weight) and ingests data into a rank features index.
+[Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/) relies on dense retrieval that is based on text embedding models. However, dense methods use k-NN search, which consumes a large amount of memory and CPU resources. An alternative to semantic search, sparse search is implemented using an inverted index and is thus as efficient as BM25. Sparse search is facilitated by sparse embedding models. When you perform a sparse search, it creates a sparse vector (a list of `token: weight` key-value pairs representing an entry and its weight) and ingests data into a rank features index.
 
 When selecting a model, choose one of the following options: