Add redirects from old links to ML documentation (#5707)

* Add redirects from old links to ML documentation

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>

* Rename links

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>

---------

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
This commit is contained in:
kolchfa-aws 2023-11-29 16:26:35 -05:00 committed by GitHub
parent f999e0a8a8
commit a8bd47e07b
13 changed files with 11 additions and 19 deletions
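The changes below all follow one pattern: a page that moved keeps a `redirect_from` front matter entry for its pre-rename URL, so old links resolve to the new location, and stale entries pointing at a page's own current path are dropped. A minimal sketch of the resulting front matter, assuming the site uses a redirect plugin such as jekyll-redirect-from (values taken from the blueprints page in this diff for illustration):

```yaml
---
layout: default
title: Connector blueprints
nav_order: 65
# Old URL this page should still answer for; the redirect plugin
# generates a stub at this path that forwards to the new location.
redirect_from:
  - ml-commons-plugin/extensibility/blueprints/
---
```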

View File

@@ -134,6 +134,6 @@ The response confirms that in addition to the `image_description` and `image_bin
## Next steps
- To learn how to use the `neural` query for a multimodal search, see [Neural query]({{site.url}}{{site.baseurl}}/query-dsl/specialized/neural/).
-- To learn more about multimodal neural search, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/multimodal-search/).
+- To learn more about multimodal neural search, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
- To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
- For a comprehensive example, see [Neural search tutorial]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).

View File

@@ -266,9 +266,9 @@ The response contains the tokens and weights:
## Step 5: Use the model for search
-To learn how to set up a vector index and use text embedding models for search, see [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/semantic-search/).
+To learn how to set up a vector index and use text embedding models for search, see [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/).
-To learn how to set up a vector index and use sparse encoding models for search, see [Sparse search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/sparse-search/).
+To learn how to set up a vector index and use sparse encoding models for search, see [Sparse search]({{site.url}}{{site.baseurl}}/search-plugins/sparse-search/).
## Supported pretrained models

View File

@@ -6,7 +6,7 @@ nav_order: 65
parent: Connecting to remote models
grand_parent: Integrating ML models
redirect_from:
-- ml-commons-plugin/remote-models/blueprints/
+- ml-commons-plugin/extensibility/blueprints/
---
# Connector blueprints

View File

@@ -7,7 +7,7 @@ nav_order: 61
parent: Connecting to remote models
grand_parent: Integrating ML models
redirect_from:
-- ml-commons-plugin/remote-models/connectors/
+- ml-commons-plugin/extensibility/connectors/
---
# Creating connectors for third-party ML platforms

View File

@@ -6,7 +6,7 @@ has_children: true
has_toc: false
nav_order: 60
redirect_from:
-- ml-commons-plugin/remote-models/index/
+- ml-commons-plugin/extensibility/index/
---
# Connecting to remote models

View File

@@ -6,7 +6,7 @@ has_children: true
nav_order: 50
redirect_from:
- /ml-commons-plugin/model-serving-framework/
-- /ml-commons-plugin/using-ml-models/
+- /ml-commons-plugin/ml-framework/
---
# Using ML models within OpenSearch

View File

@@ -5,7 +5,6 @@ has_children: false
nav_order: 70
redirect_from:
- /ml-commons-plugin/conversational-search/
-- /search-plugins/search-methods/conversational-search/
---
This is an experimental feature and is not recommended for use in a production environment. For updates on the progress of the feature or if you want to leave feedback, see the associated [GitHub issue](https://forum.opensearch.org/t/feedback-conversational-search-and-retrieval-augmented-generation-using-search-pipeline-experimental-release/16073).

View File

@@ -3,8 +3,6 @@ layout: default
title: Hybrid search
has_children: false
nav_order: 40
-redirect_from:
-- /search-plugins/search-methods/hybrid-search/
---
# Hybrid search

View File

@@ -3,8 +3,6 @@ layout: default
title: Keyword search
has_children: false
nav_order: 10
-redirect_from:
-- /search-plugins/search-methods/keyword-search/
---
# Keyword search

View File

@@ -5,7 +5,6 @@ nav_order: 60
has_children: false
redirect_from:
- /search-plugins/neural-multimodal-search/
-- /search-plugins/search-methods/multimodal-search/
---
# Multimodal search

View File

@@ -39,11 +39,11 @@ Semantic search uses dense retrieval based on text embedding models to search te
### Hybrid search
-Hybrid search combines keyword and neural search to improve search relevance. For detailed setup instructions, see [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/hybrid-search/).
+Hybrid search combines keyword and neural search to improve search relevance. For detailed setup instructions, see [Hybrid search]({{site.url}}{{site.baseurl}}/search-plugins/hybrid-search/).
### Multimodal search
-Multimodal search uses neural search with multimodal embedding models to search text and image data. For detailed setup instructions, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/multimodal-search/).
+Multimodal search uses neural search with multimodal embedding models to search text and image data. For detailed setup instructions, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
### Sparse search
@@ -51,4 +51,4 @@ Sparse search uses neural search with sparse retrieval based on sparse embedding
### Conversational search
-With conversational search, you can ask questions in natural language, receive a text response, and ask additional clarifying questions. For detailed setup instructions, see [Conversational search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/conversational-search/).
+With conversational search, you can ask questions in natural language, receive a text response, and ask additional clarifying questions. For detailed setup instructions, see [Conversational search]({{site.url}}{{site.baseurl}}/search-plugins/conversational-search/).

View File

@@ -5,7 +5,6 @@ nav_order: 35
has_children: false
redirect_from:
- /search-plugins/neural-text-search/
-- /search-plugins/search-methods/semantic-search/
---
# Semantic search

View File

@@ -6,14 +6,13 @@ nav_order: 50
has_children: false
redirect_from:
- /search-plugins/neural-sparse-search/
-- /search-plugins/search-methods/sparse-search/
---
# Sparse search
Introduced 2.11
{: .label .label-purple }
-[Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/search-methods/semantic-search/) relies on dense retrieval that is based on text embedding models. However, dense methods use k-NN search, which consumes a large amount of memory and CPU resources. An alternative to semantic search, sparse search is implemented using an inverted index and is thus as efficient as BM25. Sparse search is facilitated by sparse embedding models. When you perform a sparse search, it creates a sparse vector (a list of `token: weight` key-value pairs representing an entry and its weight) and ingests data into a rank features index.
+[Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/) relies on dense retrieval that is based on text embedding models. However, dense methods use k-NN search, which consumes a large amount of memory and CPU resources. An alternative to semantic search, sparse search is implemented using an inverted index and is thus as efficient as BM25. Sparse search is facilitated by sparse embedding models. When you perform a sparse search, it creates a sparse vector (a list of `token: weight` key-value pairs representing an entry and its weight) and ingests data into a rank features index.
When selecting a model, choose one of the following options: