---
layout: default
title: Neural search
nav_order: 200
has_children: true
has_toc: false
redirect_from:
---
# Neural search
Neural search transforms text into vectors and facilitates vector search both at ingestion time and at search time. During ingestion, neural search transforms document text into vector embeddings and indexes both the text and its vector embeddings in a vector index. When you use a neural query during search, neural search converts the query text into vector embeddings, uses vector search to compare the query and document embeddings, and returns the closest results.
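As a minimal sketch, a vector index for neural search pairs a text field with a `knn_vector` field that stores its embedding. The index name, field names, and embedding dimension below are illustrative; the dimension must match the output size of your embedding model:

```json
PUT /my-nlp-index
{
  "settings": {
    "index.knn": true
  },
  "mappings": {
    "properties": {
      "passage_text": { "type": "text" },
      "passage_embedding": {
        "type": "knn_vector",
        "dimension": 768,
        "method": {
          "name": "hnsw",
          "engine": "lucene"
        }
      }
    }
  }
}
```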
Neural search supports the following search types:
- Text search: Uses dense retrieval based on text embedding models to search text data.
- Multimodal search: Uses vision-language embedding models to search text and image data.
- Sparse search: Uses sparse retrieval based on sparse embedding models to search text data.
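For example, a text search with a deployed text embedding model uses the `neural` query clause. The index name, field name, and model ID below are placeholders for values from your own cluster:

```json
GET /my-nlp-index/_search
{
  "query": {
    "neural": {
      "passage_embedding": {
        "query_text": "wild west",
        "model_id": "<model ID>",
        "k": 10
      }
    }
  }
}
```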
## Embedding models
Before using neural search, you must set up a machine learning (ML) model. You can use a pretrained model provided by OpenSearch, upload your own model to OpenSearch, or connect to a foundation model hosted on an external platform. For more information about ML models, see Using custom models within OpenSearch and ML extensibility. For a step-by-step tutorial, see Semantic search.
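As a sketch of the first option, you can register one of the OpenSearch-provided pretrained models through the ML Commons register API. The model name and version here are examples; check the pretrained model list for the models and versions available to your cluster:

```json
POST /_plugins/_ml/models/_register
{
  "name": "huggingface/sentence-transformers/msmarco-distilbert-base-tas-b",
  "version": "1.0.1",
  "model_format": "TORCH_SCRIPT"
}
```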
When you ingest documents into an index, they are first passed through the ML model, which generates vector embeddings for the document fields. When you send a search request, the query text or image is also passed through the ML model, which generates the corresponding vector embeddings. Then neural search performs a vector search on the embeddings and returns matching documents.
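To pass documents through the model at ingestion time, you attach a `text_embedding` processor to an ingest pipeline. The pipeline name, model ID, and field names below are illustrative; the `field_map` tells the processor which text field to embed and which vector field to write:

```json
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "Generates embeddings for the passage_text field",
  "processors": [
    {
      "text_embedding": {
        "model_id": "<model ID>",
        "field_map": {
          "passage_text": "passage_embedding"
        }
      }
    }
  ]
}
```

Setting this pipeline as the index's `default_pipeline` ensures every ingested document receives its embedding automatically.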