---
layout: default
title: Multimodal search
nav_order: 20
has_children: false
parent: Neural search
---
# Multimodal search
Introduced 2.11
{: .label .label-purple }
Use multimodal search to search text and image data. In neural search, multimodal search is facilitated by multimodal embedding models.

**PREREQUISITE**<br>
Before using multimodal search, you must set up a multimodal embedding model. For more information, see Using custom models within OpenSearch.
{: .note}
## Using multimodal search

To use neural search with text and image embeddings, follow these steps:

1. Create an ingest pipeline.
1. Create an index for ingestion.
1. Ingest documents into the index.
1. Search the index using neural search.
## Step 1: Create an ingest pipeline

To generate vector embeddings, you need to create an ingest pipeline that contains a `text_image_embedding` processor, which will convert the text or image in a document field to vector embeddings. The processor's `field_map` determines the text and image fields from which to generate vector embeddings and the output vector field in which to store the embeddings.

The following example request creates an ingest pipeline in which the text from `image_description` and the image from `image_binary` will be converted into vector embeddings, and the embeddings will be stored in `vector_embedding`:
```json
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "A text/image embedding pipeline",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "-fYQAosBQkdnhhBsK593",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    }
  ]
}
```
{% include copy-curl.html %}
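If you are scripting cluster setup, you can build the same pipeline body programmatically before sending it with your HTTP client of choice. The following is a minimal sketch (the `build_pipeline_body` helper is hypothetical, not part of any OpenSearch client; the model ID is the example value from this page):

```python
import json

def build_pipeline_body(model_id, text_field, image_field, embedding_field):
    """Return the request body for PUT /_ingest/pipeline/<name>."""
    return {
        "description": "A text/image embedding pipeline",
        "processors": [
            {
                "text_image_embedding": {
                    "model_id": model_id,
                    "embedding": embedding_field,
                    "field_map": {
                        "text": text_field,
                        "image": image_field,
                    },
                }
            }
        ],
    }

body = build_pipeline_body(
    "-fYQAosBQkdnhhBsK593",        # substitute your own model ID
    "image_description", "image_binary", "vector_embedding")
payload = json.dumps(body)          # send as the PUT request body
```

Keeping the field names in one place like this makes it easier to keep the pipeline consistent with the index mapping created in the next step.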
## Step 2: Create an index for ingestion

In order to use the text embedding processor defined in your pipeline, create a k-NN index, adding the pipeline created in the previous step as the default pipeline. Ensure that the fields defined in the `field_map` are mapped as the correct types. Continuing with the example, the `vector_embedding` field must be mapped as a k-NN vector with a dimension that matches the model dimension. Similarly, the `image_description` field should be mapped as `text`, and the `image_binary` field should be mapped as `binary`.
The following example request creates a k-NN index that is set up with a default ingest pipeline:
```json
PUT /my-nlp-index
{
  "settings": {
    "index.knn": true,
    "default_pipeline": "nlp-ingest-pipeline",
    "number_of_shards": 2
  },
  "mappings": {
    "properties": {
      "vector_embedding": {
        "type": "knn_vector",
        "dimension": 1024,
        "method": {
          "name": "hnsw",
          "engine": "lucene",
          "parameters": {}
        }
      },
      "image_description": {
        "type": "text"
      },
      "image_binary": {
        "type": "binary"
      }
    }
  }
}
```
{% include copy-curl.html %}
For more information about creating a k-NN index and its supported methods, see k-NN index.
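A mismatch between the pipeline's `field_map` and the index mapping is a common setup error. The following sketch (a local sanity check, not an OpenSearch API call) verifies that every field the pipeline references is mapped and that the embedding field is a `knn_vector`, using the example mapping from this page:

```python
# Mapping and field_map values copied from the examples on this page.
mappings = {
    "properties": {
        "vector_embedding": {"type": "knn_vector", "dimension": 1024},
        "image_description": {"type": "text"},
        "image_binary": {"type": "binary"},
    }
}
field_map = {"text": "image_description", "image": "image_binary"}
embedding_field = "vector_embedding"

props = mappings["properties"]
# Every source field and the output field must appear in the mapping.
missing = [f for f in list(field_map.values()) + [embedding_field]
           if f not in props]
assert not missing, f"unmapped fields: {missing}"
# The output field must be a k-NN vector whose dimension matches the model.
assert props[embedding_field]["type"] == "knn_vector"
model_dimension = 1024  # the output dimension of your embedding model
assert props[embedding_field]["dimension"] == model_dimension
```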
## Step 3: Ingest documents into the index

To ingest documents into the index created in the previous step, send the following request:

```json
PUT /my-nlp-index/_doc/1
{
  "image_description": "Orange table",
  "image_binary": "iVBORw0KGgoAAAANSUI..."
}
```
{% include copy-curl.html %}
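The `image_binary` value is the Base64 encoding of the raw image bytes. A minimal sketch of producing it in Python (in practice you would read the bytes from your image file; the literal below is just the standard 8-byte PNG signature used as a stand-in):

```python
import base64

# In practice: raw = open("my_image.png", "rb").read()
raw = b"\x89PNG\r\n\x1a\n"  # PNG file signature, used here as sample bytes
image_binary = base64.b64encode(raw).decode("utf-8")
# PNG data always encodes to a string starting with "iVBORw0K",
# which is why the example value on this page begins that way.
```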
Before the document is ingested into the index, the ingest pipeline runs the `text_image_embedding` processor on the document, generating vector embeddings for the `image_description` and `image_binary` fields. In addition to the original `image_description` and `image_binary` fields, the indexed document includes the `vector_embedding` field, which contains the combined vector embeddings.
## Step 4: Search the index using neural search

To perform vector search on your index, use the `neural` query clause either in the k-NN plugin API or Query DSL queries. You can refine the results by using a k-NN search filter. You can search by text, image, or both text and image.
The following example request uses a neural query to search for text and image:
```json
GET /my-nlp-index/_search
{
  "size": 10,
  "query": {
    "neural": {
      "vector_embedding": {
        "query_text": "Orange table",
        "query_image": "iVBORw0KGgoAAAANSUI...",
        "model_id": "-fYQAosBQkdnhhBsK593",
        "k": 5
      }
    }
  }
}
```
{% include copy-curl.html %}
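Because the `neural` clause accepts `query_text`, `query_image`, or both, a small builder keeps the three cases consistent. The following sketch is a hypothetical helper, not part of any OpenSearch client; field and model values follow the example above:

```python
def build_neural_query(field, model_id, k=5, query_text=None, query_image=None):
    """Return a _search request body containing a neural query clause."""
    clause = {"model_id": model_id, "k": k}
    if query_text is not None:
        clause["query_text"] = query_text      # text-based search input
    if query_image is not None:
        clause["query_image"] = query_image    # Base64-encoded image input
    return {"size": 10, "query": {"neural": {field: clause}}}

# Text-only search:
text_only = build_neural_query("vector_embedding", "-fYQAosBQkdnhhBsK593",
                               query_text="Orange table")
# Combined text and image search, as in the example request above:
both = build_neural_query("vector_embedding", "-fYQAosBQkdnhhBsK593",
                          query_text="Orange table",
                          query_image="iVBORw0KGgoAAAANSUI...")
```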
To eliminate passing the model ID with each neural query request, you can set a default model on a k-NN index or a field. To learn more, see Setting a default model on an index or field.