Add processor to processor names and links to further info (#5786)

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
kolchfa-aws 2023-12-05 14:49:46 -05:00 committed by GitHub
parent 21f8a61557
commit 4acbc30746
16 changed files with 29 additions and 23 deletions


@@ -30,7 +30,7 @@ OpenSearch.FailoverNoun = YES
 OpenSearch.FailoverVerb = YES
 OpenSearch.FutureTense = NO
 OpenSearch.HeadingAcronyms = YES
-OpenSearch.HeadingCapitalization = NO
+OpenSearch.HeadingCapitalization = YES
 OpenSearch.HeadingColon = YES
 OpenSearch.HeadingPunctuation = YES
 OpenSearch.Inclusive = YES


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/append/
 ---
-# Append
+# Append processor
 **Introduced 1.0**
 {: .label .label-purple }


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/bytes/
 ---
-# Bytes
+# Bytes processor
 **Introduced 1.0**
 {: .label .label-purple }


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/convert/
 ---
-# Convert
+# Convert processor
 **Introduced 1.0**
 {: .label .label-purple }


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/csv/
 ---
-# CSV
+# CSV processor
 **Introduced 1.0**
 {: .label .label-purple }


@@ -5,7 +5,7 @@ parent: Ingest processors
 nav_order: 55
 ---
-# Date index name
+# Date index name processor
 The `date_index_name` processor is used to point documents to the correct time-based index based on the date or timestamp field within the document. The processor sets the `_index` metadata field to a [date math]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/date/#date-math) index name expression. Then the processor fetches the date or timestamp from the `field` field in the document being processed and formats it into a date math index name expression. The extracted date, `index_name_prefix` value, and `date_rounding` value are then combined to create the date math index expression. For example, if the `field` field contains the value `2023-10-30T12:43:29.000Z` and `index_name_prefix` is set to `week_index-` and `date_rounding` is set to `w`, then the date math index name expression is `week_index-2023-10-30`. You can use the `date_formats` field to specify how the date in the date math index expression should be formatted.
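The example described above can be sketched as a pipeline definition. The pipeline name and the `timestamp` field name are hypothetical, and the `date_formats` pattern is an assumed Java date format matching the sample value; only `index_name_prefix` and `date_rounding` come from the text:

```json
PUT _ingest/pipeline/weekly-index-pipeline
{
  "processors": [
    {
      "date_index_name": {
        "field": "timestamp",
        "index_name_prefix": "week_index-",
        "date_rounding": "w",
        "date_formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSXX"]
      }
    }
  ]
}
```

A document ingested through this pipeline with `"timestamp": "2023-10-30T12:43:29.000Z"` would be routed to a weekly index derived from that date, per the paragraph above.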


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/date/
 ---
-# Date
+# Date processor
 **Introduced 1.0**
 {: .label .label-purple }


@@ -6,7 +6,7 @@ grand_parent: Ingest pipelines
 nav_order: 140
 ---
-# Grok
+# Grok processor
 The `grok` processor is used to parse and structure unstructured data using pattern matching. You can use the `grok` processor to extract fields from log messages, web server access logs, application logs, and other log data that follows a consistent format.
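A minimal sketch of the processor described above, extracting fields from a hypothetical access-log line held in a `message` field. The pipeline name and field names are assumptions; `IPORHOST`, `WORD`, and `URIPATHPARAM` are standard grok pattern names:

```json
PUT _ingest/pipeline/log-parsing-pipeline
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IPORHOST:client_ip} %{WORD:http_method} %{URIPATHPARAM:request}"]
      }
    }
  ]
}
```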


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/ip2geo/
 ---
-# IP2Geo
+# IP2Geo processor
 **Introduced 2.10**
 {: .label .label-purple }


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/lowercase/
 ---
-# Lowercase
+# Lowercase processor
 **Introduced 1.0**
 {: .label .label-purple }


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/remove/
 ---
-# Remove
+# Remove processor
 **Introduced 1.0**
 {: .label .label-purple }


@@ -7,9 +7,9 @@ redirect_from:
 - /api-reference/ingest-apis/processors/sparse-encoding/
 ---
-# Sparse encoding
-The `sparse_encoding` processor is used to generate a sparse vector/token and weights from text fields for [neural search]({{site.url}}{{site.baseurl}}/search-plugins/neural-search/) using sparse retrieval.
+# Sparse encoding processor
+The `sparse_encoding` processor is used to generate a sparse vector/token and weights from text fields for [neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/) using sparse retrieval.
 **PREREQUISITE**<br>
 Before using the `sparse_encoding` processor, you must set up a machine learning (ML) model. For more information, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
@@ -140,6 +140,8 @@ The response confirms that in addition to the `passage_text` field, the processo
 }
 ```
+Once you have created an ingest pipeline, you need to create an index for ingestion and ingest documents into the index. To learn more, see [Step 2: Create an index for ingestion]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/#step-2-create-an-index-for-ingestion) and [Step 3: Ingest documents into the index]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/#step-3-ingest-documents-into-the-index) of [Neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/).
+
 ---
 ## Next steps
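The `sparse_encoding` processor changed above can be sketched as follows. The pipeline name is hypothetical and the model ID is a placeholder for a deployed sparse encoding model; `passage_text` is the source field named in the diff:

```json
PUT _ingest/pipeline/sparse-encoding-pipeline
{
  "processors": [
    {
      "sparse_encoding": {
        "model_id": "your-sparse-model-id",
        "field_map": {
          "passage_text": "passage_embedding"
        }
      }
    }
  ]
}
```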


@@ -7,9 +7,9 @@ redirect_from:
 - /api-reference/ingest-apis/processors/text-embedding/
 ---
-# Text embedding
-The `text_embedding` processor is used to generate vector embeddings from text fields for [neural search]({{site.url}}{{site.baseurl}}/search-plugins/neural-search/).
+# Text embedding processor
+The `text_embedding` processor is used to generate vector embeddings from text fields for [semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/).
 **PREREQUISITE**<br>
 Before using the `text_embedding` processor, you must set up a machine learning (ML) model. For more information, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
@@ -121,9 +121,11 @@ The response confirms that in addition to the `passage_text` field, the processo
 }
 ```
+Once you have created an ingest pipeline, you need to create an index for ingestion and ingest documents into the index. To learn more, see [Step 2: Create an index for ingestion]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/#step-2-create-an-index-for-ingestion) and [Step 3: Ingest documents into the index]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/#step-3-ingest-documents-into-the-index) of [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/).
+
 ## Next steps
 - To learn how to use the `neural` query for text search, see [Neural query]({{site.url}}{{site.baseurl}}/query-dsl/specialized/neural/).
-- To learn more about neural text search, see [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/).
-To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
+- To learn more about semantic search, see [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/).
+- To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
 - For a comprehensive example, see [Neural search tutorial]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
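A minimal sketch of the `text_embedding` processor changed above. The pipeline name is hypothetical and the model ID is a placeholder for a deployed text embedding model; `passage_text` is the source field named in the diff:

```json
PUT _ingest/pipeline/text-embedding-pipeline
{
  "processors": [
    {
      "text_embedding": {
        "model_id": "your-model-id",
        "field_map": {
          "passage_text": "passage_embedding"
        }
      }
    }
  ]
}
```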


@@ -7,9 +7,9 @@ redirect_from:
 - /api-reference/ingest-apis/processors/text-image-embedding/
 ---
-# Text/image embedding
-The `text_image_embedding` processor is used to generate combined vector embeddings from text and image fields for [multimodal neural search]({{site.url}}{{site.baseurl}}/search-plugins/neural-multimodal-search/).
+# Text/image embedding processor
+The `text_image_embedding` processor is used to generate combined vector embeddings from text and image fields for [multimodal neural search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
 **PREREQUISITE**<br>
 Before using the `text_image_embedding` processor, you must set up a machine learning (ML) model. For more information, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
@@ -131,9 +131,11 @@ The response confirms that in addition to the `image_description` and `image_bin
 }
 ```
+Once you have created an ingest pipeline, you need to create an index for ingestion and ingest documents into the index. To learn more, see [Step 2: Create an index for ingestion]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/#step-2-create-an-index-for-ingestion) and [Step 3: Ingest documents into the index]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/#step-3-ingest-documents-into-the-index) of [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
+
 ## Next steps
 - To learn how to use the `neural` query for a multimodal search, see [Neural query]({{site.url}}{{site.baseurl}}/query-dsl/specialized/neural/).
-- To learn more about multimodal neural search, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
-To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
+- To learn more about multimodal search, see [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
+- To learn more about using models in OpenSearch, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
 - For a comprehensive example, see [Neural search tutorial]({{site.url}}{{site.baseurl}}/search-plugins/neural-search-tutorial/).
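A minimal sketch of the `text_image_embedding` processor changed above. The pipeline name is hypothetical and the model ID is a placeholder for a deployed multimodal embedding model; `image_description` and `image_binary` are the source fields named in the diff:

```json
PUT _ingest/pipeline/multimodal-embedding-pipeline
{
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "your-model-id",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    }
  ]
}
```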


@@ -7,7 +7,7 @@ redirect_from:
 - /api-reference/ingest-apis/processors/uppercase/
 ---
-# Uppercase
+# Uppercase processor
 **Introduced 1.0**
 {: .label .label-purple }


@@ -10,7 +10,7 @@ nav_order: 55
 Introduced 2.11
 {: .label .label-purple }
-Use the `neural_sparse` query for vector field search in [sparse neural search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/).
+Use the `neural_sparse` query for vector field search in [neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/).
 ## Request fields
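A minimal sketch of the `neural_sparse` query described above. The index name, vector field name (`passage_embedding`), query text, and model ID are all placeholders:

```json
GET my-index/_search
{
  "query": {
    "neural_sparse": {
      "passage_embedding": {
        "query_text": "classic cars",
        "model_id": "your-sparse-model-id"
      }
    }
  }
}
```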