[DOC] Edits to parameter table headings (#5838)
* Edits to parameter table headings

---------

Signed-off-by: Melissa Vagi <vagimeli@amazon.com>
parent 365d536da7
commit a2e69597c2
@@ -8,15 +8,14 @@ redirect_from:
---

# Append processor
**Introduced 1.0**
{: .label .label-purple }

The `append` processor is used to add values to a field:

- If the field is an array, the `append` processor appends the specified values to that array.
- If the field is a scalar field, the `append` processor converts it to an array and appends the specified values to that array.
- If the field does not exist, the `append` processor creates an array with the specified values.

### Syntax

The following is the syntax for the `append` processor:

```json
@@ -33,21 +32,21 @@ The following is the syntax for the `append` processor:

The following table lists the required and optional parameters for the `append` processor.

Parameter | Required/Optional | Description |
|-----------|-----------|-----------|
`field` | Required | The name of the field containing the data to be appended. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`value` | Required | The value to be appended. This can be a static value or a dynamic value derived from existing fields. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`description` | Optional | A brief description of the processor. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
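For instance, a minimal `append` processor definition using only the two required parameters might look like the following sketch (the field name and value echo the example described below; the configuration itself is illustrative, not the page's elided code block):

```json
{
  "append": {
    "field": "event_types",
    "value": ["page_view"]
  }
}
```

Because `value` accepts an array, several values can be appended in a single pass.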
## Using the processor

Follow these steps to use the processor in a pipeline.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `user-behavior`, that has one append processor. It appends the `page_view` of each new document ingested into OpenSearch to an array field named `event_types`:

@@ -67,7 +66,7 @@ PUT _ingest/pipeline/user-behavior
```
{% include copy-curl.html %}

**Step 2 (Optional): Test the pipeline**

It is recommended that you test your pipeline before you ingest documents.
{: .tip}

@@ -87,7 +86,7 @@ POST _ingest/pipeline/user-behavior/_simulate
```
{% include copy-curl.html %}

**Response**

The following response confirms that the pipeline is working as expected:

@@ -112,7 +111,7 @@ The following response confirms that the pipeline is working as expected:
}
```

**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:

@@ -123,7 +122,7 @@ PUT testindex1/_doc/1?pipeline=user-behavior
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@@ -8,12 +8,11 @@ redirect_from:
---

# Bytes processor
**Introduced 1.0**
{: .label .label-purple }

The `bytes` processor converts a human-readable byte value to its equivalent value in bytes. The field can be a scalar or an array. If the field is a scalar, the value is converted and stored in the field. If the field is an array, all values of the array are converted.

### Syntax

The following is the syntax for the `bytes` processor:

```json

@@ -29,22 +28,22 @@ The following is the syntax for the `bytes` processor:

The following table lists the required and optional parameters for the `bytes` processor.

Parameter | Required/Optional | Description |
|-----------|-----------|-----------|
`field` | Required | The name of the field containing the data to be converted. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`description` | Optional | A brief description of the processor. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`ignore_missing` | Optional | Specifies whether the processor should ignore documents that do not contain the specified field. If set to `true`, the processor does not modify the document if the field does not exist or is `null`. Default is `false`. |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
`target_field` | Optional | The name of the field in which to store the parsed data. If not specified, the value will be stored in place in the `field` field. Default is `field`. |
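To make these parameters concrete, a `bytes` processor that converts a human-readable `file_size` field (for example, `"2kb"`) and stores the result in `file_size_bytes` could be sketched as follows (field names mirror the example described below; the configuration itself is illustrative):

```json
{
  "bytes": {
    "field": "file_size",
    "target_field": "file_size_bytes",
    "ignore_missing": true
  }
}
```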
## Using the processor

Follow these steps to use the processor in a pipeline.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `file_upload`, that has one `bytes` processor. It converts the `file_size` to its byte equivalent and stores it in a new field named `file_size_bytes`:

@@ -64,7 +63,7 @@ PUT _ingest/pipeline/file_upload
```
{% include copy-curl.html %}

**Step 2 (Optional): Test the pipeline**

It is recommended that you test your pipeline before you ingest documents.
{: .tip}

@@ -89,7 +88,7 @@ POST _ingest/pipeline/file_upload/_simulate
```
{% include copy-curl.html %}

**Response**

The following response confirms that the pipeline is working as expected:

@@ -116,7 +115,7 @@ The following response confirms that the pipeline is working as expected:
}
```

**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:

@@ -128,7 +127,7 @@ PUT testindex1/_doc/1?pipeline=file_upload
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@@ -8,12 +8,11 @@ redirect_from:
---

# Convert processor
**Introduced 1.0**
{: .label .label-purple }

The `convert` processor converts a field in a document to a different type, for example, a string to an integer or an integer to a string. For an array field, all values in the array are converted.

## Syntax

The following is the syntax for the `convert` processor:

```json

@@ -30,23 +29,23 @@ The following is the syntax for the `convert` processor:

The following table lists the required and optional parameters for the `convert` processor.

Parameter | Required/Optional | Description |
|-----------|-----------|-----------|
`field` | Required | The name of the field containing the data to be converted. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`type` | Required | The type to convert the field value to. The supported types are `integer`, `long`, `float`, `double`, `string`, `boolean`, `ip`, and `auto`. If the `type` is `boolean`, the value is set to `true` if the field value is a string `true` (ignoring case) and to `false` if the field value is a string `false` (ignoring case). If the value is not one of the allowed values, an error will occur. |
`description` | Optional | A brief description of the processor. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`ignore_missing` | Optional | Specifies whether the processor should ignore documents that do not contain the specified field. If set to `true`, the processor does not modify the document if the field does not exist or is `null`. Default is `false`. |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
`target_field` | Optional | The name of the field in which to store the parsed data. If not specified, the value will be stored in the `field` field. Default is `field`. |
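As an illustration of these parameters, a `convert` processor that parses a `price` string into a floating-point number stored in `price_float` could be sketched as follows (field names mirror the example described below; the configuration itself is illustrative):

```json
{
  "convert": {
    "field": "price",
    "type": "float",
    "target_field": "price_float",
    "ignore_missing": true
  }
}
```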
## Using the processor

Follow these steps to use the processor in a pipeline.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `convert-price`, that converts `price` to a floating-point number, stores the converted value in the `price_float` field, and sets the value to `0` if it is less than `0`:

@@ -74,7 +73,7 @@ PUT _ingest/pipeline/convert-price
```
{% include copy-curl.html %}

**Step 2 (Optional): Test the pipeline**

It is recommended that you test your pipeline before you ingest documents.
{: .tip}

@@ -97,7 +96,7 @@ POST _ingest/pipeline/convert-price/_simulate
```
{% include copy-curl.html %}

**Response**

The following example response confirms that the pipeline is working as expected:

@@ -121,7 +120,7 @@ The following example response confirms that the pipeline is working as expected
}
```

**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:

@@ -133,7 +132,7 @@ PUT testindex1/_doc/1?pipeline=convert-price
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@@ -8,12 +8,11 @@ redirect_from:
---

# CSV processor
**Introduced 1.0**
{: .label .label-purple }

The `csv` processor is used to parse CSVs and store them as individual fields in a document. The processor ignores empty fields.

## Syntax

The following is the syntax for the `csv` processor:

```json

@@ -30,26 +29,26 @@ The following is the syntax for the `csv` processor:

The following table lists the required and optional parameters for the `csv` processor.

Parameter | Required/Optional | Description |
|-----------|-----------|-----------|
`field` | Required | The name of the field containing the data to be converted. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`target_fields` | Required | The name of the field in which to store the parsed data. |
`description` | Optional | A brief description of the processor. |
`empty_value` | Optional | The value used to fill empty fields. If not provided, empty fields are skipped. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`ignore_missing` | Optional | Specifies whether the processor should ignore documents that do not contain the specified field. If set to `true`, the processor does not modify the document if the field does not exist or is `null`. Default is `false`. |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`quote` | Optional | The character used to quote fields in the CSV data. Default is `"`. |
`separator` | Optional | The delimiter used to separate the fields in the CSV data. Default is `,`. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
`trim` | Optional | If set to `true`, the processor trims white space from the beginning and end of the text. Default is `false`. |
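For instance, a `csv` processor that splits a `resource_usage` field into three target fields, trimming surrounding white space, could be sketched as follows (field names mirror the example described below; the configuration itself is illustrative):

```json
{
  "csv": {
    "field": "resource_usage",
    "target_fields": ["cpu_usage", "memory_usage", "disk_usage"],
    "separator": ",",
    "trim": true
  }
}
```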
## Using the processor

Follow these steps to use the processor in a pipeline.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `csv-processor`, that splits `resource_usage` into three new fields named `cpu_usage`, `memory_usage`, and `disk_usage`:

@@ -70,7 +69,7 @@ PUT _ingest/pipeline/csv-processor
```
{% include copy-curl.html %}

**Step 2 (Optional): Test the pipeline**

It is recommended that you test your pipeline before you ingest documents.
{: .tip}

@@ -96,7 +95,7 @@ POST _ingest/pipeline/csv-processor/_simulate
```
{% include copy-curl.html %}

**Response**

The following example response confirms that the pipeline is working as expected:

@@ -122,7 +121,7 @@ The following example response confirms that the pipeline is working as expected
}
```

**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:

@@ -134,7 +133,7 @@ PUT testindex1/_doc/1?pipeline=csv-processor
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@@ -8,12 +8,11 @@ redirect_from:
---

# Date processor
**Introduced 1.0**
{: .label .label-purple }

The `date` processor is used to parse dates from document fields and to add the parsed data to a new field. By default, the parsed data is stored in the `@timestamp` field.

## Syntax

The following is the syntax for the `date` processor:

```json

@@ -30,25 +29,25 @@ The following is the syntax for the `date` processor:

The following table lists the required and optional parameters for the `date` processor.

Parameter | Required/Optional | Description |
|-----------|-----------|-----------|
`field` | Required | The name of the field containing the data to be converted. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`formats` | Required | An array of the expected date formats. Can be a [date format]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/date/#formats) or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N. |
`description` | Optional | A brief description of the processor. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`locale` | Optional | The locale to use when parsing the date. Default is `ENGLISH`. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`output_format` | Optional | The [date format]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/date/#formats) to use for the target field. Default is `yyyy-MM-dd'T'HH:mm:ss.SSSZZ`. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
`target_field` | Optional | The name of the field in which to store the parsed data. Default target field is `@timestamp`. |
`timezone` | Optional | The time zone to use when parsing the date. Default is `UTC`. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
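For example, a `date` processor that parses a hypothetical `date_european` field in day-first format and writes a US-style date to `date_us` could be sketched as follows (the field name and formats are illustrative assumptions, not the page's elided example):

```json
{
  "date": {
    "field": "date_european",
    "formats": ["dd/MM/yyyy", "ISO8601"],
    "target_field": "date_us",
    "output_format": "MM/dd/yyyy",
    "timezone": "UTC"
  }
}
```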
## Using the processor

Follow these steps to use the processor in a pipeline.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `date-output-format`, that uses the `date` processor to convert from European date format to US date format, adding the new field `date_us` with the desired `output_format`:

@@ -71,7 +70,7 @@ PUT /_ingest/pipeline/date-output-format
```
{% include copy-curl.html %}

**Step 2 (Optional): Test the pipeline**

It is recommended that you test your pipeline before you ingest documents.
{: .tip}

@@ -95,7 +94,7 @@ POST _ingest/pipeline/date-output-format/_simulate
```
{% include copy-curl.html %}

**Response**

The following example response confirms that the pipeline is working as expected:

@@ -119,7 +118,7 @@ The following example response confirms that the pipeline is working as expected
}
```

**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:

@@ -131,7 +130,7 @@ PUT testindex1/_doc/1?pipeline=date-output-format
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@@ -16,7 +16,7 @@ The `grok` processor uses a set of predefined patterns to match parts of the inp

The `grok` processor is built on the [Oniguruma regular expression library](https://github.com/kkos/oniguruma/blob/master/doc/RE) and supports all the patterns from that library. You can use the [Grok Debugger](https://grokdebugger.com/) tool to test and debug your grok expressions.

## Syntax

The following is the basic syntax for the `grok` processor:

@@ -34,24 +34,24 @@ The following is the basic syntax for the `grok` processor:

To configure the `grok` processor, you have various options that allow you to define patterns, match specific keys, and control the processor's behavior. The following table lists the required and optional parameters for the `grok` processor.

Parameter | Required/Optional | Description |
|-----------|-----------|-----------|
`field` | Required | The name of the field containing the text to be parsed. |
`patterns` | Required | A list of grok expressions used to match and extract named captures. The first matching expression in the list is returned. |
`pattern_definitions` | Optional | A dictionary of pattern names and pattern tuples used to define custom patterns for the current processor. If a pattern matches an existing name, it overrides the pre-existing definition. |
`trace_match` | Optional | When the parameter is set to `true`, the processor adds a field named `_grok_match_index` to the processed document. This field contains the index of the pattern within the `patterns` array that successfully matched the document. This information can be useful for debugging and understanding which pattern was applied to the document. Default is `false`. |
`description` | Optional | A brief description of the processor. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`ignore_missing` | Optional | Specifies whether the processor should ignore documents that do not contain the specified field. If set to `true`, the processor does not modify the document if the field does not exist or is `null`. Default is `false`. |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
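As a sketch of how these parameters combine, the following `grok` processor extracts three named captures from a `message` field (the pattern and assumed log format are illustrative, not the page's elided example):

```json
{
  "grok": {
    "field": "message",
    "patterns": ["%{IPORHOST:clientip} %{HTTPDATE:timestamp} %{NUMBER:response_status:int}"]
  }
}
```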
## Creating a pipeline

The following steps guide you through creating an [ingest pipeline]({{site.url}}{{site.baseurl}}/ingest-pipelines/index/) with the `grok` processor.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `log_line`. It extracts fields from the `message` field of the document using the specified pattern. In this case, it extracts the `clientip`, `timestamp`, and `response_status` fields:

@@ -92,7 +92,7 @@ POST _ingest/pipeline/log_line/_simulate
```
{% include copy-curl.html %}

**Response**

The following response confirms that the pipeline is working as expected:

@@ -118,7 +118,7 @@ The following response confirms that the pipeline is working as expected:
}
```

**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:

@@ -130,7 +130,7 @@ PUT testindex1/_doc/1?pipeline=log_line
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@@ -44,7 +44,7 @@ OpenSearch provides the following endpoints for GeoLite2 City, GeoLite2 Country,

If an OpenSearch cluster cannot update a data source from the endpoints within 30 days, the cluster does not add GeoIP data to the documents and instead adds `"error":"ip2geo_data_expired"`.

### Data source options

The following table lists the data source options for the `ip2geo` processor.

@@ -66,7 +66,7 @@ PUT /_plugins/geospatial/ip2geo/datasource/my-datasource

A `true` response means that the request was successful and that the server was able to process the request. A `false` response indicates that you should check the request to make sure it is valid, check the URL to make sure it is correct, or try again.

### Sending a GET request

To get information about one or more IP2Geo data sources, send a GET request:

@@ -113,7 +113,7 @@ You'll receive the following response:
}
```

### Updating an IP2Geo data source

See the Creating the IP2Geo data source section for a list of endpoints and request field descriptions.

@@ -128,7 +128,7 @@ PUT /_plugins/geospatial/ip2geo/datasource/my-datasource/_settings
```
{% include copy-curl.html %}

### Deleting the IP2Geo data source

To delete the IP2Geo data source, you must first delete all processors associated with the data source. Otherwise, the request fails.

@@ -141,7 +141,11 @@ DELETE /_plugins/geospatial/ip2geo/datasource/my-datasource

## Creating the pipeline

Once the data source is created, you can create the pipeline.

## Syntax

The following is the syntax for the `ip2geo` processor:

```json
{

@@ -153,23 +157,23 @@ Once the data source is created, you can create the pipeline. The following is t
```
{% include copy-curl.html %}

## Configuration parameters

The following table lists the required and optional parameters for the `ip2geo` processor.

| Parameter | Required/Optional | Description |
|------|----------|-------------|
| `datasource` | Required | The data source name to use to retrieve geographical information. |
| `field` | Required | The field containing the IP address for geographical lookup. |
| `ignore_missing` | Optional | Specifies whether the processor should ignore documents that do not contain the specified field. If set to `true`, the processor does not modify the document if the field does not exist or is `null`. Default is `false`. |
| `properties` | Optional | The field that controls which properties are added to `target_field` from `datasource`. Default is all the fields in `datasource`. |
| `target_field` | Optional | The field containing the geographical information retrieved from the data source. Default is `ip2geo`. |
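Putting these parameters together, an `ip2geo` processor that looks up an `ip` field against the `my-datasource` data source created above could be sketched as follows (the `properties` values are illustrative assumptions):

```json
{
  "ip2geo": {
    "field": "ip",
    "datasource": "my-datasource",
    "properties": ["country_name", "city_name"],
    "target_field": "ip2geo"
  }
}
```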
## Using the processor

Follow these steps to use the processor in a pipeline.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `my-pipeline`, that converts the IP address to geographical information:

@@ -189,7 +193,7 @@ PUT /_ingest/pipeline/my-pipeline
```
{% include copy-curl.html %}

**Step 2 (Optional): Test the pipeline**

{::nomarkdown}<img src="{{site.url}}{{site.baseurl}}/images/icons/info-icon.png" class="inline-icon" alt="info icon"/>{:/} **NOTE**<br>It is recommended that you test your pipeline before you ingest documents.
{: .note}

@@ -211,7 +215,7 @@ POST _ingest/pipeline/my-pipeline/_simulate
}
```

**Response**

The following response confirms that the pipeline is working as expected:

@@ -240,7 +244,7 @@ The following response confirms that the pipeline is working as expected:
```
{% include copy-curl.html %}

**Step 3: Ingest a document**

The following query ingests a document into an index named `my-index`:

@@ -252,7 +256,7 @@ PUT /my-index/_doc/my-id?pipeline=ip2geo
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@@ -8,12 +8,11 @@ redirect_from:
---

# Lowercase processor
**Introduced 1.0**
{: .label .label-purple }

The `lowercase` processor converts all the text in a specific field to lowercase letters.

## Syntax

The following is the syntax for the `lowercase` processor:

```json

@@ -25,7 +24,7 @@ The following is the syntax for the `lowercase` processor:
```
{% include copy-curl.html %}

## Configuration parameters

The following table lists the required and optional parameters for the `lowercase` processor.

@@ -33,10 +32,10 @@ The following table lists the required and optional parameters for the `lowercas
|---|---|---|
`field` | Required | The name of the field containing the data to be converted. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`description` | Optional | A brief description of the processor. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`ignore_missing` | Optional | Specifies whether the processor should ignore documents that do not contain the specified field. If set to `true`, the processor does not modify the document if the field does not exist or is `null`. Default is `false`. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
`target_field` | Optional | The name of the field in which to store the parsed data. Default is `field`. By default, `field` is updated in place. |
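For example, a `lowercase` processor that lowercases a `title` field while keeping the original value in place could be sketched as follows (the target field name is an illustrative assumption):

```json
{
  "lowercase": {
    "field": "title",
    "target_field": "title_lowercase",
    "ignore_missing": true
  }
}
```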
@@ -44,7 +43,7 @@ The following table lists the required and optional parameters for the `lowercas

Follow these steps to use the processor in a pipeline.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `lowercase-title`, that uses the `lowercase` processor to lowercase the `title` field of a document:

@@ -63,7 +62,7 @@ PUT _ingest/pipeline/lowercase-title
```
{% include copy-curl.html %}

**Step 2 (Optional): Test the pipeline**

It is recommended that you test your pipeline before you ingest documents.
{: .tip}

@@ -86,7 +85,7 @@ POST _ingest/pipeline/lowercase-title/_simulate
```
{% include copy-curl.html %}

**Response**

The following example response confirms that the pipeline is working as expected:

@@ -109,7 +108,7 @@ The following example response confirms that the pipeline is working as expected
}
```

**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:

@@ -121,7 +120,7 @@ PUT testindex1/_doc/1?pipeline=lowercase-title
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@@ -8,12 +8,11 @@ redirect_from:
---

# Remove processor
**Introduced 1.0**
{: .label .label-purple }

The `remove` processor is used to remove a field from a document.

## Syntax

The following is the syntax for the `remove` processor:

```json

@@ -25,24 +24,24 @@ The following is the syntax for the `remove` processor:
```
{% include copy-curl.html %}

## Configuration parameters

The following table lists the required and optional parameters for the `remove` processor.

| Parameter | Required/Optional | Description |
|---|---|---|
`field` | Required | The name of the field containing the data to be removed. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`description` | Optional | A brief description of the processor. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
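For instance, a `remove` processor that drops an `ip_address` field, tolerating documents where the field is absent, could be sketched as follows (the field name mirrors the example described below):

```json
{
  "remove": {
    "field": "ip_address",
    "ignore_failure": true
  }
}
```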
## Using the processor

Follow these steps to use the processor in a pipeline.

**Step 1: Create a pipeline**

The following query creates a pipeline, named `remove_ip`, that removes the `ip_address` field from a document:

@@ -61,7 +60,7 @@ PUT /_ingest/pipeline/remove_ip
```
{% include copy-curl.html %}

**Step 2 (Optional): Test the pipeline**

It is recommended that you test your pipeline before you ingest documents.
{: .tip}

@@ -85,7 +84,7 @@ POST _ingest/pipeline/remove_ip/_simulate
```
{% include copy-curl.html %}

**Response**

The following example response confirms that the pipeline is working as expected:

@@ -108,7 +107,7 @@ The following example response confirms that the pipeline is working as expected
}
```

**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:

@@ -121,7 +120,7 @@ PUT testindex1/_doc/1?pipeline=remove_ip
```
{% include copy-curl.html %}

**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
@ -29,11 +29,11 @@ The following is the syntax for the `sparse_encoding` processor:
```
{% include copy-curl.html %}
#### Configuration parameters
## Configuration parameters

The following table lists the required and optional parameters for the `sparse_encoding` processor.
| Name | Data type | Required | Description |
| Parameter | Data type | Required/Optional | Description |
|:---|:---|:---|:---|
`model_id` | String | Required | The ID of the model that will be used to generate the embeddings. The model must be deployed in OpenSearch before it can be used in neural search. For more information, see [Using custom models within OpenSearch]({{site.url}}{{site.baseurl}}/ml-commons-plugin/using-ml-models/) and [Neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/).
`field_map` | Object | Required | Contains key-value pairs that specify the mapping of a text field to a `rank_features` field.
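To illustrate the two required parameters, here is a minimal sketch of a `sparse_encoding` pipeline, with a hypothetical pipeline name, model ID, and field names:

```json
PUT /_ingest/pipeline/sparse-encoding-example
{
  "description": "Hypothetical sparse encoding pipeline",
  "processors": [
    {
      "sparse_encoding": {
        "model_id": "aVeif4oB5Vm0Tdw8zYO2",
        "field_map": {
          "passage_text": "passage_embedding"
        }
      }
    }
  ]
}
```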
@ -29,11 +29,11 @@ The following is the syntax for the `text_embedding` processor:
```
{% include copy-curl.html %}
#### Configuration parameters
## Configuration parameters

The following table lists the required and optional parameters for the `text_embedding` processor.
| Name | Data type | Required | Description |
| Parameter | Data type | Required/Optional | Description |
|:---|:---|:---|:---|
`model_id` | String | Required | The ID of the model that will be used to generate the embeddings. The model must be deployed in OpenSearch before it can be used in neural search. For more information, see [Using custom models within OpenSearch]({{site.url}}{{site.baseurl}}/ml-commons-plugin/using-ml-models/) and [Semantic search]({{site.url}}{{site.baseurl}}/search-plugins/semantic-search/).
`field_map` | Object | Required | Contains key-value pairs that specify the mapping of a text field to a vector field.
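A corresponding minimal sketch for the `text_embedding` processor, again with a hypothetical pipeline name, model ID, and field names:

```json
PUT /_ingest/pipeline/text-embedding-example
{
  "description": "Hypothetical text embedding pipeline",
  "processors": [
    {
      "text_embedding": {
        "model_id": "bQ1J8ooBpBj3wT4HVUsb",
        "field_map": {
          "passage_text": "passage_embedding"
        }
      }
    }
  ]
}
```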
@ -35,7 +35,7 @@ The following is the syntax for the `text_image_embedding` processor:

The following table lists the required and optional parameters for the `text_image_embedding` processor.
| Name | Data type | Required | Description |
| Parameter | Data type | Required/Optional | Description |
|:---|:---|:---|:---|
`model_id` | String | Required | The ID of the model that will be used to generate the embeddings. The model must be deployed in OpenSearch before it can be used in neural search. For more information, see [Using custom models within OpenSearch]({{site.url}}{{site.baseurl}}/ml-commons-plugin/using-ml-models/) and [Multimodal search]({{site.url}}{{site.baseurl}}/search-plugins/multimodal-search/).
`embedding` | String | Required | The name of the vector field in which to store the generated embeddings. A single embedding is generated for both `text` and `image` fields.
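A minimal sketch for the `text_image_embedding` processor. The pipeline name, model ID, and field names are hypothetical, and the `field_map` parameter (not shown in this hunk) is assumed to map the `text` and `image` inputs:

```json
PUT /_ingest/pipeline/text-image-embedding-example
{
  "description": "Hypothetical multimodal embedding pipeline",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "cS2P9ooBpBj3wT4HWabc",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    }
  ]
}
```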
@ -8,12 +8,11 @@ redirect_from:
---

# Uppercase processor
**Introduced 1.0**
{: .label .label-purple }

The `uppercase` processor converts all the text in a specific field to uppercase letters.
## Example
## Syntax

The following is the syntax for the `uppercase` processor:
```json
@ -25,26 +24,26 @@ The following is the syntax for the `uppercase` processor:
```
{% include copy-curl.html %}
#### Configuration parameters
## Configuration parameters

The following table lists the required and optional parameters for the `uppercase` processor.
| Name | Required | Description |
| Parameter | Required/Optional | Description |
|---|---|---|
`field` | Required | The name of the field to which the data should be appended. Supports template snippets. |
`field` | Required | The name of the field containing the data to be appended. Supports [template snippets]({{site.url}}{{site.baseurl}}/ingest-pipelines/create-ingest/#template-snippets). |
`description` | Optional | A brief description of the processor. |
`if` | Optional | A condition for running this processor. |
`if` | Optional | A condition for running the processor. |
`ignore_failure` | Optional | If set to `true`, failures are ignored. Default is `false`. |
`ignore_failure` | Optional | Specifies whether the processor continues execution even if it encounters errors. If set to `true`, failures are ignored. Default is `false`. |
`ignore_missing` | Optional | Specifies whether the processor should ignore documents that do not have the specified field. Default is `false`. |
`ignore_missing` | Optional | Specifies whether the processor should ignore documents that do not contain the specified field. If set to `true`, the processor does not modify the document if the field does not exist or is `null`. Default is `false`. |
`on_failure` | Optional | A list of processors to run if the processor fails. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging to distinguish between processors of the same type. |
`tag` | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type. |
`target_field` | Optional | The name of the field in which to store the parsed data. Default is `field`. By default, `field` is updated in place. |
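A minimal sketch (hypothetical pipeline and field names) showing how the optional `target_field` and `ignore_missing` parameters might be combined with the `uppercase` processor:

```json
PUT _ingest/pipeline/uppercase-example
{
  "description": "Hypothetical uppercase pipeline using optional parameters",
  "processors": [
    {
      "uppercase": {
        "field": "name",
        "target_field": "name_upper",
        "ignore_missing": true
      }
    }
  ]
}
```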
## Using the processor

Follow these steps to use the processor in a pipeline.
**Step 1: Create a pipeline.**
**Step 1: Create a pipeline**

The following query creates a pipeline, named `uppercase`, that converts the text in the `field` field to uppercase:
@ -63,7 +62,7 @@ PUT _ingest/pipeline/uppercase
{% include copy-curl.html %}
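The request body is elided in the hunk above. Based on the surrounding prose, which names the source field `field`, a minimal sketch might be:

```json
PUT _ingest/pipeline/uppercase
{
  "processors": [
    {
      "uppercase": {
        "field": "field"
      }
    }
  ]
}
```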
**Step 2 (Optional): Test the pipeline.**
**Step 2 (Optional): Test the pipeline**

It is recommended that you test your pipeline before you ingest documents.
{: .tip}
@ -86,7 +85,7 @@ POST _ingest/pipeline/uppercase/_simulate
```
{% include copy-curl.html %}
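The simulate request body is elided; a plausible test call with a hypothetical sample value might be:

```json
POST _ingest/pipeline/uppercase/_simulate
{
  "docs": [
    {
      "_index": "testindex1",
      "_id": "1",
      "_source": {
        "field": "hello world"
      }
    }
  ]
}
```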
#### Response
**Response**

The following example response confirms that the pipeline is working as expected:
@ -109,7 +108,7 @@ The following example response confirms that the pipeline is working as expected
}
```
**Step 3: Ingest a document.**
**Step 3: Ingest a document**

The following query ingests a document into an index named `testindex1`:
@ -121,7 +120,7 @@ PUT testindex1/_doc/1?pipeline=uppercase
```
{% include copy-curl.html %}
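The ingest request body is elided; a sketch with the same hypothetical value might be:

```json
PUT testindex1/_doc/1?pipeline=uppercase
{
  "field": "hello world"
}
```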
**Step 4 (Optional): Retrieve the document.**
**Step 4 (Optional): Retrieve the document**

To retrieve the document, run the following query:
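As in the earlier walkthrough, the elided retrieval query is presumably:

```json
GET testindex1/_doc/1
```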