diff --git a/docs/reference/analysis/tokenfilters.asciidoc b/docs/reference/analysis/tokenfilters.asciidoc
index 57c4341f28a..c13a820dd98 100644
--- a/docs/reference/analysis/tokenfilters.asciidoc
+++ b/docs/reference/analysis/tokenfilters.asciidoc
@@ -69,3 +69,4 @@ include::tokenfilters/common-grams-tokenfilter.asciidoc[]
 
 include::tokenfilters/normalization-tokenfilter.asciidoc[]
 
+include::tokenfilters/delimited-payload-tokenfilter.asciidoc[]
diff --git a/docs/reference/analysis/tokenfilters/delimited-payload-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/delimited-payload-tokenfilter.asciidoc
new file mode 100644
index 00000000000..293b51a0331
--- /dev/null
+++ b/docs/reference/analysis/tokenfilters/delimited-payload-tokenfilter.asciidoc
@@ -0,0 +1,14 @@
+[[analysis-delimited-payload-tokenfilter]]
+=== Delimited Payload Token Filter
+
+Named `delimited_payload_filter`. Splits each token into a token and a payload whenever the delimiter character is found.
+
+Example: "the|1 quick|2 fox|3" is split by default into the tokens `the`, `quick`, and `fox` with payloads `1`, `2`, and `3` respectively.
+
+Parameters:
+
+`delimiter`::
+    Character used for splitting the tokens. Default is `|`.
+
+`encoding`::
+    The type of the payload. `int` for integer, `float` for float, and `identity` for characters. Default is `float`.
\ No newline at end of file
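
---

As a sketch of how the filter documented in this patch could be used, the index-settings fragment below defines a custom analyzer that feeds whitespace-separated tokens through `delimited_payload_filter`. The analyzer and filter names (`payload_analyzer`, `my_payloads`) and the `encoding` choice are illustrative assumptions, not part of the patch; the settings layout follows the standard Elasticsearch `analysis` section.

```js
{
  "settings": {
    "analysis": {
      "analyzer": {
        "payload_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["my_payloads"]
        }
      },
      "filter": {
        "my_payloads": {
          "type": "delimited_payload_filter",
          "delimiter": "|",
          "encoding": "int"
        }
      }
    }
  }
}
```

With these settings, analyzing `"the|1 quick|2 fox|3"` with `payload_analyzer` would emit the tokens `the`, `quick`, and `fox`, each carrying its integer payload.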