=== Char Group Tokenizer

The `char_group` tokenizer breaks text into terms whenever it encounters a
character which is in a defined set. It is mostly useful for cases where a
simple custom tokenization is desired, and the overhead of the
<<analysis-pattern-tokenizer, `pattern` tokenizer>> is not acceptable.

=== Configuration

The `char_group` tokenizer accepts one parameter:

`tokenize_on_chars`::
    A string containing the characters to tokenize on. Whenever a character
    from this set is encountered, a new token is started. Escaped values such
    as `\\n` and `\\f` are also supported, as are `\\s` to represent
    whitespace, `\\d` to represent digits and `\\w` to represent letters.
    Defaults to an empty list.
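
For example, a minimal sketch of wiring a `char_group` tokenizer into a
custom analyzer via index settings might look like the following (the index
name `my_index` and the names `my_analyzer` and `my_tokenizer` are
illustrative, not part of the API):

```
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "char_group",
          "tokenize_on_chars": "\\s-:<>"
        }
      }
    }
  }
}
```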

=== Example output

```The 2 QUICK Brown-Foxes jumped over the lazy dog's bone for $2```

When the configuration `\\s-:<>` is used for `tokenize_on_chars`, the
above sentence would produce the following terms:

```[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone, for, $2 ]```
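
This output can be reproduced with the `_analyze` API by passing the
tokenizer definition inline (a sketch, assuming the string form of
`tokenize_on_chars` described above):

```
POST _analyze
{
  "tokenizer": {
    "type": "char_group",
    "tokenize_on_chars": "\\s-:<>"
  },
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone for $2"
}
```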