From 05e44c073046e56189836e0f561a081b203a7efc Mon Sep 17 00:00:00 2001
From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Date: Thu, 18 Jan 2024 16:29:20 -0500
Subject: [PATCH] Update index.md (#6223)

Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
---
 _analyzers/tokenizers/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_analyzers/tokenizers/index.md b/_analyzers/tokenizers/index.md
index 1d1752ad..d401851f 100644
--- a/_analyzers/tokenizers/index.md
+++ b/_analyzers/tokenizers/index.md
@@ -14,7 +14,7 @@ The output of a tokenizer is a stream of tokens. Tokenizers also maintain the fo
 
 - The **order** or **position** of each token: This information is used for word and phrase proximity queries.
 - The starting and ending positions (**offsets**) of the tokens in the text: This information is used for highlighting search terms.
-- The token **type**: Some tokenizers (for example, `standard`) classify tokens by type, for example, or . Simpler tokenizers (for example, `letter`) only classify tokens as type `word`.
+- The token **type**: Some tokenizers (for example, `standard`) classify tokens by type, for example, `<ALPHANUM>` or `<NUM>`. Simpler tokenizers (for example, `letter`) only classify tokens as type `word`.
 
 You can use tokenizers to define custom analyzers.
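The token attributes described in the patched text (term, position, start/end offsets, and a type such as `<ALPHANUM>` or `<NUM>`) can be illustrated with a minimal sketch. This is not OpenSearch code: it is a hypothetical toy tokenizer that splits on alphanumeric runs and emits output shaped like the `_analyze` API's token objects, with type names borrowed from the `standard` tokenizer's convention.

```python
import re

def tokenize(text):
    """Toy tokenizer: emits one record per alphanumeric run, carrying the
    attributes a real tokenizer maintains alongside each term."""
    tokens = []
    for position, match in enumerate(re.finditer(r"[A-Za-z0-9]+", text)):
        term = match.group()
        tokens.append({
            "token": term,
            "position": position,           # order; used for phrase/proximity queries
            "start_offset": match.start(),  # offsets; used for highlighting
            "end_offset": match.end(),
            "type": "<NUM>" if term.isdigit() else "<ALPHANUM>",
        })
    return tokens

# Example: tokenize("3 quick foxes") classifies "3" as <NUM> at offsets 0-1,
# and "quick" as <ALPHANUM> at position 1, offsets 2-7.
```

A simpler tokenizer in the spirit of `letter` would skip the digit check and label every token `word`, which is the distinction the corrected doc line draws.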