OpenSearch/docs/reference/analysis/tokenizers
Latest commit: 955473f475 by Mikhail Korobov (2014-07-03 13:34:13 +02:00)
Docs: unescape regexes in Pattern Tokenizer docs

Currently, regexes in the Pattern Tokenizer docs are escaped (seemingly according to Java string-literal rules). I think it is better not to escape them: JSON escaping should be handled automatically by client libraries, and any further string escaping depends on the client language used. The default pattern is `\W+`, not `\\W+`. See the sketch below for the distinction.

Closes #6615
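
To make the escaping argument concrete, here is a minimal Python sketch of the point the commit makes. The index-settings layout and the tokenizer name "my_tokenizer" are illustrative assumptions, not taken from the docs; what matters is that the author writes the pattern unescaped and the JSON encoder produces the escaped wire form on its own.

    import json

    # A minimal sketch, assuming a hypothetical index-settings body.
    # The pattern is written unescaped in the client language
    # (r"\W+", the documented default), never as "\\W+".
    settings = {
        "analysis": {
            "tokenizer": {
                "my_tokenizer": {"type": "pattern", "pattern": r"\W+"}
            }
        }
    }

    body = json.dumps(settings)
    print(body)  # ... "pattern": "\\W+" ...

    # The encoder escaped the backslash for the JSON wire format;
    # the author never had to type the doubled form by hand.
    assert r"\\W+" in body

This is why the docs can show `\W+` directly: the doubled backslash only ever appears in the serialized JSON, which clients generate automatically.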
File | Last commit | Date
classic-tokenizer.asciidoc | Analysis: Add additional Analyzers, Tokenizers, and TokenFilters from Lucene | 2014-07-03 05:47:49 -04:00
edgengram-tokenizer.asciidoc | [DOCS] Fixed some bad UTF8 | 2014-03-19 12:46:06 +01:00
keyword-tokenizer.asciidoc | Update keyword-tokenizer.asciidoc | 2014-05-07 15:04:07 +02:00
letter-tokenizer.asciidoc | Migrated documentation into the main repo | 2013-08-29 01:24:34 +02:00
lowercase-tokenizer.asciidoc | Wrong link to Token Filter | 2013-12-03 10:39:13 +01:00
ngram-tokenizer.asciidoc | [DOCS] Fixed some bad UTF8 | 2014-03-19 12:46:06 +01:00
pathhierarchy-tokenizer.asciidoc | Migrated documentation into the main repo | 2013-08-29 01:24:34 +02:00
pattern-tokenizer.asciidoc | Docs: unescape regexes in Pattern Tokenizer docs | 2014-07-03 13:34:13 +02:00
standard-tokenizer.asciidoc | Migrated documentation into the main repo | 2013-08-29 01:24:34 +02:00
thai-tokenizer.asciidoc | Analysis: Add additional Analyzers, Tokenizers, and TokenFilters from Lucene | 2014-07-03 05:47:49 -04:00
uaxurlemail-tokenizer.asciidoc | Migrated documentation into the main repo | 2013-08-29 01:24:34 +02:00
whitespace-tokenizer.asciidoc | Migrated documentation into the main repo | 2013-08-29 01:24:34 +02:00