[[analysis-simplepattern-tokenizer]]
=== Simple Pattern Tokenizer

experimental[This functionality is marked as experimental in Lucene]

The `simple_pattern` tokenizer uses a regular expression to capture matching
text as terms. The set of regular expression features it supports is more
limited than the <<analysis-pattern-tokenizer,`pattern`>> tokenizer, but the
tokenization is generally faster.

This tokenizer does not support splitting the input on a pattern match, unlike
the <<analysis-pattern-tokenizer,`pattern`>> tokenizer. To split on pattern
matches using the same restricted regular expression subset, see the
<<analysis-simplepatternsplit-tokenizer,`simple_pattern_split`>> tokenizer.
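
The difference between the two modes can be sketched in Python (a rough analogy only: this uses Python's `re` engine, not Lucene's restricted regular expression subset):

```python
import re

text = "fd-786-335-514-x"

# `simple_pattern` style: the text that MATCHES the pattern becomes the terms.
matching_terms = re.findall(r"[0-9]{3}", text)
print(matching_terms)  # ['786', '335', '514']

# `simple_pattern_split` style: the input is split ON pattern matches instead.
split_terms = [t for t in re.split(r"-", text) if t]
print(split_terms)  # ['fd', '786', '335', '514', 'x']
```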

This tokenizer uses {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expressions].
For an explanation of the supported features and syntax, see <<regexp-syntax,Regular Expression Syntax>>.

The default pattern is the empty string, which produces no terms. This
tokenizer should always be configured with a non-default pattern.

[float]
=== Configuration

The `simple_pattern` tokenizer accepts the following parameters:

[horizontal]
`pattern`::

    {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expression], defaults to the empty string.

[float]
=== Example configuration

This example configures the `simple_pattern` tokenizer to produce terms that are
three-digit numbers.

[source,console]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "simple_pattern",
          "pattern": "[0123456789]{3}"
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "fd-786-335-514-x"
}
----------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens" : [
    {
      "token" : "786",
      "start_offset" : 3,
      "end_offset" : 6,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "335",
      "start_offset" : 7,
      "end_offset" : 10,
      "type" : "word",
      "position" : 1
    },
    {
      "token" : "514",
      "start_offset" : 11,
      "end_offset" : 14,
      "type" : "word",
      "position" : 2
    }
  ]
}
----------------------------

/////////////////////

The above example produces these terms:

[source,text]
---------------------------
[ 786, 335, 514 ]
---------------------------
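
The terms and offsets in the response above can be reproduced with a short Python sketch (using Python's `re` module as a stand-in for Lucene's regex engine, so treat it as an analogy rather than the actual implementation):

```python
import re

# Find every run of three digits, as the example tokenizer above does,
# and report each match with its start and end offsets.
text = "fd-786-335-514-x"
for m in re.finditer(r"[0123456789]{3}", text):
    print(m.group(), m.start(), m.end())
# 786 3 6
# 335 7 10
# 514 11 14
```

The printed offsets correspond to the `start_offset` and `end_offset` values in the `_analyze` response.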