[[analysis-simplepatternsplit-tokenizer]]
=== Simple Pattern Split Tokenizer

experimental[This functionality is marked as experimental in Lucene]

The `simple_pattern_split` tokenizer uses a regular expression to split the
input into terms at pattern matches. The set of regular expression features it
supports is more limited than that of the <<analysis-pattern-tokenizer,`pattern`>>
tokenizer, but the tokenization is generally faster.

This tokenizer does not produce terms from the matches themselves. To produce
terms from matches using patterns in the same restricted regular expression
subset, see the <<analysis-simplepattern-tokenizer,`simple_pattern`>>
tokenizer.
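
For example, given the input `an_underscored_phrase` and the pattern `_`, the
two tokenizers differ as follows (an illustrative comparison rather than the
output of a particular request):

[source,text]
---------------------------
simple_pattern_split with "pattern": "_"  ->  [ an, underscored, phrase ]
simple_pattern       with "pattern": "_"  ->  [ _, _ ]
---------------------------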

This tokenizer uses {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expressions].
For an explanation of the supported features and syntax, see <<regexp-syntax,Regular Expression Syntax>>.

The default pattern is the empty string, which produces one term containing the
full input. This tokenizer should always be configured with a non-default
pattern.
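
For instance (an illustration of the stated default, not a recommended
configuration), with the default empty pattern an input such as
`an_underscored_phrase` is emitted unchanged as a single term:

[source,text]
---------------------------
[ an_underscored_phrase ]
---------------------------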

[float]
=== Configuration

The `simple_pattern_split` tokenizer accepts the following parameters:

[horizontal]
`pattern`::

    A {lucene-core-javadoc}/org/apache/lucene/util/automaton/RegExp.html[Lucene regular expression], defaults to the empty string.

[float]
=== Example configuration

This example configures the `simple_pattern_split` tokenizer to split the input
text on underscores.

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "simple_pattern_split",
          "pattern": "_"
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "an_underscored_phrase"
}
----------------------------
// CONSOLE

/////////////////////

[source,console-result]
----------------------------
{
  "tokens" : [
    {
      "token" : "an",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "underscored",
      "start_offset" : 3,
      "end_offset" : 14,
      "type" : "word",
      "position" : 1
    },
    {
      "token" : "phrase",
      "start_offset" : 15,
      "end_offset" : 21,
      "type" : "word",
      "position" : 2
    }
  ]
}
----------------------------

/////////////////////

The above example produces these terms:

[source,text]
---------------------------
[ an, underscored, phrase ]
---------------------------
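
Patterns are not limited to single literal characters. As a further sketch (a
hypothetical variation on the example above; the index name `my_index_2` and
the `[_-]+` pattern are assumptions, not part of the original example), the
following splits on runs of underscores or hyphens using a character class and
the `+` operator from the restricted Lucene syntax:

[source,js]
----------------------------
PUT my_index_2
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "simple_pattern_split",
          "pattern": "[_-]+"
        }
      }
    }
  }
}
----------------------------
// CONSOLE

With this configuration, text such as `an-underscored_phrase` would produce the
same three terms as the example above.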