[[analysis-custom-analyzer]]
=== Create a custom analyzer

When the built-in analyzers do not fulfill your needs, you can create a
`custom` analyzer which uses the appropriate combination of:

* zero or more <<analysis-charfilters, character filters>>
* a <<analysis-tokenizers,tokenizer>>
* zero or more <<analysis-tokenfilters,token filters>>.

[float]
=== Configuration

The `custom` analyzer accepts the following parameters:

[horizontal]
`tokenizer`::

    A built-in or customised <<analysis-tokenizers,tokenizer>>.
    (Required)

`char_filter`::

    An optional array of built-in or customised
    <<analysis-charfilters, character filters>>.

`filter`::

    An optional array of built-in or customised
    <<analysis-tokenfilters, token filters>>.

`position_increment_gap`::

    When indexing an array of text values, Elasticsearch inserts a fake "gap"
    between the last term of one value and the first term of the next value to
    ensure that a phrase query doesn't match two terms from different array
    elements. Defaults to `100`. See <<position-increment-gap>> for more.
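
For example, here is a minimal sketch of a request that sets all four
parameters at once, including `position_increment_gap`, which the examples
below leave at its default. The index name `my_gap_index`, the analyzer name
`my_gapped_analyzer`, and the gap value of `500` are hypothetical; the
character filter, tokenizer, and token filter are the built-in ones referenced
above.

[source,console]
--------------------------------
PUT my_gap_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_gapped_analyzer": {
          "type": "custom",
          "char_filter": [ "html_strip" ],
          "tokenizer": "standard",
          "filter": [ "lowercase" ],
          "position_increment_gap": 500
        }
      }
    }
  }
}
--------------------------------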

[float]
=== Example configuration

Here is an example that combines the following:

Character Filter::
* <<analysis-htmlstrip-charfilter,HTML Strip Character Filter>>

Tokenizer::
* <<analysis-standard-tokenizer,Standard Tokenizer>>

Token Filters::
* <<analysis-lowercase-tokenfilter,Lowercase Token Filter>>
* <<analysis-asciifolding-tokenfilter,ASCII-Folding Token Filter>>

[source,console]
--------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom", <1>
          "tokenizer": "standard",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "Is this <b>déjà vu</b>?"
}
--------------------------------
<1> Setting `type` to `custom` tells Elasticsearch that we are defining a custom analyzer.
Compare this to how <<configuring-analyzers,built-in analyzers can be configured>>:
`type` will be set to the name of the built-in analyzer, like
<<analysis-standard-analyzer,`standard`>> or <<analysis-simple-analyzer,`simple`>>.

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "is",
      "start_offset": 0,
      "end_offset": 2,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "this",
      "start_offset": 3,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "deja",
      "start_offset": 11,
      "end_offset": 15,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "vu",
      "start_offset": 16,
      "end_offset": 22,
      "type": "<ALPHANUM>",
      "position": 3
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ is, this, deja, vu ]
---------------------------

The previous example used a tokenizer, token filters, and character filters with
their default configurations, but it is possible to create configured versions
of each and to use them in a custom analyzer.

Here is a more complicated example that combines the following:

Character Filter::
* <<analysis-mapping-charfilter,Mapping Character Filter>>, configured to replace `:)` with `_happy_` and `:(` with `_sad_`

Tokenizer::
* <<analysis-pattern-tokenizer,Pattern Tokenizer>>, configured to split on punctuation characters

Token Filters::
* <<analysis-lowercase-tokenfilter,Lowercase Token Filter>>
* <<analysis-stop-tokenfilter,Stop Token Filter>>, configured to use the pre-defined list of English stop words

Here is an example:

[source,console]
--------------------------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": { <1>
          "type": "custom",
          "char_filter": [
            "emoticons"
          ],
          "tokenizer": "punctuation",
          "filter": [
            "lowercase",
            "english_stop"
          ]
        }
      },
      "tokenizer": {
        "punctuation": { <2>
          "type": "pattern",
          "pattern": "[ .,!?]"
        }
      },
      "char_filter": {
        "emoticons": { <3>
          "type": "mapping",
          "mappings": [
            ":) => _happy_",
            ":( => _sad_"
          ]
        }
      },
      "filter": {
        "english_stop": { <4>
          "type": "stop",
          "stopwords": "_english_"
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "I'm a :) person, and you?"
}
--------------------------------------------------
<1> Defines `my_custom_analyzer` as a custom analyzer for the index. This
analyzer uses a custom tokenizer, character filter, and token filter that
are defined later in the request.
<2> Defines the custom `punctuation` tokenizer.
<3> Defines the custom `emoticons` character filter.
<4> Defines the custom `english_stop` token filter.

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "i'm",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 0
    },
    {
      "token": "_happy_",
      "start_offset": 6,
      "end_offset": 8,
      "type": "word",
      "position": 2
    },
    {
      "token": "person",
      "start_offset": 9,
      "end_offset": 15,
      "type": "word",
      "position": 3
    },
    {
      "token": "you",
      "start_offset": 21,
      "end_offset": 24,
      "type": "word",
      "position": 5
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ i'm, _happy_, person, you ]
---------------------------
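
Once a custom analyzer is defined in the index settings, you can reference it
from a field mapping so that the field is analyzed with it at index and search
time. The sketch below is illustrative only: the index name
`my_emoticon_index` and the field name `message` are hypothetical, and for
brevity the analyzer here is a simplified custom analyzer (standard tokenizer
plus lowercase filter) rather than the full example above.

[source,console]
--------------------------------
PUT my_emoticon_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "message": { <1>
        "type": "text",
        "analyzer": "my_custom_analyzer"
      }
    }
  }
}
--------------------------------
<1> The `analyzer` mapping parameter tells Elasticsearch to analyze the
`message` field with `my_custom_analyzer`.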