[[analysis-edgengram-tokenizer]]
=== Edge NGram Tokenizer

The `edge_ngram` tokenizer first breaks text down into words whenever it
encounters one of a list of specified characters, then it emits
https://en.wikipedia.org/wiki/N-gram[N-grams] of each word where the start of
the N-gram is anchored to the beginning of the word.
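
Conceptually, the tokenizer first splits the input on characters outside the
configured classes and then emits anchored prefixes of each remaining word.
The short Python sketch below only illustrates that idea and is not the actual
implementation; the `edge_ngrams` helper and its regex-based treatment of
`token_chars` are simplifications:

[source,python]
---------------------------
import re

def edge_ngrams(text, min_gram=1, max_gram=2, token_chars=None):
    """Illustrative only: emit prefixes (edge n-grams) of each word.

    `token_chars` is modelled as a regex character class here, a
    simplification of the real letter/digit/whitespace/... classes.
    """
    if token_chars:
        # Split wherever a character does *not* belong to the allowed classes.
        words = [w for w in re.split(r"[^" + token_chars + r"]+", text) if w]
    else:
        # Default behaviour: keep all characters and treat the text as one token.
        words = [text]
    for word in words:
        # Each gram is anchored to the start of the word.
        for n in range(min_gram, min(max_gram, len(word)) + 1):
            yield word[:n]

print(list(edge_ngrams("Quick Fox")))  # ['Q', 'Qu']
print(list(edge_ngrams("2 Quick Foxes.", 2, 10, "A-Za-z0-9")))
# ['Qu', 'Qui', 'Quic', 'Quick', 'Fo', 'Fox', 'Foxe', 'Foxes']
---------------------------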

Edge N-Grams are useful for _search-as-you-type_ queries.

TIP: When you need _search-as-you-type_ for text which has a widely known
order, such as movie or song titles, the
<<search-suggesters-completion,completion suggester>> is a much more efficient
choice than edge N-grams. Edge N-grams have the advantage when trying to
autocomplete words that can appear in any order.

[float]
=== Example output

With the default settings, the `edge_ngram` tokenizer treats the initial text as a
single token and produces N-grams with minimum length `1` and maximum length
`2`:

[source,js]
---------------------------
POST _analyze
{
  "tokenizer": "edge_ngram",
  "text": "Quick Fox"
}
---------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "Q",
      "start_offset": 0,
      "end_offset": 1,
      "type": "word",
      "position": 0
    },
    {
      "token": "Qu",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 1
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ Q, Qu ]
---------------------------

NOTE: These default gram lengths are almost entirely useless. You need to
configure the `edge_ngram` tokenizer before using it.

[float]
=== Configuration

The `edge_ngram` tokenizer accepts the following parameters:

[horizontal]
`min_gram`::
    Minimum length of a gram, in characters. Defaults to `1`.

`max_gram`::
    Maximum length of a gram, in characters. Defaults to `2`.

`token_chars`::

    Character classes that should be included in a token. Elasticsearch
    will split on characters that don't belong to the classes specified.
    Defaults to `[]` (keep all characters).
+
Character classes may be any of the following:
+
* `letter` -- for example `a`, `b`, `ï` or `京`
* `digit` -- for example `3` or `7`
* `whitespace` -- for example `" "` or `"\n"`
* `punctuation` -- for example `!` or `"`
* `symbol` -- for example `$` or `√`

[float]
=== Example configuration

In this example, we configure the `edge_ngram` tokenizer to treat letters and
digits as tokens, and to produce grams with minimum length `2` and maximum
length `10`:

[source,js]
----------------------------
PUT my_index?include_type_name=true
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 Quick Foxes."
}
----------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "Qu",
      "start_offset": 2,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "Qui",
      "start_offset": 2,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
    {
      "token": "Quic",
      "start_offset": 2,
      "end_offset": 6,
      "type": "word",
      "position": 2
    },
    {
      "token": "Quick",
      "start_offset": 2,
      "end_offset": 7,
      "type": "word",
      "position": 3
    },
    {
      "token": "Fo",
      "start_offset": 8,
      "end_offset": 10,
      "type": "word",
      "position": 4
    },
    {
      "token": "Fox",
      "start_offset": 8,
      "end_offset": 11,
      "type": "word",
      "position": 5
    },
    {
      "token": "Foxe",
      "start_offset": 8,
      "end_offset": 12,
      "type": "word",
      "position": 6
    },
    {
      "token": "Foxes",
      "start_offset": 8,
      "end_offset": 13,
      "type": "word",
      "position": 7
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ Qu, Qui, Quic, Quick, Fo, Fox, Foxe, Foxes ]
---------------------------

Usually we recommend using the same `analyzer` at index time and at search
time. In the case of the `edge_ngram` tokenizer, the advice is different. It
only makes sense to use the `edge_ngram` tokenizer at index time, to ensure
that partial words are available for matching in the index. At search time,
just search for the terms the user has typed in, for instance: `Quick Fo`.

Below is an example of how to set up a field for _search-as-you-type_:

[source,js]
-----------------------------------
PUT my_index?include_type_name=true
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [
            "lowercase"
          ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter"
          ]
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "autocomplete",
          "search_analyzer": "autocomplete_search"
        }
      }
    }
  }
}

PUT my_index/_doc/1
{
  "title": "Quick Foxes" <1>
}

POST my_index/_refresh

GET my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "Quick Fo", <2>
        "operator": "and"
      }
    }
  }
}
-----------------------------------
// CONSOLE

<1> The `autocomplete` analyzer indexes the terms `[qu, qui, quic, quick, fo, fox, foxe, foxes]`.
<2> The `autocomplete_search` analyzer searches for the terms `[quick, fo]`, both of which appear in the index.

/////////////////////

[source,js]
----------------------------
{
  "took": $body.took,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped" : 0,
    "failed": 0
  },
  "hits": {
    "total" : {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 0.5753642,
    "hits": [
      {
        "_index": "my_index",
        "_type": "_doc",
        "_id": "1",
        "_score": 0.5753642,
        "_source": {
          "title": "Quick Foxes"
        }
      }
    ]
  }
}
----------------------------
// TESTRESPONSE[s/"took".*/"took": "$body.took",/]

/////////////////////
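
You can check which terms the `autocomplete` analyzer actually produces for
the indexed title by running the same text through the `_analyze` API. The
request below is shown only as an illustration and is not part of the example
above:

[source,js]
-----------------------------------
GET my_index/_analyze
{
  "analyzer": "autocomplete",
  "text": "Quick Foxes"
}
-----------------------------------

It should return the terms `[ qu, qui, quic, quick, fo, fox, foxe, foxes ]`
listed above.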