[[analysis-edgengram-tokenizer]]
=== Edge NGram Tokenizer

The `edge_ngram` tokenizer first breaks text down into words whenever it
encounters one of a list of specified characters, then it emits
https://en.wikipedia.org/wiki/N-gram[N-grams] of each word where the start of
the N-gram is anchored to the beginning of the word.

Edge N-Grams are useful for _search-as-you-type_ queries.
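
To make this concrete, the following is a minimal Python sketch of the idea
(a simplification for illustration, not the actual Lucene implementation).
With the default settings the whole input is treated as a single token, so
only prefixes of the full text are emitted:

[source,python]
---------------------------
def edge_ngrams(text, min_gram=1, max_gram=2):
    """Emit N-grams anchored to the start of the input token."""
    grams = []
    for n in range(min_gram, min(max_gram, len(text)) + 1):
        grams.append(text[:n])  # every gram begins at offset 0
    return grams

print(edge_ngrams("Quick Fox"))  # ['Q', 'Qu']
---------------------------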

TIP: When you need _search-as-you-type_ for text which has a widely known
order, such as movie or song titles, the
<<search-suggesters-completion,completion suggester>> is a much more efficient
choice than edge N-grams. Edge N-grams have the advantage when trying to
autocomplete words that can appear in any order.

[float]
=== Example output

With the default settings, the `edge_ngram` tokenizer treats the initial text as a
single token and produces N-grams with minimum length `1` and maximum length
`2`:

[source,js]
---------------------------
POST _analyze
{
  "tokenizer": "edge_ngram",
  "text": "Quick Fox"
}
---------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "Q",
      "start_offset": 0,
      "end_offset": 1,
      "type": "word",
      "position": 0
    },
    {
      "token": "Qu",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 1
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above sentence would produce the following terms:

[source,text]
---------------------------
[ Q, Qu ]
---------------------------

NOTE: These default gram lengths are almost entirely useless. You need to
configure the `edge_ngram` tokenizer before using it.

[float]
=== Configuration

The `edge_ngram` tokenizer accepts the following parameters; a short sketch
of how they interact follows the list:

[horizontal]
`min_gram`::
    Minimum length of characters in a gram. Defaults to `1`.

`max_gram`::
    Maximum length of characters in a gram. Defaults to `2`.

`token_chars`::

    Character classes that should be included in a token. Elasticsearch
    will split on characters that don't belong to the classes specified.
    Defaults to `[]` (keep all characters).
+
Character classes may be any of the following:
+
* `letter` -- for example `a`, `b`, `ï` or `京`
* `digit` -- for example `3` or `7`
* `whitespace` -- for example `" "` or `"\n"`
* `punctuation` -- for example `!` or `"`
* `symbol` -- for example `$` or `√`
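
Extending the earlier sketch, this is roughly how the three parameters
interact. The regular expression character class standing in for
`token_chars` is an assumption made for illustration, not how the character
classes are actually implemented:

[source,python]
---------------------------
import re

def edge_ngrams(text, min_gram=1, max_gram=2, keep=None):
    """Split on characters outside `keep`, then emit anchored grams."""
    words = re.split(rf"[^{keep}]+", text) if keep else [text]
    grams = []
    for word in words:
        # Words shorter than min_gram produce no grams at all.
        for n in range(min_gram, min(max_gram, len(word)) + 1):
            grams.append(word[:n])
    return grams

# Roughly "token_chars": ["letter", "digit"] with min_gram 2, max_gram 10.
# The standalone "2" is dropped because it is shorter than min_gram.
print(edge_ngrams("2 Quick Foxes.", 2, 10, keep=r"A-Za-z0-9"))
# ['Qu', 'Qui', 'Quic', 'Quick', 'Fo', 'Fox', 'Foxe', 'Foxes']
---------------------------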

[float]
=== Example configuration

In this example, we configure the `edge_ngram` tokenizer to treat letters and
digits as tokens, and to produce grams with minimum length `2` and maximum
length `10`:

[source,js]
----------------------------
PUT my_index?include_type_name=true
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 Quick Foxes."
}
----------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "Qu",
      "start_offset": 2,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "Qui",
      "start_offset": 2,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
    {
      "token": "Quic",
      "start_offset": 2,
      "end_offset": 6,
      "type": "word",
      "position": 2
    },
    {
      "token": "Quick",
      "start_offset": 2,
      "end_offset": 7,
      "type": "word",
      "position": 3
    },
    {
      "token": "Fo",
      "start_offset": 8,
      "end_offset": 10,
      "type": "word",
      "position": 4
    },
    {
      "token": "Fox",
      "start_offset": 8,
      "end_offset": 11,
      "type": "word",
      "position": 5
    },
    {
      "token": "Foxe",
      "start_offset": 8,
      "end_offset": 12,
      "type": "word",
      "position": 6
    },
    {
      "token": "Foxes",
      "start_offset": 8,
      "end_offset": 13,
      "type": "word",
      "position": 7
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ Qu, Qui, Quic, Quick, Fo, Fox, Foxe, Foxes ]
---------------------------

Usually we recommend using the same `analyzer` at index time and at search
time. In the case of the `edge_ngram` tokenizer, the advice is different. It
only makes sense to use the `edge_ngram` tokenizer at index time, to ensure
that partial words are available for matching in the index. At search time,
just search for the terms the user has typed in, for instance: `Quick Fo`.
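
Reusing the `edge_ngrams` sketch from the configuration section, this
asymmetry can be checked directly. The lowercasing below stands in for the
`lowercase` filter and tokenizer used in the example that follows:

[source,python]
---------------------------
# Index time: edge N-grams of each word are stored as terms.
indexed = set(edge_ngrams("quick foxes", 2, 10, keep=r"a-z"))

# Search time: the query is only lowercased and split into words.
query = "Quick Fo".lower().split()

# With "operator": "and", every query term must appear in the index.
print(all(term in indexed for term in query))  # True
---------------------------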

Below is an example of how to set up a field for _search-as-you-type_:

[source,js]
-----------------------------------
PUT my_index?include_type_name=true
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [
            "lowercase"
          ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter"
          ]
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "autocomplete",
          "search_analyzer": "autocomplete_search"
        }
      }
    }
  }
}

PUT my_index/_doc/1
{
  "title": "Quick Foxes" <1>
}

POST my_index/_refresh

GET my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "Quick Fo", <2>
        "operator": "and"
      }
    }
  }
}
-----------------------------------
// CONSOLE

<1> The `autocomplete` analyzer indexes the terms `[qu, qui, quic, quick, fo, fox, foxe, foxes]`.
<2> The `autocomplete_search` analyzer searches for the terms `[quick, fo]`, both of which appear in the index.

/////////////////////

[source,js]
----------------------------
{
  "took": $body.took,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped" : 0,
    "failed": 0
  },
  "hits": {
    "total" : {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 0.5753642,
    "hits": [
      {
        "_index": "my_index",
        "_type": "_doc",
        "_id": "1",
        "_score": 0.5753642,
        "_source": {
          "title": "Quick Foxes"
        }
      }
    ]
  }
}
----------------------------
// TESTRESPONSE[s/"took".*/"took": "$body.took",/]
/////////////////////