[[analysis-standard-tokenizer]]
=== Standard Tokenizer

The `standard` tokenizer provides grammar-based tokenization (based on the
Unicode Text Segmentation algorithm, as specified in
http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well
for most languages.

[float]
=== Example output

[source,js]
---------------------------
POST _analyze
{
  "tokenizer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
---------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "The",
      "start_offset": 0,
      "end_offset": 3,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "2",
      "start_offset": 4,
      "end_offset": 5,
      "type": "<NUM>",
      "position": 1
    },
    {
      "token": "QUICK",
      "start_offset": 6,
      "end_offset": 11,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "Brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "Foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "<ALPHANUM>",
      "position": 4
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "<ALPHANUM>",
      "position": 5
    },
    {
      "token": "over",
      "start_offset": 31,
      "end_offset": 35,
      "type": "<ALPHANUM>",
      "position": 6
    },
    {
      "token": "the",
      "start_offset": 36,
      "end_offset": 39,
      "type": "<ALPHANUM>",
      "position": 7
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "<ALPHANUM>",
      "position": 8
    },
    {
      "token": "dog's",
      "start_offset": 45,
      "end_offset": 50,
      "type": "<ALPHANUM>",
      "position": 9
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "<ALPHANUM>",
      "position": 10
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above sentence would produce the following terms:

[source,text]
---------------------------
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
---------------------------

[float]
=== Configuration

The `standard` tokenizer accepts the following parameters:

[horizontal]
`max_token_length`::

    The maximum token length. If a token is seen that exceeds this length then
    it is split at `max_token_length` intervals. Defaults to `255`.
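
Note that an over-long token is split at `max_token_length` intervals rather
than truncated, so a single long token can yield several chunks. As a minimal
sketch of this behavior (the 13-character word `extraordinary` is a
hypothetical input, and the tokenizer is defined inline in the `_analyze`
request for brevity), with `max_token_length` set to `5` the expected terms
are `extra`, `ordin` and `ary`:

[source,js]
----------------------------
POST _analyze
{
  "tokenizer": {
    "type": "standard",
    "max_token_length": 5
  },
  "text": "extraordinary"
}
----------------------------
// CONSOLE
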
[float]
=== Example configuration

In this example, we configure the `standard` tokenizer to have a
`max_token_length` of 5 (for demonstration purposes):

[source,js]
----------------------------
PUT my_index?include_type_name=true
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
----------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "The",
      "start_offset": 0,
      "end_offset": 3,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "2",
      "start_offset": 4,
      "end_offset": 5,
      "type": "<NUM>",
      "position": 1
    },
    {
      "token": "QUICK",
      "start_offset": 6,
      "end_offset": 11,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "Brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "Foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "<ALPHANUM>",
      "position": 4
    },
    {
      "token": "jumpe",
      "start_offset": 24,
      "end_offset": 29,
      "type": "<ALPHANUM>",
      "position": 5
    },
    {
      "token": "d",
      "start_offset": 29,
      "end_offset": 30,
      "type": "<ALPHANUM>",
      "position": 6
    },
    {
      "token": "over",
      "start_offset": 31,
      "end_offset": 35,
      "type": "<ALPHANUM>",
      "position": 7
    },
    {
      "token": "the",
      "start_offset": 36,
      "end_offset": 39,
      "type": "<ALPHANUM>",
      "position": 8
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "<ALPHANUM>",
      "position": 9
    },
    {
      "token": "dog's",
      "start_offset": 45,
      "end_offset": 50,
      "type": "<ALPHANUM>",
      "position": 10
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "<ALPHANUM>",
      "position": 11
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
---------------------------
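
To use the custom tokenizer outside of `_analyze`, the analyzer wrapping it
must be referenced from a field mapping. Below is a minimal sketch, assuming a
hypothetical index `my_other_index` and a hypothetical `title` field, using the
same typed mapping syntax (`include_type_name=true` with a `_doc` type) as the
create index call above:

[source,js]
----------------------------
PUT my_other_index?include_type_name=true
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "my_analyzer"
        }
      }
    }
  }
}
----------------------------
// CONSOLE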