[[analysis-keyword-tokenizer]]
=== Keyword Tokenizer
The `keyword` tokenizer is a ``noop'' tokenizer that accepts whatever text it
is given and outputs the exact same text as a single term. It can be combined
with token filters to normalise output, e.g. lower-casing email addresses.
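
For instance, the `keyword` tokenizer can be paired with the `lowercase`
token filter directly in an `_analyze` request (a minimal sketch; any token
filter could be substituted, and the email address is just sample input):

[source,js]
---------------------------
POST _analyze
{
  "tokenizer": "keyword",
  "filter": [ "lowercase" ],
  "text": "john.SMITH@example.COM"
}
---------------------------
// CONSOLE

Because the tokenizer emits the whole input as one term, the filter applies
to the entire email address, producing the single term
`john.smith@example.com`.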
[float]
=== Example output
[source,js]
---------------------------
POST _analyze
{
  "tokenizer": "keyword",
  "text": "New York"
}
---------------------------
// CONSOLE
/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "New York",
      "start_offset": 0,
      "end_offset": 8,
      "type": "word",
      "position": 0
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////
The above text would produce the following single term:
[source,text]
---------------------------
[ New York ]
---------------------------
[float]
=== Configuration
The `keyword` tokenizer accepts the following parameters:
[horizontal]
`buffer_size`::

The number of characters read into the term buffer in a single pass.
Defaults to `256`. The term buffer will grow by this size until all the
text has been consumed. It is advisable not to change this setting.
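
A custom tokenizer with an explicit `buffer_size` can be registered in the
index settings in the usual way (a sketch; the index name `my_index` and the
names `my_analyzer` and `my_tokenizer` are placeholders):

[source,js]
---------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "keyword",
          "buffer_size": 256
        }
      }
    }
  }
}
---------------------------
// CONSOLE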