[[analysis-keyword-analyzer]]
=== Keyword Analyzer

The `keyword` analyzer is a ``noop'' analyzer which returns the entire input
string as a single token.

[float]
=== Definition

It consists of:

Tokenizer::
* <<analysis-keyword-tokenizer,Keyword Tokenizer>>

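The same single-token behavior can also be recreated as a `custom` analyzer built from the keyword tokenizer. This is a sketch rather than an official recipe; the index name `my_index` and analyzer name `rebuilt_keyword` are illustrative:

[source,js]
---------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "rebuilt_keyword": {
          "type": "custom",
          "tokenizer": "keyword"
        }
      }
    }
  }
}
---------------------------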
[float]
=== Example output

[source,js]
---------------------------
POST _analyze
{
  "analyzer": "keyword",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
---------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
      "start_offset": 0,
      "end_offset": 56,
      "type": "word",
      "position": 0
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above sentence would produce the following single term:

[source,text]
---------------------------
[ The 2 QUICK Brown-Foxes jumped over the lazy dog's bone. ]
---------------------------

[float]
=== Configuration

The `keyword` analyzer is not configurable.
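
Although it takes no parameters, the `keyword` analyzer can still be referenced by name wherever an analyzer is expected, for example in a field mapping. A minimal sketch, assuming a typeless mapping API; the index and field names are illustrative:

[source,js]
---------------------------
PUT my_index
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "keyword"
      }
    }
  }
}
---------------------------

With this mapping, the full value of `my_field` is indexed as one token, so only exact matches on the entire string will hit it.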