[[analysis-stop-analyzer]]
=== Stop Analyzer

The `stop` analyzer is the same as the <<analysis-simple-analyzer,`simple` analyzer>>
but adds support for removing stop words. It defaults to using the
`_english_` stop words.

[float]
=== Definition

It consists of:

Tokenizer::
* <<analysis-lowercase-tokenizer,Lower Case Tokenizer>>

Token filters::
* <<analysis-stop-tokenfilter,Stop Token Filter>>
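
These two building blocks can also be combined by hand as a `custom` analyzer,
which is a useful starting point if you want to tweak the behaviour further.
The following is only a sketch: the index name `stop_example` and the analyzer
name `rebuilt_stop` are illustrative, and the result should behave the same as
the built-in `stop` analyzer because the `stop` token filter defaults to the
`_english_` stop words.

[source,js]
----------------------------
PUT stop_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "rebuilt_stop": {
          "type": "custom",
          "tokenizer": "lowercase",
          "filter": [
            "stop"
          ]
        }
      }
    }
  }
}
----------------------------
// CONSOLE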

[float]
=== Example output

[source,js]
---------------------------
POST _analyze
{
  "analyzer": "stop",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
---------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "quick",
      "start_offset": 6,
      "end_offset": 11,
      "type": "word",
      "position": 1
    },
    {
      "token": "brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "word",
      "position": 2
    },
    {
      "token": "foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "word",
      "position": 3
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "word",
      "position": 4
    },
    {
      "token": "over",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 5
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "word",
      "position": 7
    },
    {
      "token": "dog",
      "start_offset": 45,
      "end_offset": 48,
      "type": "word",
      "position": 8
    },
    {
      "token": "s",
      "start_offset": 49,
      "end_offset": 50,
      "type": "word",
      "position": 9
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "word",
      "position": 10
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above sentence would produce the following terms:

[source,text]
---------------------------
[ quick, brown, foxes, jumped, over, lazy, dog, s, bone ]
---------------------------

[float]
=== Configuration

The `stop` analyzer accepts the following parameters:

[horizontal]
`stopwords`::

    A pre-defined stop words list like `_english_` or an array containing a
    list of stop words. Defaults to `_english_`.

`stopwords_path`::

    The path to a file containing stop words, as sketched in the example
    below. This path is relative to the Elasticsearch `config` directory.

See the <<analysis-stop-tokenfilter,Stop Token Filter>> for more information
about stop word configuration.
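
For instance, `stopwords_path` could point at a plain-text file with one stop
word per line. The snippet below is only a sketch: the index name
`my_file_index`, the analyzer name `my_file_stop_analyzer`, and the file
`analysis/my_stopwords.txt` are illustrative, and the file would need to exist
under the `config` directory on every node before the index is created.

[source,js]
----------------------------
PUT my_file_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_file_stop_analyzer": {
          "type": "stop",
          "stopwords_path": "analysis/my_stopwords.txt"
        }
      }
    }
  }
}
----------------------------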

[float]
=== Example configuration

In this example, we configure the `stop` analyzer to use a specified list of
words as stop words:

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_stop_analyzer": {
          "type": "stop",
          "stopwords": ["the", "over"]
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_stop_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
----------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "quick",
      "start_offset": 6,
      "end_offset": 11,
      "type": "word",
      "position": 1
    },
    {
      "token": "brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "word",
      "position": 2
    },
    {
      "token": "foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "word",
      "position": 3
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "word",
      "position": 4
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "word",
      "position": 7
    },
    {
      "token": "dog",
      "start_offset": 45,
      "end_offset": 48,
      "type": "word",
      "position": 8
    },
    {
      "token": "s",
      "start_offset": 49,
      "end_offset": 50,
      "type": "word",
      "position": 9
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "word",
      "position": 10
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ quick, brown, foxes, jumped, lazy, dog, s, bone ]
---------------------------