[Docs] Fix error in Common Grams Token Filter (#36774)

The first example given is missing the two single-token cases for "is" and "a".
The later usage example is slightly wrong in that custom analyzers should
go under `settings.analysis.analyzer`.
Christoph Büscher 2018-12-18 16:54:06 +01:00 committed by GitHub
parent f05c404934
commit 41feaf137c

@@ -7,8 +7,8 @@ Single terms are still indexed. It can be used as an alternative to the
 Token Filter>> when we don't want to completely ignore common terms.
 For example, the text "the quick brown is a fox" will be tokenized as
-"the", "the_quick", "quick", "brown", "brown_is", "is_a", "a_fox",
-"fox". Assuming "the", "is" and "a" are common words.
+"the", "the_quick", "quick", "brown", "brown_is", "is", "is_a", "a",
+"a_fox", "fox". Assuming "the", "is" and "a" are common words.
 
 When `query_mode` is enabled, the token filter removes common words and
 single terms followed by a common word. This parameter should be enabled
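To make the corrected token stream concrete, here is a small Python sketch of the common-grams logic described above. This is an illustrative simplification of the behavior, not Elasticsearch's implementation:

```python
# Sketch of common_grams behavior (index mode and query_mode).
# Illustrative only -- not Elasticsearch's actual implementation.

def common_grams(tokens, common_words, query_mode=False):
    """Emit unigrams plus '_'-joined bigrams wherever a common word is
    involved. In query_mode, drop common unigrams and unigrams that are
    immediately followed by a common word (the bigram subsumes them)."""
    out = []
    for i, tok in enumerate(tokens):
        next_tok = tokens[i + 1] if i + 1 < len(tokens) else None
        makes_bigram = next_tok is not None and (
            tok in common_words or next_tok in common_words
        )
        # query_mode removes common words and single terms followed by
        # a common word, keeping only the bigrams that cover them.
        drop_unigram = query_mode and (
            tok in common_words or (next_tok in common_words if next_tok else False)
        )
        if not drop_unigram:
            out.append(tok)
        if makes_bigram:
            out.append(f"{tok}_{next_tok}")
    return out

tokens = ["the", "quick", "brown", "is", "a", "fox"]
common = {"the", "is", "a"}
print(common_grams(tokens, common))
# -> ['the', 'the_quick', 'quick', 'brown', 'brown_is', 'is', 'is_a', 'a', 'a_fox', 'fox']
```

With `query_mode=True` the same input yields `['the_quick', 'quick', 'brown_is', 'is_a', 'a_fox', 'fox']`, matching the removal rule quoted above.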
@@ -45,7 +45,7 @@ PUT /common_grams_example
 {
   "settings": {
     "analysis": {
-      "my_analyzer": {
+      "analyzer": {
         "index_grams": {
           "tokenizer": "whitespace",
           "filter": ["common_grams"]