[[analysis-lang-analyzer]]
=== Language Analyzers

A set of analyzers aimed at analyzing specific language text. The
following types are supported:
<<arabic-analyzer,`arabic`>>,
<<armenian-analyzer,`armenian`>>,
<<basque-analyzer,`basque`>>,
<<bengali-analyzer,`bengali`>>,
<<brazilian-analyzer,`brazilian`>>,
<<bulgarian-analyzer,`bulgarian`>>,
<<catalan-analyzer,`catalan`>>,
<<cjk-analyzer,`cjk`>>,
<<czech-analyzer,`czech`>>,
<<danish-analyzer,`danish`>>,
<<dutch-analyzer,`dutch`>>,
<<english-analyzer,`english`>>,
<<finnish-analyzer,`finnish`>>,
<<french-analyzer,`french`>>,
<<galician-analyzer,`galician`>>,
<<german-analyzer,`german`>>,
<<greek-analyzer,`greek`>>,
<<hindi-analyzer,`hindi`>>,
<<hungarian-analyzer,`hungarian`>>,
<<indonesian-analyzer,`indonesian`>>,
<<irish-analyzer,`irish`>>,
<<italian-analyzer,`italian`>>,
<<latvian-analyzer,`latvian`>>,
<<lithuanian-analyzer,`lithuanian`>>,
<<norwegian-analyzer,`norwegian`>>,
<<persian-analyzer,`persian`>>,
<<portuguese-analyzer,`portuguese`>>,
<<romanian-analyzer,`romanian`>>,
<<russian-analyzer,`russian`>>,
<<sorani-analyzer,`sorani`>>,
<<spanish-analyzer,`spanish`>>,
<<swedish-analyzer,`swedish`>>,
<<thai-analyzer,`thai`>>,
<<turkish-analyzer,`turkish`>>.

==== Configuring language analyzers

===== Stopwords

All analyzers support setting custom `stopwords`, either inline in the
config or in an external stopwords file referenced with the
`stopwords_path` setting. Check the <<analysis-stop-analyzer,Stop Analyzer>>
for more details.
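
For example (a minimal sketch; the index name and the stopword list are
arbitrary), the `english` analyzer can be given a custom stopword list
directly in the index settings:

[source,js]
----------------------------------------------------
PUT /custom_stopwords_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english": {
          "type": "english",
          "stopwords": ["a", "an", "the"] <1>
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
<1> Replaces the default `_english_` stopword set for this analyzer only.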

===== Excluding words from stemming

The `stem_exclusion` parameter allows you to specify an array
of lowercase words that should not be stemmed. Internally, this
functionality is implemented by adding the
<<analysis-keyword-marker-tokenfilter,`keyword_marker` token filter>>
with the `keywords` set to the value of the `stem_exclusion` parameter.

The following analyzers support setting a custom `stem_exclusion` list:
`arabic`, `armenian`, `basque`, `bengali`, `bulgarian`, `catalan`, `czech`,
`dutch`, `english`, `finnish`, `french`, `galician`,
`german`, `hindi`, `hungarian`, `indonesian`, `irish`, `italian`, `latvian`,
`lithuanian`, `norwegian`, `portuguese`, `romanian`, `russian`, `sorani`,
`spanish`, `swedish`, `turkish`.
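
For example (a minimal sketch; the index name and the excluded words are
arbitrary), the `english` analyzer can be configured to leave `skies` and
`running` unstemmed:

[source,js]
----------------------------------------------------
PUT /stem_exclusion_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english": {
          "type": "english",
          "stem_exclusion": ["skies", "running"]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE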

==== Reimplementing language analyzers

The built-in language analyzers can be reimplemented as `custom` analyzers
(as described below) in order to customize their behaviour.

NOTE: If you do not intend to exclude words from being stemmed (the
equivalent of the `stem_exclusion` parameter above), then you should remove
the `keyword_marker` token filter from the custom analyzer configuration.

[[arabic-analyzer]]
===== `arabic` analyzer

The `arabic` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /arabic_example
{
  "settings": {
    "analysis": {
      "filter": {
        "arabic_stop": {
          "type": "stop",
          "stopwords": "_arabic_" <1>
        },
        "arabic_keywords": {
          "type": "keyword_marker",
          "keywords": ["مثال"] <2>
        },
        "arabic_stemmer": {
          "type": "stemmer",
          "language": "arabic"
        }
      },
      "analyzer": {
        "rebuilt_arabic": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "decimal_digit",
            "arabic_stop",
            "arabic_normalization",
            "arabic_keywords",
            "arabic_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"arabic_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: arabic_example, first: arabic, second: rebuilt_arabic}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.
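
Once the index exists, the rebuilt analyzer can be checked against the
built-in one by analyzing sample text with the `_analyze` API and comparing
the emitted tokens (a usage sketch; the sample word is arbitrary):

[source,js]
----------------------------------------------------
GET /arabic_example/_analyze
{
  "analyzer": "rebuilt_arabic",
  "text": "مثال"
}
----------------------------------------------------
// CONSOLE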

[[armenian-analyzer]]
===== `armenian` analyzer

The `armenian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /armenian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "armenian_stop": {
          "type": "stop",
          "stopwords": "_armenian_" <1>
        },
        "armenian_keywords": {
          "type": "keyword_marker",
          "keywords": ["օրինակ"] <2>
        },
        "armenian_stemmer": {
          "type": "stemmer",
          "language": "armenian"
        }
      },
      "analyzer": {
        "rebuilt_armenian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "armenian_stop",
            "armenian_keywords",
            "armenian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"armenian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: armenian_example, first: armenian, second: rebuilt_armenian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[basque-analyzer]]
===== `basque` analyzer

The `basque` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /basque_example
{
  "settings": {
    "analysis": {
      "filter": {
        "basque_stop": {
          "type": "stop",
          "stopwords": "_basque_" <1>
        },
        "basque_keywords": {
          "type": "keyword_marker",
          "keywords": ["Adibidez"] <2>
        },
        "basque_stemmer": {
          "type": "stemmer",
          "language": "basque"
        }
      },
      "analyzer": {
        "rebuilt_basque": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "basque_stop",
            "basque_keywords",
            "basque_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"basque_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: basque_example, first: basque, second: rebuilt_basque}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[bengali-analyzer]]
===== `bengali` analyzer

The `bengali` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /bengali_example
{
  "settings": {
    "analysis": {
      "filter": {
        "bengali_stop": {
          "type": "stop",
          "stopwords": "_bengali_" <1>
        },
        "bengali_keywords": {
          "type": "keyword_marker",
          "keywords": ["উদাহরণ"] <2>
        },
        "bengali_stemmer": {
          "type": "stemmer",
          "language": "bengali"
        }
      },
      "analyzer": {
        "rebuilt_bengali": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "decimal_digit",
            "bengali_keywords",
            "indic_normalization",
            "bengali_normalization",
            "bengali_stop",
            "bengali_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"bengali_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: bengali_example, first: bengali, second: rebuilt_bengali}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[brazilian-analyzer]]
===== `brazilian` analyzer

The `brazilian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /brazilian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "brazilian_stop": {
          "type": "stop",
          "stopwords": "_brazilian_" <1>
        },
        "brazilian_keywords": {
          "type": "keyword_marker",
          "keywords": ["exemplo"] <2>
        },
        "brazilian_stemmer": {
          "type": "stemmer",
          "language": "brazilian"
        }
      },
      "analyzer": {
        "rebuilt_brazilian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "brazilian_stop",
            "brazilian_keywords",
            "brazilian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"brazilian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: brazilian_example, first: brazilian, second: rebuilt_brazilian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[bulgarian-analyzer]]
===== `bulgarian` analyzer

The `bulgarian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /bulgarian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "bulgarian_stop": {
          "type": "stop",
          "stopwords": "_bulgarian_" <1>
        },
        "bulgarian_keywords": {
          "type": "keyword_marker",
          "keywords": ["пример"] <2>
        },
        "bulgarian_stemmer": {
          "type": "stemmer",
          "language": "bulgarian"
        }
      },
      "analyzer": {
        "rebuilt_bulgarian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "bulgarian_stop",
            "bulgarian_keywords",
            "bulgarian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"bulgarian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: bulgarian_example, first: bulgarian, second: rebuilt_bulgarian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[catalan-analyzer]]
===== `catalan` analyzer

The `catalan` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /catalan_example
{
  "settings": {
    "analysis": {
      "filter": {
        "catalan_elision": {
          "type": "elision",
          "articles": [ "d", "l", "m", "n", "s", "t" ],
          "articles_case": true
        },
        "catalan_stop": {
          "type": "stop",
          "stopwords": "_catalan_" <1>
        },
        "catalan_keywords": {
          "type": "keyword_marker",
          "keywords": ["exemple"] <2>
        },
        "catalan_stemmer": {
          "type": "stemmer",
          "language": "catalan"
        }
      },
      "analyzer": {
        "rebuilt_catalan": {
          "tokenizer": "standard",
          "filter": [
            "catalan_elision",
            "lowercase",
            "catalan_stop",
            "catalan_keywords",
            "catalan_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"catalan_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: catalan_example, first: catalan, second: rebuilt_catalan}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[cjk-analyzer]]
===== `cjk` analyzer

NOTE: You may find that `icu_analyzer` in the ICU analysis plugin works better
for CJK text than the `cjk` analyzer. Experiment with your text and queries.
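
One quick way to experiment (a usage sketch; the sample text is arbitrary) is
to run the same string through the `_analyze` API and inspect the bigrams that
`cjk` emits; with the ICU plugin installed, swapping in
`"analyzer": "icu_analyzer"` shows how the two tokenizations differ:

[source,js]
----------------------------------------------------
GET /_analyze
{
  "analyzer": "cjk",
  "text": "日本語のテキスト"
}
----------------------------------------------------
// CONSOLE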

The `cjk` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /cjk_example
{
  "settings": {
    "analysis": {
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": [ <1>
            "a", "and", "are", "as", "at", "be", "but", "by", "for",
            "if", "in", "into", "is", "it", "no", "not", "of", "on",
            "or", "s", "such", "t", "that", "the", "their", "then",
            "there", "these", "they", "this", "to", "was", "will",
            "with", "www"
          ]
        }
      },
      "analyzer": {
        "rebuilt_cjk": {
          "tokenizer": "standard",
          "filter": [
            "cjk_width",
            "lowercase",
            "cjk_bigram",
            "english_stop"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"cjk_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: cjk_example, first: cjk, second: rebuilt_cjk}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters. The default stop words are
    *almost* the same as the `_english_` set, but not exactly
    the same.

[[czech-analyzer]]
===== `czech` analyzer

The `czech` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /czech_example
{
  "settings": {
    "analysis": {
      "filter": {
        "czech_stop": {
          "type": "stop",
          "stopwords": "_czech_" <1>
        },
        "czech_keywords": {
          "type": "keyword_marker",
          "keywords": ["příklad"] <2>
        },
        "czech_stemmer": {
          "type": "stemmer",
          "language": "czech"
        }
      },
      "analyzer": {
        "rebuilt_czech": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "czech_stop",
            "czech_keywords",
            "czech_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"czech_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: czech_example, first: czech, second: rebuilt_czech}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[danish-analyzer]]
===== `danish` analyzer

The `danish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /danish_example
{
  "settings": {
    "analysis": {
      "filter": {
        "danish_stop": {
          "type": "stop",
          "stopwords": "_danish_" <1>
        },
        "danish_keywords": {
          "type": "keyword_marker",
          "keywords": ["eksempel"] <2>
        },
        "danish_stemmer": {
          "type": "stemmer",
          "language": "danish"
        }
      },
      "analyzer": {
        "rebuilt_danish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "danish_stop",
            "danish_keywords",
            "danish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"danish_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: danish_example, first: danish, second: rebuilt_danish}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[dutch-analyzer]]
===== `dutch` analyzer

The `dutch` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /dutch_example
{
  "settings": {
    "analysis": {
      "filter": {
        "dutch_stop": {
          "type": "stop",
          "stopwords": "_dutch_" <1>
        },
        "dutch_keywords": {
          "type": "keyword_marker",
          "keywords": ["voorbeeld"] <2>
        },
        "dutch_stemmer": {
          "type": "stemmer",
          "language": "dutch"
        },
        "dutch_override": {
          "type": "stemmer_override",
          "rules": [
            "fiets=>fiets",
            "bromfiets=>bromfiets",
            "ei=>eier",
            "kind=>kinder"
          ]
        }
      },
      "analyzer": {
        "rebuilt_dutch": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "dutch_stop",
            "dutch_keywords",
            "dutch_override",
            "dutch_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"dutch_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: dutch_example, first: dutch, second: rebuilt_dutch}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.
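
The `dutch_override` filter above applies its custom mapping before the
stemmer runs and protects the rewritten tokens from further stemming:
`fiets=>fiets` keeps a word from being stemmed at all, while `ei=>eier`
substitutes a fixed stem. A quick check (a usage sketch; the sample text
is arbitrary):

[source,js]
----------------------------------------------------
GET /dutch_example/_analyze
{
  "analyzer": "rebuilt_dutch",
  "text": "ei fiets"
}
----------------------------------------------------
// CONSOLE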

[[english-analyzer]]
===== `english` analyzer

The `english` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /english_example
{
  "settings": {
    "analysis": {
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_" <1>
        },
        "english_keywords": {
          "type": "keyword_marker",
          "keywords": ["example"] <2>
        },
        "english_stemmer": {
          "type": "stemmer",
          "language": "english"
        },
        "english_possessive_stemmer": {
          "type": "stemmer",
          "language": "possessive_english"
        }
      },
      "analyzer": {
        "rebuilt_english": {
          "tokenizer": "standard",
          "filter": [
            "english_possessive_stemmer",
            "lowercase",
            "english_stop",
            "english_keywords",
            "english_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"english_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: english_example, first: english, second: rebuilt_english}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[finnish-analyzer]]
===== `finnish` analyzer

The `finnish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /finnish_example
{
  "settings": {
    "analysis": {
      "filter": {
        "finnish_stop": {
          "type": "stop",
          "stopwords": "_finnish_" <1>
        },
        "finnish_keywords": {
          "type": "keyword_marker",
          "keywords": ["esimerkki"] <2>
        },
        "finnish_stemmer": {
          "type": "stemmer",
          "language": "finnish"
        }
      },
      "analyzer": {
        "rebuilt_finnish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "finnish_stop",
            "finnish_keywords",
            "finnish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"finnish_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: finnish_example, first: finnish, second: rebuilt_finnish}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[french-analyzer]]
===== `french` analyzer

The `french` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /french_example
{
  "settings": {
    "analysis": {
      "filter": {
        "french_elision": {
          "type": "elision",
          "articles_case": true,
          "articles": [
            "l", "m", "t", "qu", "n", "s",
            "j", "d", "c", "jusqu", "quoiqu",
            "lorsqu", "puisqu"
          ]
        },
        "french_stop": {
          "type": "stop",
          "stopwords": "_french_" <1>
        },
        "french_keywords": {
          "type": "keyword_marker",
          "keywords": ["Exemple"] <2>
        },
        "french_stemmer": {
          "type": "stemmer",
          "language": "light_french"
        }
      },
      "analyzer": {
        "rebuilt_french": {
          "tokenizer": "standard",
          "filter": [
            "french_elision",
            "lowercase",
            "french_stop",
            "french_keywords",
            "french_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"french_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: french_example, first: french, second: rebuilt_french}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.
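
As a quick sanity check (a usage sketch; the sample text is arbitrary), the
`french_elision` filter strips the leading article from a token such as
`l'avion`, so the rebuilt analyzer indexes the bare noun:

[source,js]
----------------------------------------------------
GET /french_example/_analyze
{
  "analyzer": "rebuilt_french",
  "text": "l'avion"
}
----------------------------------------------------
// CONSOLE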

[[galician-analyzer]]
===== `galician` analyzer

The `galician` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /galician_example
{
  "settings": {
    "analysis": {
      "filter": {
        "galician_stop": {
          "type": "stop",
          "stopwords": "_galician_" <1>
        },
        "galician_keywords": {
          "type": "keyword_marker",
          "keywords": ["exemplo"] <2>
        },
        "galician_stemmer": {
          "type": "stemmer",
          "language": "galician"
        }
      },
      "analyzer": {
        "rebuilt_galician": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "galician_stop",
            "galician_keywords",
            "galician_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"galician_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: galician_example, first: galician, second: rebuilt_galician}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[german-analyzer]]
===== `german` analyzer

The `german` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /german_example
{
  "settings": {
    "analysis": {
      "filter": {
        "german_stop": {
          "type": "stop",
          "stopwords": "_german_" <1>
        },
        "german_keywords": {
          "type": "keyword_marker",
          "keywords": ["Beispiel"] <2>
        },
        "german_stemmer": {
          "type": "stemmer",
          "language": "light_german"
        }
      },
      "analyzer": {
        "rebuilt_german": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "german_stop",
            "german_keywords",
            "german_normalization",
            "german_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"german_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: german_example, first: german, second: rebuilt_german}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[greek-analyzer]]
===== `greek` analyzer

The `greek` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /greek_example
{
  "settings": {
    "analysis": {
      "filter": {
        "greek_stop": {
          "type": "stop",
          "stopwords": "_greek_" <1>
        },
        "greek_lowercase": {
          "type": "lowercase",
          "language": "greek"
        },
        "greek_keywords": {
          "type": "keyword_marker",
          "keywords": ["παράδειγμα"] <2>
        },
        "greek_stemmer": {
          "type": "stemmer",
          "language": "greek"
        }
      },
      "analyzer": {
        "rebuilt_greek": {
          "tokenizer": "standard",
          "filter": [
            "greek_lowercase",
            "greek_stop",
            "greek_keywords",
            "greek_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"greek_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: greek_example, first: greek, second: rebuilt_greek}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[hindi-analyzer]]
===== `hindi` analyzer

The `hindi` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /hindi_example
{
  "settings": {
    "analysis": {
      "filter": {
        "hindi_stop": {
          "type": "stop",
          "stopwords": "_hindi_" <1>
        },
        "hindi_keywords": {
          "type": "keyword_marker",
          "keywords": ["उदाहरण"] <2>
        },
        "hindi_stemmer": {
          "type": "stemmer",
          "language": "hindi"
        }
      },
      "analyzer": {
        "rebuilt_hindi": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "decimal_digit",
            "hindi_keywords",
            "indic_normalization",
            "hindi_normalization",
            "hindi_stop",
            "hindi_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"hindi_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: hindi_example, first: hindi, second: rebuilt_hindi}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[hungarian-analyzer]]
===== `hungarian` analyzer

The `hungarian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /hungarian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "hungarian_stop": {
          "type": "stop",
          "stopwords": "_hungarian_" <1>
        },
        "hungarian_keywords": {
          "type": "keyword_marker",
          "keywords": ["példa"] <2>
        },
        "hungarian_stemmer": {
          "type": "stemmer",
          "language": "hungarian"
        }
      },
      "analyzer": {
        "rebuilt_hungarian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "hungarian_stop",
            "hungarian_keywords",
            "hungarian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"hungarian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: hungarian_example, first: hungarian, second: rebuilt_hungarian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[indonesian-analyzer]]
===== `indonesian` analyzer

The `indonesian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /indonesian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "indonesian_stop": {
          "type": "stop",
          "stopwords": "_indonesian_" <1>
        },
        "indonesian_keywords": {
          "type": "keyword_marker",
          "keywords": ["contoh"] <2>
        },
        "indonesian_stemmer": {
          "type": "stemmer",
          "language": "indonesian"
        }
      },
      "analyzer": {
        "rebuilt_indonesian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "indonesian_stop",
            "indonesian_keywords",
            "indonesian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"indonesian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: indonesian_example, first: indonesian, second: rebuilt_indonesian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.
|
2014-06-09 16:41:25 -04:00
|
|
|
|
Analysis: Add additional Analyzers, Tokenizers, and TokenFilters from Lucene
Add `irish` analyzer
Add `sorani` analyzer (Kurdish)
Add `classic` tokenizer: specific to english text and tries to recognize hostnames, companies, acronyms, etc.
Add `thai` tokenizer: segments thai text into words.
Add `classic` tokenfilter: cleans up acronyms and possessives from classic tokenizer
Add `apostrophe` tokenfilter: removes text after apostrophe and the apostrophe itself
Add `german_normalization` tokenfilter: umlaut/sharp S normalization
Add `hindi_normalization` tokenfilter: accounts for hindi spelling differences
Add `indic_normalization` tokenfilter: accounts for different unicode representations in Indian languages
Add `sorani_normalization` tokenfilter: normalizes kurdish text
Add `scandinavian_normalization` tokenfilter: normalizes Norwegian, Danish, Swedish text
Add `scandinavian_folding` tokenfilter: much more aggressive form of `scandinavian_normalization`
Add additional languages to stemmer tokenfilter: `galician`, `minimal_galician`, `irish`, `sorani`, `light_nynorsk`, `minimal_nynorsk`
Add support access to default Thai stopword set "_thai_"
Fix some bugs and broken links in documentation.
Closes #5935
2014-07-02 14:59:18 -04:00
|
|
|
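To spot-check a rebuilt chain, the `_analyze` API can be run against the
example index. The sentence below is an arbitrary sample; note that
`contoh` is protected by the `keyword_marker` filter above, so it reaches
the output unstemmed:

[source,js]
----------------------------------------------------
GET /indonesian_example/_analyze
{
  "analyzer": "rebuilt_indonesian",
  "text": "contoh analisis teks"
}
----------------------------------------------------
// CONSOLE
// TEST[continued]
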
[[irish-analyzer]]
===== `irish` analyzer

The `irish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /irish_example
{
  "settings": {
    "analysis": {
      "filter": {
        "irish_hyphenation": {
          "type": "stop",
          "stopwords": [ "h", "n", "t" ],
          "ignore_case": true
        },
        "irish_elision": {
          "type": "elision",
          "articles": [ "d", "m", "b" ],
          "articles_case": true
        },
        "irish_stop": {
          "type": "stop",
          "stopwords": "_irish_" <1>
        },
        "irish_lowercase": {
          "type": "lowercase",
          "language": "irish"
        },
        "irish_keywords": {
          "type": "keyword_marker",
          "keywords": ["sampla"] <2>
        },
        "irish_stemmer": {
          "type": "stemmer",
          "language": "irish"
        }
      },
      "analyzer": {
        "rebuilt_irish": {
          "tokenizer": "standard",
          "filter": [
            "irish_hyphenation",
            "irish_elision",
            "irish_lowercase",
            "irish_stop",
            "irish_keywords",
            "irish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"irish_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: irish_example, first: irish, second: rebuilt_irish}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[italian-analyzer]]
===== `italian` analyzer

The `italian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /italian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "italian_elision": {
          "type": "elision",
          "articles": [
            "c", "l", "all", "dall", "dell",
            "nell", "sull", "coll", "pell",
            "gl", "agl", "dagl", "degl", "negl",
            "sugl", "un", "m", "t", "s", "v", "d"
          ],
          "articles_case": true
        },
        "italian_stop": {
          "type": "stop",
          "stopwords": "_italian_" <1>
        },
        "italian_keywords": {
          "type": "keyword_marker",
          "keywords": ["esempio"] <2>
        },
        "italian_stemmer": {
          "type": "stemmer",
          "language": "light_italian"
        }
      },
      "analyzer": {
        "rebuilt_italian": {
          "tokenizer": "standard",
          "filter": [
            "italian_elision",
            "lowercase",
            "italian_stop",
            "italian_keywords",
            "italian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"italian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: italian_example, first: italian, second: rebuilt_italian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

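The effect of `italian_elision` is easy to observe with the `_analyze`
API (the sample word is arbitrary). Because `dell` is in the articles
list, the article is stripped and only `esempio` reaches the stemmer:

[source,js]
----------------------------------------------------
GET /italian_example/_analyze
{
  "analyzer": "rebuilt_italian",
  "text": "dell'esempio"
}
----------------------------------------------------
// CONSOLE
// TEST[continued]
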
[[latvian-analyzer]]
===== `latvian` analyzer

The `latvian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /latvian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "latvian_stop": {
          "type": "stop",
          "stopwords": "_latvian_" <1>
        },
        "latvian_keywords": {
          "type": "keyword_marker",
          "keywords": ["piemērs"] <2>
        },
        "latvian_stemmer": {
          "type": "stemmer",
          "language": "latvian"
        }
      },
      "analyzer": {
        "rebuilt_latvian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "latvian_stop",
            "latvian_keywords",
            "latvian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"latvian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: latvian_example, first: latvian, second: rebuilt_latvian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[lithuanian-analyzer]]
===== `lithuanian` analyzer

The `lithuanian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /lithuanian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "lithuanian_stop": {
          "type": "stop",
          "stopwords": "_lithuanian_" <1>
        },
        "lithuanian_keywords": {
          "type": "keyword_marker",
          "keywords": ["pavyzdys"] <2>
        },
        "lithuanian_stemmer": {
          "type": "stemmer",
          "language": "lithuanian"
        }
      },
      "analyzer": {
        "rebuilt_lithuanian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "lithuanian_stop",
            "lithuanian_keywords",
            "lithuanian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"lithuanian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: lithuanian_example, first: lithuanian, second: rebuilt_lithuanian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[norwegian-analyzer]]
===== `norwegian` analyzer

The `norwegian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /norwegian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "norwegian_stop": {
          "type": "stop",
          "stopwords": "_norwegian_" <1>
        },
        "norwegian_keywords": {
          "type": "keyword_marker",
          "keywords": ["eksempel"] <2>
        },
        "norwegian_stemmer": {
          "type": "stemmer",
          "language": "norwegian"
        }
      },
      "analyzer": {
        "rebuilt_norwegian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "norwegian_stop",
            "norwegian_keywords",
            "norwegian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"norwegian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: norwegian_example, first: norwegian, second: rebuilt_norwegian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[persian-analyzer]]
===== `persian` analyzer

The `persian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /persian_example
{
  "settings": {
    "analysis": {
      "char_filter": {
        "zero_width_spaces": {
          "type": "mapping",
          "mappings": [ "\\u200C=> "] <1>
        }
      },
      "filter": {
        "persian_stop": {
          "type": "stop",
          "stopwords": "_persian_" <2>
        }
      },
      "analyzer": {
        "rebuilt_persian": {
          "tokenizer": "standard",
          "char_filter": [ "zero_width_spaces" ],
          "filter": [
            "lowercase",
            "decimal_digit",
            "arabic_normalization",
            "persian_normalization",
            "persian_stop"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: persian_example, first: persian, second: rebuilt_persian}\nendyaml\n/]
<1> Replaces zero-width non-joiners with an ASCII space.
<2> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.

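The `zero_width_spaces` mapping can be exercised with a word that
contains a zero-width non-joiner, written here with a JSON escape (the
sample word is arbitrary). The character filter turns U+200C into a plain
space before tokenization, so the tokenizer sees two words rather than
one (either part may then be dropped by the stop filter):

[source,js]
----------------------------------------------------
GET /persian_example/_analyze
{
  "analyzer": "rebuilt_persian",
  "text": "می\u200Cخواهم"
}
----------------------------------------------------
// CONSOLE
// TEST[continued]
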
[[portuguese-analyzer]]
===== `portuguese` analyzer

The `portuguese` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /portuguese_example
{
  "settings": {
    "analysis": {
      "filter": {
        "portuguese_stop": {
          "type": "stop",
          "stopwords": "_portuguese_" <1>
        },
        "portuguese_keywords": {
          "type": "keyword_marker",
          "keywords": ["exemplo"] <2>
        },
        "portuguese_stemmer": {
          "type": "stemmer",
          "language": "light_portuguese"
        }
      },
      "analyzer": {
        "rebuilt_portuguese": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "portuguese_stop",
            "portuguese_keywords",
            "portuguese_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"portuguese_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: portuguese_example, first: portuguese, second: rebuilt_portuguese}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[romanian-analyzer]]
===== `romanian` analyzer

The `romanian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /romanian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "romanian_stop": {
          "type": "stop",
          "stopwords": "_romanian_" <1>
        },
        "romanian_keywords": {
          "type": "keyword_marker",
          "keywords": ["exemplu"] <2>
        },
        "romanian_stemmer": {
          "type": "stemmer",
          "language": "romanian"
        }
      },
      "analyzer": {
        "rebuilt_romanian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "romanian_stop",
            "romanian_keywords",
            "romanian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"romanian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: romanian_example, first: romanian, second: rebuilt_romanian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[russian-analyzer]]
===== `russian` analyzer

The `russian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /russian_example
{
  "settings": {
    "analysis": {
      "filter": {
        "russian_stop": {
          "type": "stop",
          "stopwords": "_russian_" <1>
        },
        "russian_keywords": {
          "type": "keyword_marker",
          "keywords": ["пример"] <2>
        },
        "russian_stemmer": {
          "type": "stemmer",
          "language": "russian"
        }
      },
      "analyzer": {
        "rebuilt_russian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "russian_stop",
            "russian_keywords",
            "russian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"russian_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: russian_example, first: russian, second: rebuilt_russian}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[sorani-analyzer]]
===== `sorani` analyzer

The `sorani` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /sorani_example
{
  "settings": {
    "analysis": {
      "filter": {
        "sorani_stop": {
          "type": "stop",
          "stopwords": "_sorani_" <1>
        },
        "sorani_keywords": {
          "type": "keyword_marker",
          "keywords": ["mînak"] <2>
        },
        "sorani_stemmer": {
          "type": "stemmer",
          "language": "sorani"
        }
      },
      "analyzer": {
        "rebuilt_sorani": {
          "tokenizer": "standard",
          "filter": [
            "sorani_normalization",
            "lowercase",
            "decimal_digit",
            "sorani_stop",
            "sorani_keywords",
            "sorani_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"sorani_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: sorani_example, first: sorani, second: rebuilt_sorani}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

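The `decimal_digit` filter folds decimal digits from other scripts into
their ASCII equivalents, which is why it runs before the stop and stemmer
filters. With the arbitrary sample below, the Eastern Arabic digits
`٢٠١٨` come back as `2018`:

[source,js]
----------------------------------------------------
GET /sorani_example/_analyze
{
  "analyzer": "rebuilt_sorani",
  "text": "ساڵی ٢٠١٨"
}
----------------------------------------------------
// CONSOLE
// TEST[continued]
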
[[spanish-analyzer]]
===== `spanish` analyzer

The `spanish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /spanish_example
{
  "settings": {
    "analysis": {
      "filter": {
        "spanish_stop": {
          "type": "stop",
          "stopwords": "_spanish_" <1>
        },
        "spanish_keywords": {
          "type": "keyword_marker",
          "keywords": ["ejemplo"] <2>
        },
        "spanish_stemmer": {
          "type": "stemmer",
          "language": "light_spanish"
        }
      },
      "analyzer": {
        "rebuilt_spanish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "spanish_stop",
            "spanish_keywords",
            "spanish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"spanish_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: spanish_example, first: spanish, second: rebuilt_spanish}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

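The practical effect of `spanish_keywords` is that a listed term bypasses
the stemmer while everything else is stemmed. Comparing a protected and
an unprotected form side by side makes this visible (the sample text is
arbitrary): `ejemplo` is emitted unchanged because it is marked as a
keyword, while `ejemplos` is handed to the `light_spanish` stemmer:

[source,js]
----------------------------------------------------
GET /spanish_example/_analyze
{
  "analyzer": "rebuilt_spanish",
  "text": "ejemplo ejemplos"
}
----------------------------------------------------
// CONSOLE
// TEST[continued]
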
[[swedish-analyzer]]
===== `swedish` analyzer

The `swedish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /swedish_example
{
  "settings": {
    "analysis": {
      "filter": {
        "swedish_stop": {
          "type": "stop",
          "stopwords": "_swedish_" <1>
        },
        "swedish_keywords": {
          "type": "keyword_marker",
          "keywords": ["exempel"] <2>
        },
        "swedish_stemmer": {
          "type": "stemmer",
          "language": "swedish"
        }
      },
      "analyzer": {
        "rebuilt_swedish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "swedish_stop",
            "swedish_keywords",
            "swedish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"swedish_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: swedish_example, first: swedish, second: rebuilt_swedish}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[turkish-analyzer]]
===== `turkish` analyzer

The `turkish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
PUT /turkish_example
{
  "settings": {
    "analysis": {
      "filter": {
        "turkish_stop": {
          "type": "stop",
          "stopwords": "_turkish_" <1>
        },
        "turkish_lowercase": {
          "type": "lowercase",
          "language": "turkish"
        },
        "turkish_keywords": {
          "type": "keyword_marker",
          "keywords": ["örnek"] <2>
        },
        "turkish_stemmer": {
          "type": "stemmer",
          "language": "turkish"
        }
      },
      "analyzer": {
        "rebuilt_turkish": {
          "tokenizer": "standard",
          "filter": [
            "apostrophe",
            "turkish_lowercase",
            "turkish_stop",
            "turkish_keywords",
            "turkish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"turkish_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: turkish_example, first: turkish, second: rebuilt_turkish}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

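The `turkish_lowercase` filter exists because Turkish distinguishes
dotted and dotless i: uppercase `İ` lowercases to `i` and uppercase `I`
to `ı`, which the plain `lowercase` filter would get wrong. In the
arbitrary sample below, the `apostrophe` filter first strips the `'da`
suffix, so the token that reaches the stemmer starts with a dotted
lowercase `i`:

[source,js]
----------------------------------------------------
GET /turkish_example/_analyze
{
  "analyzer": "rebuilt_turkish",
  "text": "İstanbul'da"
}
----------------------------------------------------
// CONSOLE
// TEST[continued]
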
[[thai-analyzer]]
|
2014-07-07 04:06:18 -04:00
|
|
|
===== `thai` analyzer
|
2014-06-09 16:41:25 -04:00
|
|
|
|
Analysis: Add additional Analyzers, Tokenizers, and TokenFilters from Lucene
Add `irish` analyzer
Add `sorani` analyzer (Kurdish)
Add `classic` tokenizer: specific to english text and tries to recognize hostnames, companies, acronyms, etc.
Add `thai` tokenizer: segments thai text into words.
Add `classic` tokenfilter: cleans up acronyms and possessives from classic tokenizer
Add `apostrophe` tokenfilter: removes text after apostrophe and the apostrophe itself
Add `german_normalization` tokenfilter: umlaut/sharp S normalization
Add `hindi_normalization` tokenfilter: accounts for hindi spelling differences
Add `indic_normalization` tokenfilter: accounts for different unicode representations in Indian languages
Add `sorani_normalization` tokenfilter: normalizes kurdish text
Add `scandinavian_normalization` tokenfilter: normalizes Norwegian, Danish, Swedish text
Add `scandinavian_folding` tokenfilter: much more aggressive form of `scandinavian_normalization`
Add additional languages to stemmer tokenfilter: `galician`, `minimal_galician`, `irish`, `sorani`, `light_nynorsk`, `minimal_nynorsk`
Add support access to default Thai stopword set "_thai_"
Fix some bugs and broken links in documentation.
Closes #5935
2014-07-02 14:59:18 -04:00
|
|
|
The `thai` analyzer could be reimplemented as a `custom` analyzer as follows:
|
2014-06-09 16:41:25 -04:00
|
|
|
|
Analysis: Add additional Analyzers, Tokenizers, and TokenFilters from Lucene
Add `irish` analyzer
Add `sorani` analyzer (Kurdish)
Add `classic` tokenizer: specific to english text and tries to recognize hostnames, companies, acronyms, etc.
Add `thai` tokenizer: segments thai text into words.
Add `classic` tokenfilter: cleans up acronyms and possessives from classic tokenizer
Add `apostrophe` tokenfilter: removes text after apostrophe and the apostrophe itself
Add `german_normalization` tokenfilter: umlaut/sharp S normalization
Add `hindi_normalization` tokenfilter: accounts for hindi spelling differences
Add `indic_normalization` tokenfilter: accounts for different unicode representations in Indian languages
Add `sorani_normalization` tokenfilter: normalizes kurdish text
Add `scandinavian_normalization` tokenfilter: normalizes Norwegian, Danish, Swedish text
Add `scandinavian_folding` tokenfilter: much more aggressive form of `scandinavian_normalization`
Add additional languages to stemmer tokenfilter: `galician`, `minimal_galician`, `irish`, `sorani`, `light_nynorsk`, `minimal_nynorsk`
Add support access to default Thai stopword set "_thai_"
Fix some bugs and broken links in documentation.
Closes #5935
2014-07-02 14:59:18 -04:00
|
|
|
[source,js]
----------------------------------------------------
PUT /thai_example
{
  "settings": {
    "analysis": {
      "filter": {
        "thai_stop": {
          "type":       "stop",
          "stopwords":  "_thai_" <1>
        }
      },
      "analyzer": {
        "rebuilt_thai": {
          "tokenizer":  "thai",
          "filter": [
            "lowercase",
            "decimal_digit",
            "thai_stop"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
// CONSOLE
// TEST[s/"thai_keywords",//]
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: thai_example, first: thai, second: rebuilt_thai}\nendyaml\n/]
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
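
For example, a minimal sketch of overriding the defaults with an explicit
word list (the index name and the stopwords shown here are illustrative
only, not a recommended list):

[source,js]
----------------------------------------------------
PUT /thai_stop_override_example
{
  "settings": {
    "analysis": {
      "filter": {
        "thai_stop": {
          "type":      "stop",
          "stopwords": [ "และ", "หรือ", "แต่" ] <1>
        }
      }
    }
  }
}
----------------------------------------------------
<1> An explicit array replaces the default `_thai_` set. Alternatively,
    `stopwords_path` can reference a file of stopwords on disk.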