[[analysis-lang-analyzer]]
=== Language Analyzers

A set of analyzers aimed at analyzing specific language text. The
following types are supported:
<<arabic-analyzer,`arabic`>>,
<<armenian-analyzer,`armenian`>>,
<<basque-analyzer,`basque`>>,
<<brazilian-analyzer,`brazilian`>>,
<<bulgarian-analyzer,`bulgarian`>>,
<<catalan-analyzer,`catalan`>>,
<<cjk-analyzer,`cjk`>>,
<<czech-analyzer,`czech`>>,
<<danish-analyzer,`danish`>>,
<<dutch-analyzer,`dutch`>>,
<<english-analyzer,`english`>>,
<<finnish-analyzer,`finnish`>>,
<<french-analyzer,`french`>>,
<<galician-analyzer,`galician`>>,
<<german-analyzer,`german`>>,
<<greek-analyzer,`greek`>>,
<<hindi-analyzer,`hindi`>>,
<<hungarian-analyzer,`hungarian`>>,
<<indonesian-analyzer,`indonesian`>>,
<<irish-analyzer,`irish`>>,
<<italian-analyzer,`italian`>>,
<<latvian-analyzer,`latvian`>>,
<<lithuanian-analyzer,`lithuanian`>>,
<<norwegian-analyzer,`norwegian`>>,
<<persian-analyzer,`persian`>>,
<<portuguese-analyzer,`portuguese`>>,
<<romanian-analyzer,`romanian`>>,
<<russian-analyzer,`russian`>>,
<<sorani-analyzer,`sorani`>>,
<<spanish-analyzer,`spanish`>>,
<<swedish-analyzer,`swedish`>>,
<<turkish-analyzer,`turkish`>>,
<<thai-analyzer,`thai`>>.

==== Configuring language analyzers

===== Stopwords

All analyzers support setting custom `stopwords`, either inline in the
analyzer configuration or in an external stopwords file referenced with
the `stopwords_path` setting. See the <<analysis-stop-analyzer,Stop Analyzer>>
for more details.
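
For example, a custom stopword list can be supplied directly in the analyzer
definition. The analyzer name `my_english` and the word list below are purely
illustrative:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english": {
          "type": "english",
          "stopwords": [ "a", "an", "the" ]
        }
      }
    }
  }
}
----------------------------------------------------

Alternatively, `stopwords_path` can point to a file (relative to the config
directory) containing one stopword per line.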

===== Excluding words from stemming

The `stem_exclusion` parameter allows you to specify an array
of lowercase words that should not be stemmed (see the example below).
Internally, this functionality is implemented by adding the
<<analysis-keyword-marker-tokenfilter,`keyword_marker` token filter>>
with the `keywords` set to the value of the `stem_exclusion` parameter.

The following analyzers support setting a custom `stem_exclusion` list:
`arabic`, `armenian`, `basque`, `bulgarian`, `catalan`, `czech`, `dutch`,
`english`, `finnish`, `french`, `galician`, `german`, `hindi`, `hungarian`,
`indonesian`, `irish`, `italian`, `latvian`, `lithuanian`, `norwegian`,
`portuguese`, `romanian`, `russian`, `sorani`, `spanish`, `swedish`,
`turkish`.
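
For instance, an `english` analyzer that leaves a couple of words unstemmed
might be configured as follows (the analyzer name `my_english_no_stem` and
the word list are illustrative only):

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english_no_stem": {
          "type": "english",
          "stem_exclusion": [ "organization", "organizations" ]
        }
      }
    }
  }
}
----------------------------------------------------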

==== Reimplementing language analyzers

The built-in language analyzers can be reimplemented as `custom` analyzers
(as described below) in order to customize their behaviour.

NOTE: If you do not intend to exclude words from being stemmed (the
equivalent of the `stem_exclusion` parameter above), then you should remove
the `keyword_marker` token filter from the custom analyzer configuration.

[[arabic-analyzer]]
===== `arabic` analyzer

The `arabic` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "arabic_stop": {
          "type": "stop",
          "stopwords": "_arabic_" <1>
        },
        "arabic_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "arabic_stemmer": {
          "type": "stemmer",
          "language": "arabic"
        }
      },
      "analyzer": {
        "arabic": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "arabic_stop",
            "arabic_normalization",
            "arabic_keywords",
            "arabic_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.
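
The settings shown in this section are intended as the body of an
index-creation request (for example `PUT /arabic_example`; the index name is
arbitrary). Once the index exists, the custom analyzer can be spot-checked by
sending a request body like the following to `GET /arabic_example/_analyze`:

[source,js]
----------------------------------------------------
{
  "analyzer": "arabic",
  "text": "الكتاب"
}
----------------------------------------------------

The same check applies to every analyzer below; only the index name, the
analyzer name, and the sample text change.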

[[armenian-analyzer]]
===== `armenian` analyzer

The `armenian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "armenian_stop": {
          "type": "stop",
          "stopwords": "_armenian_" <1>
        },
        "armenian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "armenian_stemmer": {
          "type": "stemmer",
          "language": "armenian"
        }
      },
      "analyzer": {
        "armenian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "armenian_stop",
            "armenian_keywords",
            "armenian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[basque-analyzer]]
===== `basque` analyzer

The `basque` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "basque_stop": {
          "type": "stop",
          "stopwords": "_basque_" <1>
        },
        "basque_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "basque_stemmer": {
          "type": "stemmer",
          "language": "basque"
        }
      },
      "analyzer": {
        "basque": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "basque_stop",
            "basque_keywords",
            "basque_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[brazilian-analyzer]]
===== `brazilian` analyzer

The `brazilian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "brazilian_stop": {
          "type": "stop",
          "stopwords": "_brazilian_" <1>
        },
        "brazilian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "brazilian_stemmer": {
          "type": "stemmer",
          "language": "brazilian"
        }
      },
      "analyzer": {
        "brazilian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "brazilian_stop",
            "brazilian_keywords",
            "brazilian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[bulgarian-analyzer]]
===== `bulgarian` analyzer

The `bulgarian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "bulgarian_stop": {
          "type": "stop",
          "stopwords": "_bulgarian_" <1>
        },
        "bulgarian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "bulgarian_stemmer": {
          "type": "stemmer",
          "language": "bulgarian"
        }
      },
      "analyzer": {
        "bulgarian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "bulgarian_stop",
            "bulgarian_keywords",
            "bulgarian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[catalan-analyzer]]
===== `catalan` analyzer

The `catalan` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "catalan_elision": {
          "type": "elision",
          "articles": [ "d", "l", "m", "n", "s", "t" ]
        },
        "catalan_stop": {
          "type": "stop",
          "stopwords": "_catalan_" <1>
        },
        "catalan_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "catalan_stemmer": {
          "type": "stemmer",
          "language": "catalan"
        }
      },
      "analyzer": {
        "catalan": {
          "tokenizer": "standard",
          "filter": [
            "catalan_elision",
            "lowercase",
            "catalan_stop",
            "catalan_keywords",
            "catalan_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[cjk-analyzer]]
===== `cjk` analyzer

The `cjk` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_" <1>
        }
      },
      "analyzer": {
        "cjk": {
          "tokenizer": "standard",
          "filter": [
            "cjk_width",
            "lowercase",
            "cjk_bigram",
            "english_stop"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.

[[czech-analyzer]]
===== `czech` analyzer

The `czech` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "czech_stop": {
          "type": "stop",
          "stopwords": "_czech_" <1>
        },
        "czech_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "czech_stemmer": {
          "type": "stemmer",
          "language": "czech"
        }
      },
      "analyzer": {
        "czech": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "czech_stop",
            "czech_keywords",
            "czech_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[danish-analyzer]]
===== `danish` analyzer

The `danish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "danish_stop": {
          "type": "stop",
          "stopwords": "_danish_" <1>
        },
        "danish_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "danish_stemmer": {
          "type": "stemmer",
          "language": "danish"
        }
      },
      "analyzer": {
        "danish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "danish_stop",
            "danish_keywords",
            "danish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[dutch-analyzer]]
===== `dutch` analyzer

The `dutch` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "dutch_stop": {
          "type": "stop",
          "stopwords": "_dutch_" <1>
        },
        "dutch_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "dutch_stemmer": {
          "type": "stemmer",
          "language": "dutch"
        },
        "dutch_override": {
          "type": "stemmer_override",
          "rules": [
            "fiets=>fiets",
            "bromfiets=>bromfiets",
            "ei=>eier",
            "kind=>kinder"
          ]
        }
      },
      "analyzer": {
        "dutch": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "dutch_stop",
            "dutch_keywords",
            "dutch_override",
            "dutch_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[english-analyzer]]
===== `english` analyzer

The `english` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_" <1>
        },
        "english_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "english_stemmer": {
          "type": "stemmer",
          "language": "english"
        },
        "english_possessive_stemmer": {
          "type": "stemmer",
          "language": "possessive_english"
        }
      },
      "analyzer": {
        "english": {
          "tokenizer": "standard",
          "filter": [
            "english_possessive_stemmer",
            "lowercase",
            "english_stop",
            "english_keywords",
            "english_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[finnish-analyzer]]
===== `finnish` analyzer

The `finnish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "finnish_stop": {
          "type": "stop",
          "stopwords": "_finnish_" <1>
        },
        "finnish_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "finnish_stemmer": {
          "type": "stemmer",
          "language": "finnish"
        }
      },
      "analyzer": {
        "finnish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "finnish_stop",
            "finnish_keywords",
            "finnish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[french-analyzer]]
===== `french` analyzer

The `french` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "french_elision": {
          "type": "elision",
          "articles": [ "l", "m", "t", "qu", "n", "s",
            "j", "d", "c", "jusqu", "quoiqu",
            "lorsqu", "puisqu"
          ]
        },
        "french_stop": {
          "type": "stop",
          "stopwords": "_french_" <1>
        },
        "french_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "french_stemmer": {
          "type": "stemmer",
          "language": "light_french"
        }
      },
      "analyzer": {
        "french": {
          "tokenizer": "standard",
          "filter": [
            "french_elision",
            "lowercase",
            "french_stop",
            "french_keywords",
            "french_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[galician-analyzer]]
===== `galician` analyzer

The `galician` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "galician_stop": {
          "type": "stop",
          "stopwords": "_galician_" <1>
        },
        "galician_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "galician_stemmer": {
          "type": "stemmer",
          "language": "galician"
        }
      },
      "analyzer": {
        "galician": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "galician_stop",
            "galician_keywords",
            "galician_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[german-analyzer]]
===== `german` analyzer

The `german` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "german_stop": {
          "type": "stop",
          "stopwords": "_german_" <1>
        },
        "german_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "german_stemmer": {
          "type": "stemmer",
          "language": "light_german"
        }
      },
      "analyzer": {
        "german": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "german_stop",
            "german_keywords",
            "german_normalization",
            "german_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[greek-analyzer]]
===== `greek` analyzer

The `greek` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "greek_stop": {
          "type": "stop",
          "stopwords": "_greek_" <1>
        },
        "greek_lowercase": {
          "type": "lowercase",
          "language": "greek"
        },
        "greek_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "greek_stemmer": {
          "type": "stemmer",
          "language": "greek"
        }
      },
      "analyzer": {
        "greek": {
          "tokenizer": "standard",
          "filter": [
            "greek_lowercase",
            "greek_stop",
            "greek_keywords",
            "greek_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[hindi-analyzer]]
===== `hindi` analyzer

The `hindi` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "hindi_stop": {
          "type": "stop",
          "stopwords": "_hindi_" <1>
        },
        "hindi_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "hindi_stemmer": {
          "type": "stemmer",
          "language": "hindi"
        }
      },
      "analyzer": {
        "hindi": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "indic_normalization",
            "hindi_normalization",
            "hindi_stop",
            "hindi_keywords",
            "hindi_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[hungarian-analyzer]]
===== `hungarian` analyzer

The `hungarian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "hungarian_stop": {
          "type": "stop",
          "stopwords": "_hungarian_" <1>
        },
        "hungarian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "hungarian_stemmer": {
          "type": "stemmer",
          "language": "hungarian"
        }
      },
      "analyzer": {
        "hungarian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "hungarian_stop",
            "hungarian_keywords",
            "hungarian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[indonesian-analyzer]]
===== `indonesian` analyzer

The `indonesian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "indonesian_stop": {
          "type": "stop",
          "stopwords": "_indonesian_" <1>
        },
        "indonesian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "indonesian_stemmer": {
          "type": "stemmer",
          "language": "indonesian"
        }
      },
      "analyzer": {
        "indonesian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "indonesian_stop",
            "indonesian_keywords",
            "indonesian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[irish-analyzer]]
===== `irish` analyzer

The `irish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "irish_elision": {
          "type": "elision",
          "articles": [ "h", "n", "t" ]
        },
        "irish_stop": {
          "type": "stop",
          "stopwords": "_irish_" <1>
        },
        "irish_lowercase": {
          "type": "lowercase",
          "language": "irish"
        },
        "irish_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "irish_stemmer": {
          "type": "stemmer",
          "language": "irish"
        }
      },
      "analyzer": {
        "irish": {
          "tokenizer": "standard",
          "filter": [
            "irish_stop",
            "irish_elision",
            "irish_lowercase",
            "irish_keywords",
            "irish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[italian-analyzer]]
===== `italian` analyzer

The `italian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "italian_elision": {
          "type": "elision",
          "articles": [
            "c", "l", "all", "dall", "dell",
            "nell", "sull", "coll", "pell",
            "gl", "agl", "dagl", "degl", "negl",
            "sugl", "un", "m", "t", "s", "v", "d"
          ]
        },
        "italian_stop": {
          "type": "stop",
          "stopwords": "_italian_" <1>
        },
        "italian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "italian_stemmer": {
          "type": "stemmer",
          "language": "light_italian"
        }
      },
      "analyzer": {
        "italian": {
          "tokenizer": "standard",
          "filter": [
            "italian_elision",
            "lowercase",
            "italian_stop",
            "italian_keywords",
            "italian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[latvian-analyzer]]
===== `latvian` analyzer

The `latvian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "latvian_stop": {
          "type": "stop",
          "stopwords": "_latvian_" <1>
        },
        "latvian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "latvian_stemmer": {
          "type": "stemmer",
          "language": "latvian"
        }
      },
      "analyzer": {
        "latvian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "latvian_stop",
            "latvian_keywords",
            "latvian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[lithuanian-analyzer]]
===== `lithuanian` analyzer

The `lithuanian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "lithuanian_stop": {
          "type": "stop",
          "stopwords": "_lithuanian_" <1>
        },
        "lithuanian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "lithuanian_stemmer": {
          "type": "stemmer",
          "language": "lithuanian"
        }
      },
      "analyzer": {
        "lithuanian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "lithuanian_stop",
            "lithuanian_keywords",
            "lithuanian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[norwegian-analyzer]]
===== `norwegian` analyzer

The `norwegian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "norwegian_stop": {
          "type": "stop",
          "stopwords": "_norwegian_" <1>
        },
        "norwegian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "norwegian_stemmer": {
          "type": "stemmer",
          "language": "norwegian"
        }
      },
      "analyzer": {
        "norwegian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "norwegian_stop",
            "norwegian_keywords",
            "norwegian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[persian-analyzer]]
===== `persian` analyzer

The `persian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "char_filter": {
        "zero_width_spaces": {
          "type": "mapping",
          "mappings": [ "\\u200C=> "] <1>
        }
      },
      "filter": {
        "persian_stop": {
          "type": "stop",
          "stopwords": "_persian_" <2>
        }
      },
      "analyzer": {
        "persian": {
          "tokenizer": "standard",
          "char_filter": [ "zero_width_spaces" ],
          "filter": [
            "lowercase",
            "arabic_normalization",
            "persian_normalization",
            "persian_stop"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> Replaces zero-width non-joiners with an ASCII space.
<2> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.

[[portuguese-analyzer]]
===== `portuguese` analyzer

The `portuguese` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "portuguese_stop": {
          "type": "stop",
          "stopwords": "_portuguese_" <1>
        },
        "portuguese_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "portuguese_stemmer": {
          "type": "stemmer",
          "language": "light_portuguese"
        }
      },
      "analyzer": {
        "portuguese": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "portuguese_stop",
            "portuguese_keywords",
            "portuguese_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[romanian-analyzer]]
===== `romanian` analyzer

The `romanian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "romanian_stop": {
          "type": "stop",
          "stopwords": "_romanian_" <1>
        },
        "romanian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "romanian_stemmer": {
          "type": "stemmer",
          "language": "romanian"
        }
      },
      "analyzer": {
        "romanian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "romanian_stop",
            "romanian_keywords",
            "romanian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[russian-analyzer]]
===== `russian` analyzer

The `russian` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "russian_stop": {
          "type": "stop",
          "stopwords": "_russian_" <1>
        },
        "russian_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "russian_stemmer": {
          "type": "stemmer",
          "language": "russian"
        }
      },
      "analyzer": {
        "russian": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "russian_stop",
            "russian_keywords",
            "russian_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[sorani-analyzer]]
===== `sorani` analyzer

The `sorani` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "sorani_stop": {
          "type": "stop",
          "stopwords": "_sorani_" <1>
        },
        "sorani_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "sorani_stemmer": {
          "type": "stemmer",
          "language": "sorani"
        }
      },
      "analyzer": {
        "sorani": {
          "tokenizer": "standard",
          "filter": [
            "sorani_normalization",
            "lowercase",
            "sorani_stop",
            "sorani_keywords",
            "sorani_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[spanish-analyzer]]
===== `spanish` analyzer

The `spanish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "spanish_stop": {
          "type": "stop",
          "stopwords": "_spanish_" <1>
        },
        "spanish_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "spanish_stemmer": {
          "type": "stemmer",
          "language": "light_spanish"
        }
      },
      "analyzer": {
        "spanish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "spanish_stop",
            "spanish_keywords",
            "spanish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[swedish-analyzer]]
===== `swedish` analyzer

The `swedish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "swedish_stop": {
          "type": "stop",
          "stopwords": "_swedish_" <1>
        },
        "swedish_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "swedish_stemmer": {
          "type": "stemmer",
          "language": "swedish"
        }
      },
      "analyzer": {
        "swedish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "swedish_stop",
            "swedish_keywords",
            "swedish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[turkish-analyzer]]
===== `turkish` analyzer

The `turkish` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "turkish_stop": {
          "type": "stop",
          "stopwords": "_turkish_" <1>
        },
        "turkish_lowercase": {
          "type": "lowercase",
          "language": "turkish"
        },
        "turkish_keywords": {
          "type": "keyword_marker",
          "keywords": [] <2>
        },
        "turkish_stemmer": {
          "type": "stemmer",
          "language": "turkish"
        }
      },
      "analyzer": {
        "turkish": {
          "tokenizer": "standard",
          "filter": [
            "apostrophe",
            "turkish_lowercase",
            "turkish_stop",
            "turkish_keywords",
            "turkish_stemmer"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.
<2> This filter should be removed unless there are words which should
    be excluded from stemming.

[[thai-analyzer]]
===== `thai` analyzer

The `thai` analyzer could be reimplemented as a `custom` analyzer as follows:

[source,js]
----------------------------------------------------
{
  "settings": {
    "analysis": {
      "filter": {
        "thai_stop": {
          "type": "stop",
          "stopwords": "_thai_" <1>
        }
      },
      "analyzer": {
        "thai": {
          "tokenizer": "thai",
          "filter": [
            "lowercase",
            "thai_stop"
          ]
        }
      }
    }
  }
}
----------------------------------------------------
<1> The default stopwords can be overridden with the `stopwords`
    or `stopwords_path` parameters.