[Docs] Fix bad link

relates #30397
Jim Ferenczi 2018-05-04 22:07:12 +02:00
parent d7c2a99347
commit ec187ed3be


@@ -15,7 +15,7 @@ The `nori` analyzer consists of the following tokenizer and token filters:
 * <<analysis-nori-tokenizer,`nori_tokenizer`>>
 * <<analysis-nori-speech,`nori_part_of_speech`>> token filter
-* <<analysis-nori-reading,`nori_readingform`>> token filter
+* <<analysis-nori-readingform,`nori_readingform`>> token filter
 * {ref}/analysis-lowercase-tokenfilter.html[`lowercase`] token filter
 It supports the `decompound_mode` and `user_dictionary` settings from
@@ -379,12 +379,12 @@ PUT nori_sample
 GET nori_sample/_analyze
 {
   "analyzer": "my_analyzer",
-  "text": "鄕歌" <1>
+  "text": "鄕歌" <1>
 }
 --------------------------------------------------
 // CONSOLE
-<1> Hyangga
+<1> A token written in Hanja: Hyangga
 Which responds with:
@@ -392,7 +392,7 @@ Which responds with:
 --------------------------------------------------
 {
   "tokens" : [ {
-    "token" : "향가", <2>
+    "token" : "향가", <1>
     "start_offset" : 0,
     "end_offset" : 2,
     "type" : "word",
@@ -402,5 +402,4 @@ Which responds with:
 --------------------------------------------------
 // TESTRESPONSE
-<1> A token written in Hanja.
-<2> The Hanja form is replaced by the Hangul translation.
+<1> The Hanja form is replaced by the Hangul translation.
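The example being corrected above sends the Hanja string 鄕歌 through an analyzer that includes the `nori_readingform` token filter, which replaces the Hanja form with its Hangul reading 향가. As a minimal sketch of the same exchange (the `nori_sample` index and `my_analyzer` analyzer come from the docs snippet; the helper function names here are hypothetical, and the response body is echoed from the diff rather than fetched from a live cluster):

```python
import json

def analyze_request(text: str) -> str:
    # Body for GET nori_sample/_analyze, mirroring the docs example.
    return json.dumps({"analyzer": "my_analyzer", "text": text}, ensure_ascii=False)

def token_strings(response_body: str) -> list:
    # Extract just the token strings from an _analyze response.
    return [t["token"] for t in json.loads(response_body)["tokens"]]

# Sample response matching the one shown in the diff: the Hanja form
# 鄕歌 comes back as its Hangul reading 향가.
sample_response = json.dumps({
    "tokens": [{
        "token": "향가",
        "start_offset": 0,
        "end_offset": 2,
        "type": "word",
        "position": 0,
    }]
}, ensure_ascii=False)

print(analyze_request("鄕歌"))
print(token_strings(sample_response))  # -> ['향가']
```

Against a running cluster with the `analysis-nori` plugin installed, the request body would be sent to the `_analyze` endpoint exactly as in the `// CONSOLE` snippet above.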