Docs: Fix language on a few snippets

They aren't `js`; they are their own thing.

Relates to #18160
Nik Everett 2017-03-22 15:56:38 -04:00
parent 257a7d77ed
commit 1c1b29400b
5 changed files with 9 additions and 13 deletions

View File

@@ -80,8 +80,6 @@ buildRestTests.expectedUnconvertedCandidates = [
'reference/analysis/tokenfilters/stop-tokenfilter.asciidoc',
'reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc',
'reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc',
'reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc',
'reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc',
'reference/cat/snapshots.asciidoc',
'reference/cat/templates.asciidoc',
'reference/cat/thread_pool.asciidoc',

View File

@@ -3,7 +3,7 @@
experimental[]
The `synonym_graph` token filter allows to easily handle synonyms,
including multi-word synonyms correctly during the analysis process.
In order to properly handle multi-word synonyms this token filter
@@ -13,8 +13,8 @@ http://blog.mikemccandless.com/2012/04/lucenes-tokenstreams-are-actually.html[Lu
["NOTE",id="synonym-graph-index-note"]
===============================
This token filter is designed to be used as part of a search analyzer
only. If you want to apply synonyms during indexing please use the
standard <<analysis-synonym-tokenfilter,synonym token filter>>.
===============================
@@ -45,8 +45,8 @@ Here is an example:
The above configures a `search_synonyms` filter, with a path of
`analysis/synonym.txt` (relative to the `config` location). The
`search_synonyms` analyzer is then configured with the filter.
Additional settings are: `ignore_case` (defaults to `false`), and
`expand` (defaults to `true`).
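The settings block this paragraph describes is not shown in the hunk above; as a rough sketch only (the index name, tokenizer, and exact analyzer wiring are assumed here, not taken from the page), it would look something like:
[source,js]
--------------------------------------------------
# illustrative sketch only -- index name, tokenizer, and analyzer wiring are assumed
PUT /test_index
{
  "settings": {
    "analysis": {
      "filter": {
        "search_synonyms": {
          "type": "synonym_graph",
          "synonyms_path": "analysis/synonym.txt"
        }
      },
      "analyzer": {
        "search_synonyms": {
          "tokenizer": "whitespace",
          "filter": [ "lowercase", "search_synonyms" ]
        }
      }
    }
  }
}
--------------------------------------------------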
The `tokenizer` parameter controls the tokenizers that will be used to
@@ -106,7 +106,7 @@ configuration file (note use of `synonyms` instead of `synonyms_path`):
"synonyms" : [
"lol, laughing out loud",
"universe, cosmos"
]
}
}
}
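For context, the full filter definition trimmed by the hunk above would look roughly like the sketch below; only the `synonyms` array comes from the diff, the rest is assumed:
[source,js]
--------------------------------------------------
# sketch only -- everything outside the "synonyms" array is assumed
PUT /test_index
{
  "settings": {
    "analysis": {
      "filter": {
        "search_synonyms": {
          "type": "synonym_graph",
          "synonyms": [
            "lol, laughing out loud",
            "universe, cosmos"
          ]
        }
      }
    }
  }
}
--------------------------------------------------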

View File

@@ -75,7 +75,7 @@ Advance settings include:
A custom type mapping table, for example (when configured
using `type_table_path`):
[source,js]
[source,type_table]
--------------------------------------------------
# Map the $, %, '.', and ',' characters to DIGIT
# This might be useful for financial data.
@@ -94,4 +94,3 @@ NOTE: Using a tokenizer like the `standard` tokenizer may interfere with
the `catenate_*` and `preserve_original` parameters, as the original
string may already have lost punctuation during tokenization. Instead,
you may want to use the `whitespace` tokenizer.
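For reference, here is a hedged sketch of how such a mapping file is wired into a filter; the index, filter, and analyzer names and the `analysis/type_table.txt` path are placeholders, not taken from the page:
[source,js]
--------------------------------------------------
# sketch only -- names and the type_table_path value are placeholders
PUT /test_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_word_delimiter_graph": {
          "type": "word_delimiter_graph",
          "type_table_path": "analysis/type_table.txt"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "whitespace",
          "filter": [ "my_word_delimiter_graph" ]
        }
      }
    }
  }
}
--------------------------------------------------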

View File

@@ -64,7 +64,7 @@ Advance settings include:
A custom type mapping table, for example (when configured
using `type_table_path`):
[source,js]
[source,type_table]
--------------------------------------------------
# Map the $, %, '.', and ',' characters to DIGIT
# This might be useful for financial data.
@@ -83,4 +83,3 @@ NOTE: Using a tokenizer like the `standard` tokenizer may interfere with
the `catenate_*` and `preserve_original` parameters, as the original
string may already have lost punctuation during tokenization. Instead,
you may want to use the `whitespace` tokenizer.
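The same mapping can also be supplied inline via `type_table`; a quick, hypothetical way to try it out is through the `_analyze` API (the sample text and the inline mappings below are illustrative only):
[source,js]
--------------------------------------------------
# illustrative only -- the inline type_table mirrors the mapping file shown above
GET _analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "word_delimiter",
      "type_table": [ "$ => DIGIT", "% => DIGIT", ". => DIGIT", ", => DIGIT" ]
    }
  ],
  "text": "$1,000.50 in fees"
}
--------------------------------------------------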

View File

@@ -33,7 +33,7 @@ After starting Elasticsearch, you can see whether this setting was applied
successfully by checking the value of `mlockall` in the output from this
request:
[source,sh]
[source,js]
--------------
GET _nodes?filter_path=**.mlockall
--------------
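If the setting took effect, the filtered response should report `mlockall` as `true` for each node. A sketch of the expected shape (the node id is a placeholder and the exact structure is from memory, not from this page):
[source,js]
--------------
{
  "nodes": {
    "<node_id>": {
      "process": {
        "mlockall": true
      }
    }
  }
}
--------------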