[[highlighting]]
=== Highlighting

Highlighters enable you to get highlighted snippets from one or more fields
in your search results so you can show users where the query matches are.
When you request highlights, the response contains an additional `highlight`
element for each search hit that includes the highlighted fields and the
highlighted fragments.

NOTE: Highlighters don't reflect the boolean logic of a query when extracting
terms to highlight. Thus, for some complex boolean queries (e.g. nested boolean
queries, queries using `minimum_should_match` etc.), parts of documents may be
highlighted that don't correspond to query matches.

Highlighting requires the actual content of a field. If the field is not
stored (the mapping does not set `store` to `true`), the actual `_source` is
loaded and the relevant field is extracted from `_source`.

For example, to get highlights for the `content` field in each search hit
using the default highlighter, include a `highlight` object in
the request body that specifies the `content` field:

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "content": "kimchy" }
  },
  "highlight": {
    "fields": {
      "content": {}
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

{es} supports three highlighters: `unified`, `plain`, and `fvh` (fast vector
highlighter). You can specify the highlighter `type` you want to use
for each field.

[[unified-highlighter]]
==== Unified highlighter

The `unified` highlighter uses the Lucene Unified Highlighter. This
highlighter breaks the text into sentences and uses the BM25 algorithm to score
individual sentences as if they were documents in the corpus. It also supports
accurate phrase and multi-term (fuzzy, prefix, regex) highlighting. This is the
default highlighter.

[[plain-highlighter]]
==== Plain highlighter

The `plain` highlighter uses the standard Lucene highlighter. It attempts to
reflect the query matching logic in terms of understanding word importance and
any word positioning criteria in phrase queries.

[WARNING]
The `plain` highlighter works best for highlighting simple query matches in a
single field. To accurately reflect query logic, it creates a tiny in-memory
index and re-runs the original query criteria through Lucene's query execution
planner to get access to low-level match information for the current document.
This is repeated for every field and every document that needs to be highlighted.
If you want to highlight a lot of fields in a lot of documents with complex
queries, we recommend using the `unified` highlighter on `postings` or `term_vector` fields.

[[fast-vector-highlighter]]
==== Fast vector highlighter

The `fvh` highlighter uses the Lucene Fast Vector highlighter.
This highlighter can be used on fields with `term_vector` set to
`with_positions_offsets` in the mapping. The fast vector highlighter:

* Can be customized with a <<boundary-scanners,`boundary_scanner`>>.
* Requires setting `term_vector` to `with_positions_offsets` which
increases the size of the index
* Can combine matches from multiple fields into one result. See
`matched_fields`
* Can assign different weights to matches at different positions allowing
for things like phrase matches being sorted above term matches when
highlighting a Boosting Query that boosts phrase matches over term matches

[WARNING]
The `fvh` highlighter does not support span queries. If you need support for
span queries, try an alternative highlighter, such as the `unified` highlighter.
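
Because the `fvh` highlighter needs term vectors with positions and offsets, the
field has to be mapped accordingly. A minimal mapping sketch, assuming a
hypothetical `my-index` index with a `comment` field:

[source,console]
--------------------------------------------------
PUT /my-index
{
  "mappings": {
    "properties": {
      "comment": {
        "type": "text",
        "term_vector": "with_positions_offsets"
      }
    }
  }
}
--------------------------------------------------

Both the index name and the field name here are placeholders; only the
`term_vector` value is significant for the `fvh` highlighter.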

[[offsets-strategy]]
==== Offsets strategy

To create meaningful search snippets from the terms being queried,
the highlighter needs to know the start and end character offsets of each word
in the original text. These offsets can be obtained from:

* The postings list. If `index_options` is set to `offsets` in the mapping,
the `unified` highlighter uses this information to highlight documents without
re-analyzing the text. It re-runs the original query directly on the postings
and extracts the matching offsets from the index, limiting the collection to
the highlighted documents. This is important if you have large fields because
it doesn't require reanalyzing the text to be highlighted. It also requires less
disk space than using `term_vectors`. A mapping sketch that enables this option
is shown after this list.

* Term vectors. If `term_vector` information is provided by setting
`term_vector` to `with_positions_offsets` in the mapping, the `unified`
highlighter automatically uses the `term_vector` to highlight the field.
It is especially fast for large fields (> `1MB`) and for highlighting multi-term queries like
`prefix` or `wildcard` because it can access the dictionary of terms for each document.
The `fvh` highlighter always uses term vectors.

* Plain highlighting. This mode is used by the `unified` highlighter when there is no other alternative.
It creates a tiny in-memory index and re-runs the original query criteria through
Lucene's query execution planner to get access to low-level match information on
the current document. This is repeated for every field and every document that
needs highlighting. The `plain` highlighter always uses plain highlighting.
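
As referenced above, a minimal mapping sketch, assuming a hypothetical
`my-index` index with a `comment` field, that stores offsets in the postings
list so the `unified` highlighter can use them:

[source,console]
--------------------------------------------------
PUT /my-index
{
  "mappings": {
    "properties": {
      "comment": {
        "type": "text",
        "index_options": "offsets"
      }
    }
  }
}
--------------------------------------------------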

[WARNING]
Plain highlighting for large texts may require a substantial amount of time and memory.
To protect against this, the maximum number of text characters that will be analyzed
is limited to 1000000 by default. This limit can be changed
for a particular index with the index setting `index.highlight.max_analyzed_offset`.
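
A sketch, assuming a hypothetical `my-index` index and an arbitrary example
value, that raises this limit when the index is created:

[source,console]
--------------------------------------------------
PUT /my-index
{
  "settings": {
    "index.highlight.max_analyzed_offset": 2000000
  }
}
--------------------------------------------------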

[[highlighting-settings]]
==== Highlighting settings

Highlighting settings can be set on a global level and overridden at
the field level.

boundary_chars:: A string that contains each boundary character.
Defaults to `.,!? \t\n`.

boundary_max_scan:: How far to scan for boundary characters. Defaults to `20`.

[[boundary-scanners]]
boundary_scanner:: Specifies how to break the highlighted fragments: `chars`,
`sentence`, or `word`. Only valid for the `unified` and `fvh` highlighters.
Defaults to `sentence` for the `unified` highlighter. Defaults to `chars` for
the `fvh` highlighter.
`chars`::: Use the characters specified by `boundary_chars` as highlighting
boundaries. The `boundary_max_scan` setting controls how far to scan for
boundary characters. Only valid for the `fvh` highlighter.
`sentence`::: Break highlighted fragments at the next sentence boundary, as
determined by Java's
https://docs.oracle.com/javase/8/docs/api/java/text/BreakIterator.html[BreakIterator].
You can specify the locale to use with `boundary_scanner_locale`.
+
NOTE: When used with the `unified` highlighter, the `sentence` scanner splits
sentences bigger than `fragment_size` at the first word boundary next to
`fragment_size`. You can set `fragment_size` to 0 to never split any sentence.

`word`::: Break highlighted fragments at the next word boundary, as determined
by Java's https://docs.oracle.com/javase/8/docs/api/java/text/BreakIterator.html[BreakIterator].
You can specify the locale to use with `boundary_scanner_locale`.

boundary_scanner_locale:: Controls which locale is used to search for sentence
and word boundaries. This parameter takes the form of a language tag,
e.g. `"en-US"`, `"fr-FR"`, `"ja-JP"`. More info can be found in the
https://docs.oracle.com/javase/8/docs/api/java/util/Locale.html#forLanguageTag-java.lang.String-[Locale Language Tag]
documentation. The default value is https://docs.oracle.com/javase/8/docs/api/java/util/Locale.html#ROOT[Locale.ROOT].
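+
A minimal sketch, assuming a hypothetical `comment` field, that sets the
boundary scanner and its locale at the highlight level:
+
[source,console]
--------------------------------------------------
GET /_search
{
  "query": { "match": { "comment": "kimchy" } },
  "highlight": {
    "boundary_scanner": "word",
    "boundary_scanner_locale": "en-US",
    "fields": {
      "comment": {}
    }
  }
}
--------------------------------------------------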

encoder:: Indicates if the snippet should be HTML encoded:
`default` (no encoding) or `html` (HTML-escape the snippet text and then
insert the highlighting tags).
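+
A minimal sketch, assuming a hypothetical `comment` field, that HTML-escapes
the snippet text before the highlighting tags are inserted:
+
[source,console]
--------------------------------------------------
GET /_search
{
  "query": { "match": { "comment": "kimchy" } },
  "highlight": {
    "encoder": "html",
    "fields": {
      "comment": {}
    }
  }
}
--------------------------------------------------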

fields:: Specifies the fields to retrieve highlights for. You can use wildcards
to specify fields. For example, you could specify `comment_*` to
get highlights for all <<text,text>> and <<keyword,keyword>> fields
that start with `comment_`.
+
NOTE: Only text and keyword fields are highlighted when you use wildcards.
If you use a custom mapper and want to highlight on a field anyway, you
must explicitly specify that field name.
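+
A minimal sketch, assuming a hypothetical `comment_text` field, that requests
highlights for every field whose name starts with `comment_`:
+
[source,console]
--------------------------------------------------
GET /_search
{
  "query": { "match": { "comment_text": "kimchy" } },
  "highlight": {
    "fields": {
      "comment_*": {}
    }
  }
}
--------------------------------------------------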

force_source:: Highlight based on the source even if the field is
stored separately. Defaults to `false`.

fragmenter:: Specifies how text should be broken up in highlight
snippets: `simple` or `span`. Only valid for the `plain` highlighter.
Defaults to `span`.
`simple`::: Breaks up text into same-sized fragments.
`span`::: Breaks up text into same-sized fragments, but tries to avoid
breaking up text between highlighted terms. This is helpful when you're
querying for phrases. Default.
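+
A minimal sketch, assuming a hypothetical `comment` field, that selects the
`plain` highlighter together with an explicit fragmenter:
+
[source,console]
--------------------------------------------------
GET /_search
{
  "query": { "match_phrase": { "comment": "kimchy elasticsearch" } },
  "highlight": {
    "fields": {
      "comment": {
        "type": "plain",
        "fragmenter": "span"
      }
    }
  }
}
--------------------------------------------------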

fragment_offset:: Controls the margin from which you want to start
highlighting. Only valid when using the `fvh` highlighter.

fragment_size:: The size of the highlighted fragment in characters. Defaults
to 100.

highlight_query:: Highlight matches for a query other than the search
query. This is especially useful if you use a rescore query because
those are not taken into account by highlighting by default.
+
IMPORTANT: {es} does not validate that `highlight_query` contains
the search query in any way, so it is possible to define it in such a way that
legitimate query results are not highlighted. Generally, you should
include the search query as part of the `highlight_query`.

matched_fields:: Combine matches on multiple fields to highlight a single field.
This is most intuitive for multifields that analyze the same string in different
ways. All `matched_fields` must have `term_vector` set to
`with_positions_offsets`, but only the field to which
the matches are combined is loaded so only that field benefits from having
`store` set to `true`. Only valid for the `fvh` highlighter.
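+
A minimal sketch, assuming a hypothetical `comment` field with a `comment.plain`
multi-field, both mapped with `term_vector` set to `with_positions_offsets`,
that folds matches from both fields into the `comment` highlights:
+
[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "query_string": {
      "query": "running scissors",
      "fields": [ "comment", "comment.plain" ]
    }
  },
  "highlight": {
    "fields": {
      "comment": {
        "matched_fields": [ "comment", "comment.plain" ],
        "type": "fvh"
      }
    }
  }
}
--------------------------------------------------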

no_match_size:: The amount of text you want to return from the beginning
of the field if there are no matching fragments to highlight. Defaults
to 0 (nothing is returned).

number_of_fragments:: The maximum number of fragments to return. If the
number of fragments is set to 0, no fragments are returned. Instead,
the entire field contents are highlighted and returned. This can be
handy when you need to highlight short texts such as a title or
address, but fragmentation is not required. If `number_of_fragments`
is 0, `fragment_size` is ignored. Defaults to 5.
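+
A minimal sketch, assuming a hypothetical `comment` field, that combines
`fragment_size`, `number_of_fragments`, and `no_match_size` for one field:
+
[source,console]
--------------------------------------------------
GET /_search
{
  "query": { "match": { "comment": "kimchy" } },
  "highlight": {
    "fields": {
      "comment": {
        "fragment_size": 150,
        "number_of_fragments": 3,
        "no_match_size": 150
      }
    }
  }
}
--------------------------------------------------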

order:: Sorts highlighted fragments by score when set to `score`. By default,
fragments will be output in the order they appear in the field (order: `none`).
Setting this option to `score` will output the most relevant fragments first.
Each highlighter applies its own logic to compute relevancy scores. See
the document <<how-highlighters-work-internally, How highlighters work internally>>
for more details on how different highlighters find the best fragments.

phrase_limit:: Controls the number of matching phrases in a document that are
considered. Prevents the `fvh` highlighter from analyzing too many phrases
and consuming too much memory. When using `matched_fields`, `phrase_limit`
phrases per matched field are considered. Raising the limit increases query
time and consumes more memory. Only supported by the `fvh` highlighter.
Defaults to 256.

pre_tags:: Use in conjunction with `post_tags` to define the HTML tags
to use for the highlighted text. By default, highlighted text is wrapped
in `<em>` and `</em>` tags. Specify as an array of strings.

post_tags:: Use in conjunction with `pre_tags` to define the HTML tags
to use for the highlighted text. By default, highlighted text is wrapped
in `<em>` and `</em>` tags. Specify as an array of strings.

require_field_match:: By default, only fields that contain a query match are
highlighted. Set `require_field_match` to `false` to highlight all fields.
Defaults to `true`.
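+
A minimal sketch, assuming hypothetical `comment` and `body` fields, that allows
fields other than the queried `comment` field to produce highlights:
+
[source,console]
--------------------------------------------------
GET /_search
{
  "query": { "match": { "comment": "kimchy" } },
  "highlight": {
    "require_field_match": false,
    "fields": {
      "body": {},
      "comment": {}
    }
  }
}
--------------------------------------------------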

tags_schema:: Set to `styled` to use the built-in tag schema. The `styled`
schema defines the following `pre_tags` and defines `post_tags` as
`</em>`.
+
[source,html]
--------------------------------------------------
<em class="hlt1">, <em class="hlt2">, <em class="hlt3">,
<em class="hlt4">, <em class="hlt5">, <em class="hlt6">,
<em class="hlt7">, <em class="hlt8">, <em class="hlt9">,
<em class="hlt10">
--------------------------------------------------

[[highlighter-type]]
type:: The highlighter to use: `unified`, `plain`, or `fvh`. Defaults to
`unified`.

[[highlighting-examples]]
==== Highlighting examples

* <<override-global-settings, Override global settings>>
* <<specify-highlight-query, Specify a highlight query>>
* <<set-highlighter-type, Set highlighter type>>
* <<configure-tags, Configure highlighting tags>>
* <<highlight-source, Highlight source>>
* <<highlight-all, Highlight all fields>>
* <<matched-fields, Combine matches on multiple fields>>
* <<explicit-field-order, Explicitly order highlighted fields>>
* <<control-highlighted-frags, Control highlighted fragments>>
* <<highlight-postings-list, Highlight using the postings list>>
* <<specify-fragmenter, Specify a fragmenter for the plain highlighter>>

[[override-global-settings]]
[discrete]
=== Override global settings

You can specify highlighter settings globally and selectively override them for
individual fields.

[source,console]
--------------------------------------------------
GET /_search
{
  "query" : {
    "match": { "user": "kimchy" }
  },
  "highlight" : {
    "number_of_fragments" : 3,
    "fragment_size" : 150,
    "fields" : {
      "body" : { "pre_tags" : ["<em>"], "post_tags" : ["</em>"] },
      "blog.title" : { "number_of_fragments" : 0 },
      "blog.author" : { "number_of_fragments" : 0 },
      "blog.comment" : { "number_of_fragments" : 5, "order" : "score" }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

[discrete]
[[specify-highlight-query]]
=== Specify a highlight query

You can specify a `highlight_query` to take additional information into account
when highlighting. For example, the following query includes both the search
query and rescore query in the `highlight_query`. Without the `highlight_query`,
highlighting would only take the search query into account.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": {
      "comment": {
        "query": "foo bar"
      }
    }
  },
  "rescore": {
    "window_size": 50,
    "query": {
      "rescore_query": {
        "match_phrase": {
          "comment": {
            "query": "foo bar",
            "slop": 1
          }
        }
      },
      "rescore_query_weight": 10
    }
  },
  "_source": false,
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {
        "fragment_size": 150,
        "number_of_fragments": 3,
        "highlight_query": {
          "bool": {
            "must": {
              "match": {
                "comment": {
                  "query": "foo bar"
                }
              }
            },
            "should": {
              "match_phrase": {
                "comment": {
                  "query": "foo bar",
                  "slop": 1,
                  "boost": 10.0
                }
              }
            },
            "minimum_should_match": 0
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

[discrete]
[[set-highlighter-type]]
=== Set highlighter type

The `type` field allows you to force a specific highlighter type.
The allowed values are: `unified`, `plain` and `fvh`.
The following is an example that forces the use of the plain highlighter:

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "user": "kimchy" }
  },
  "highlight": {
    "fields": {
      "comment": { "type": "plain" }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

[[configure-tags]]
[discrete]
=== Configure highlighting tags

By default, highlighted text is wrapped in `<em>` and `</em>` tags. You can
control this by setting `pre_tags` and `post_tags`, for example:

[source,console]
--------------------------------------------------
GET /_search
{
  "query" : {
    "match": { "user": "kimchy" }
  },
  "highlight" : {
    "pre_tags" : ["<tag1>"],
    "post_tags" : ["</tag1>"],
    "fields" : {
      "body" : {}
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

When using the fast vector highlighter, you can specify multiple tags; their
"importance" is determined by the order in which they are listed:

[source,console]
--------------------------------------------------
GET /_search
{
  "query" : {
    "match": { "user": "kimchy" }
  },
  "highlight" : {
    "pre_tags" : ["<tag1>", "<tag2>"],
    "post_tags" : ["</tag1>", "</tag2>"],
    "fields" : {
      "body" : {}
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]
2017-07-12 00:15:35 -04:00
|
|
|
You can also use the built-in `styled` tag schema:
|
2013-08-28 19:24:34 -04:00
|
|
|
|
2019-09-09 12:35:50 -04:00
|
|
|
[source,console]
|
2013-08-28 19:24:34 -04:00
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
GET /_search
|
2013-08-28 19:24:34 -04:00
|
|
|
{
|
2020-07-21 15:49:58 -04:00
|
|
|
"query" : {
|
|
|
|
"match": { "user": "kimchy" }
|
|
|
|
},
|
|
|
|
"highlight" : {
|
|
|
|
"tags_schema" : "styled",
|
|
|
|
"fields" : {
|
|
|
|
"comment" : {}
|
2013-08-28 19:24:34 -04:00
|
|
|
}
|
2020-07-21 15:49:58 -04:00
|
|
|
}
|
2013-08-28 19:24:34 -04:00
|
|
|
}
|
|
|
|
--------------------------------------------------
|
2017-04-05 17:38:34 -04:00
|
|
|
// TEST[setup:twitter]

[discrete]
[[highlight-source]]
=== Highlight on source

Set `force_source` to `true` to force highlighting of fields based on the
`_source` even if the fields are stored separately. Defaults to `false`.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "user": "kimchy" }
  },
  "highlight": {
    "fields": {
      "comment": { "force_source": true }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

[[highlight-all]]
[discrete]
=== Highlight in all fields

By default, only fields that contain a query match are highlighted. Set
`require_field_match` to `false` to highlight all fields.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "user": "kimchy" }
  },
  "highlight": {
    "require_field_match": false,
    "fields": {
      "body": { "pre_tags": [ "<em>" ], "post_tags": [ "</em>" ] }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

[[matched-fields]]
[discrete]
=== Combine matches on multiple fields

WARNING: This is only supported by the `fvh` highlighter.

The Fast Vector Highlighter can combine matches on multiple fields to
highlight a single field. This is most intuitive for multifields that
analyze the same string in different ways. All `matched_fields` must have
`term_vector` set to `with_positions_offsets`, but only the field to which
the matches are combined is loaded, so only that field benefits from having
`store` set to `true`.

In the following examples, `comment` is analyzed by the `english`
analyzer and `comment.plain` is analyzed by the `standard` analyzer.
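
For instance, a mapping along these lines would satisfy those requirements.
This is a minimal sketch, and the index name `example` is only illustrative:

[source,console]
--------------------------------------------------
PUT /example
{
  "mappings": {
    "properties": {
      "comment": {
        "type": "text",
        "analyzer": "english",
        "term_vector": "with_positions_offsets",
        "fields": {
          "plain": {
            "type": "text",
            "analyzer": "standard",
            "term_vector": "with_positions_offsets"
          }
        }
      }
    }
  }
}
--------------------------------------------------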

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "query_string": {
      "query": "comment.plain:running scissors",
      "fields": [ "comment" ]
    }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {
        "matched_fields": [ "comment", "comment.plain" ],
        "type": "fvh"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

The above matches both "run with scissors" and "running with scissors"
and would highlight "running" and "scissors" but not "run". If both
phrases appear in a large document, then "running with scissors" is
sorted above "run with scissors" in the fragments list because there
are more matches in that fragment.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "query_string": {
      "query": "running scissors",
      "fields": [ "comment", "comment.plain^10" ]
    }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {
        "matched_fields": [ "comment", "comment.plain" ],
        "type": "fvh"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

The above highlights "run" as well as "running" and "scissors" but
still sorts "running with scissors" above "run with scissors" because
the plain match ("running") is boosted.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "query_string": {
      "query": "running scissors",
      "fields": [ "comment", "comment.plain^10" ]
    }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {
        "matched_fields": [ "comment.plain" ],
        "type": "fvh"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

The above query wouldn't highlight "run" or "scissor", but it shows that
it is fine not to list the field to which the matches are combined
(`comment`) in the matched fields.

[NOTE]
Technically it is also fine to add fields to `matched_fields` that
don't share the same underlying string as the field to which the matches
are combined. The results might not make much sense, and if one of the
matches is off the end of the text, the whole query will fail.

[NOTE]
===================================================================
There is a small amount of overhead involved with setting
`matched_fields` to a non-empty array so always prefer

[source,js]
--------------------------------------------------
"highlight": {
  "fields": {
    "comment": {}
  }
}
--------------------------------------------------
// NOTCONSOLE

to

[source,js]
--------------------------------------------------
"highlight": {
  "fields": {
    "comment": {
      "matched_fields": [ "comment" ],
      "type": "fvh"
    }
  }
}
--------------------------------------------------
// NOTCONSOLE
===================================================================

[[explicit-field-order]]
[discrete]
=== Explicitly order highlighted fields

Elasticsearch highlights the fields in the order that they are sent, but per the
JSON spec, objects are unordered. If you need to be explicit about the order
in which fields are highlighted, specify the `fields` as an array:

[source,console]
--------------------------------------------------
GET /_search
{
  "highlight": {
    "fields": [
      { "title": {} },
      { "text": {} }
    ]
  }
}
--------------------------------------------------
// TEST[setup:twitter]

None of the highlighters built into Elasticsearch care about the order in
which the fields are highlighted, but a plugin might.

[discrete]
[[control-highlighted-frags]]
=== Control highlighted fragments

For each highlighted field, you can control the size of the highlighted
fragments in characters (defaults to `100`) and the maximum number of
fragments to return (defaults to `5`).
For example:

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "user": "kimchy" }
  },
  "highlight": {
    "fields": {
      "comment": { "fragment_size": 150, "number_of_fragments": 3 }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

On top of this, you can specify that highlighted fragments be sorted by
score:

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "user": "kimchy" }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": { "fragment_size": 150, "number_of_fragments": 3 }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

If the `number_of_fragments` value is set to `0`, no fragments are
produced; instead, the whole content of the field is returned, and of
course it is highlighted. This can be very handy if short texts (like a
document title or address) need to be highlighted but no fragmentation
is required. Note that `fragment_size` is ignored in this case.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "user": "kimchy" }
  },
  "highlight": {
    "fields": {
      "body": {},
      "blog.title": { "number_of_fragments": 0 }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

When using the `fvh` highlighter, you can use the `fragment_offset`
parameter to control the margin from which to start highlighting.
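
For example, the following request sets a `fragment_offset` of `10` for the
`comment` field. This is a minimal sketch; the field name and parameter values
are only illustrative:

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "user": "kimchy" }
  },
  "highlight": {
    "fields": {
      "comment": {
        "type": "fvh",
        "fragment_offset": 10,
        "fragment_size": 150,
        "number_of_fragments": 3
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]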

If there is no matching fragment to highlight, nothing is returned by
default. Instead, you can return a snippet of text from the beginning of
the field by setting `no_match_size` (default `0`) to the length of the
text that you want returned. The actual length may be shorter or longer
than specified because it tries to break on a word boundary.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "match": { "user": "kimchy" }
  },
  "highlight": {
    "fields": {
      "comment": {
        "fragment_size": 150,
        "number_of_fragments": 3,
        "no_match_size": 150
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

[discrete]
[[highlight-postings-list]]
=== Highlight using the postings list

Here is an example of setting the `comment` field in the index mapping to
allow for highlighting using the postings:

[source,console]
--------------------------------------------------
PUT /example
{
  "mappings": {
    "properties": {
      "comment": {
        "type": "text",
        "index_options": "offsets"
      }
    }
  }
}
--------------------------------------------------

Here is an example of setting the `comment` field to allow for
highlighting using term vectors (this will cause the index to be bigger):

[source,console]
--------------------------------------------------
PUT /example
{
  "mappings": {
    "properties": {
      "comment": {
        "type": "text",
        "term_vector": "with_positions_offsets"
      }
    }
  }
}
--------------------------------------------------

[discrete]
[[specify-fragmenter]]
=== Specify a fragmenter for the plain highlighter

When using the `plain` highlighter, you can choose between the `simple` and
`span` fragmenters:

[source,console]
--------------------------------------------------
GET twitter/_search
{
  "query": {
    "match_phrase": { "message": "number 1" }
  },
  "highlight": {
    "fields": {
      "message": {
        "type": "plain",
        "fragment_size": 15,
        "number_of_fragments": 3,
        "fragmenter": "simple"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

Response:

[source,console-result]
--------------------------------------------------
{
  ...
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 1.6011951,
    "hits": [
      {
        "_index": "twitter",
        "_type": "_doc",
        "_id": "1",
        "_score": 1.6011951,
        "_source": {
          "user": "test",
          "message": "some message with the number 1",
          "date": "2009-11-15T14:12:12",
          "likes": 1
        },
        "highlight": {
          "message": [
            " with the <em>number</em>",
            " <em>1</em>"
          ]
        }
      }
    ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,/]

[source,console]
--------------------------------------------------
GET twitter/_search
{
  "query": {
    "match_phrase": { "message": "number 1" }
  },
  "highlight": {
    "fields": {
      "message": {
        "type": "plain",
        "fragment_size": 15,
        "number_of_fragments": 3,
        "fragmenter": "span"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

Response:

[source,console-result]
--------------------------------------------------
{
  ...
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 1.6011951,
    "hits": [
      {
        "_index": "twitter",
        "_type": "_doc",
        "_id": "1",
        "_score": 1.6011951,
        "_source": {
          "user": "test",
          "message": "some message with the number 1",
          "date": "2009-11-15T14:12:12",
          "likes": 1
        },
        "highlight": {
          "message": [
            " with the <em>number</em> <em>1</em>"
          ]
        }
      }
    ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,/]

If the `number_of_fragments` option is set to `0`, the `NullFragmenter` is
used, which does not fragment the text at all. This is useful for
highlighting the entire contents of a document or field.
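
For example, the following request, a minimal sketch reusing the `message`
field from the examples above, returns the whole field content as a single
highlighted snippet:

[source,console]
--------------------------------------------------
GET twitter/_search
{
  "query": {
    "match_phrase": { "message": "number 1" }
  },
  "highlight": {
    "fields": {
      "message": {
        "type": "plain",
        "number_of_fragments": 0
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:twitter]
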
include::highlighters-internal.asciidoc[]