2013-08-28 19:24:34 -04:00
|
|
|
[[search-request-highlighting]]
|
|
|
|
=== Highlighting
|
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
Highlighters enable you to get highlighted snippets from one or more fields
|
|
|
|
in your search results so you can show users where the query matches are.
|
|
|
|
When you request highlights, the response contains an additional `highlight`
|
|
|
|
element for each search hit that includes the highlighted fields and the
|
|
|
|
highlighted fragments.
|
|
|
|
|
|
|
|
Highlighting requires the actual content of a field. If the field is not
|
|
|
|
stored (the mapping does not set `store` to `true`), the actual `_source` is
|
|
|
|
loaded and the relevant field is extracted from `_source`.
|
|
|
|
|
|
|
|
NOTE: The `_all` field cannot be extracted from `_source`, so it can only
|
|
|
|
be used for highlighting if it is explicitly stored.
|
|
|
|
|
|
|
|
For example, to get highlights for the `content` field in each search hit
|
|
|
|
using the default highlighter, include a `highlight` object in
|
|
|
|
the request body that specifies the `content` field:
|
2013-08-28 19:24:34 -04:00
|
|
|
|
|
|
|
[source,js]
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
GET /_search
|
2013-08-28 19:24:34 -04:00
|
|
|
{
|
2016-05-17 15:00:15 -04:00
|
|
|
"query" : {
|
2017-01-17 06:20:03 -05:00
|
|
|
"match": { "content": "kimchy" }
|
2016-05-17 15:00:15 -04:00
|
|
|
},
|
2013-08-28 19:24:34 -04:00
|
|
|
"highlight" : {
|
|
|
|
"fields" : {
|
2016-10-06 23:46:55 -04:00
|
|
|
"comment" : {}
|
2013-08-28 19:24:34 -04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
// CONSOLE
|
2017-04-05 17:38:34 -04:00
|
|
|
// TEST[setup:twitter]
|
2013-08-28 19:24:34 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
{es} supports three highlighters:
|
2013-08-28 19:24:34 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
[[unified-highlighter]]
|
|
|
|
* The `unified` highlighter uses the Lucene Unified Highlighter. This
|
|
|
|
highlighter breaks the text into sentences and uses the BM25 algorithm to score
|
|
|
|
individual sentences as if they were documents in the corpus. It also supports
|
|
|
|
accurate phrase and multi-term (fuzzy, prefix, regex) highlighting. This is the
|
|
|
|
default highlighter.
|
2013-08-28 19:24:34 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
[[plain-highlighter]]
|
|
|
|
* The `plain` highlighter uses the standard Lucene highlighter. It attempts to
|
|
|
|
reflect the query matching logic in terms of understanding word importance and
|
|
|
|
any word positioning criteria in phrase queries.
|
|
|
|
+
|
|
|
|
[WARNING]
|
|
|
|
The `plain` highlighter works best for highlighting simple query matches in a
|
|
|
|
single field. To accurately reflect query logic, it creates a tiny in-memory
|
|
|
|
index and re-runs the original query criteria through Lucene's query execution
|
|
|
|
planner to get access to low-level match information for the current document.
|
|
|
|
This is repeated for every field and every document that needs to be highlighted.
|
|
|
|
If you want to highlight a lot of fields in a lot of documents with complex
|
|
|
|
queries, we recommend using one of the other highlighters.
|
2014-10-15 07:44:36 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
[[fast-vector-highlighter]]
|
|
|
|
* The `fvh` highlighter uses the Lucene Fast Vector highlighter.
|
|
|
|
This highlighter can be used on fields with `term_vector` set to
|
|
|
|
`with_positions_offsets` in the mapping. The fast vector highlighter:
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
** Is faster especially for large fields (> `1MB`)
|
|
|
|
** Can be customized with a <<boundary-scanners,`boundary_scanner`>>.
|
|
|
|
** Requires setting `term_vector` to `with_positions_offsets` which
|
|
|
|
increases the size of the index
|
|
|
|
** Can combine matches from multiple fields into one result. See
|
|
|
|
`matched_fields`
|
|
|
|
** Can assign different weights to matches at different positions allowing
|
|
|
|
for things like phrase matches being sorted above term matches when
|
|
|
|
highlighting a Boosting Query that boosts phrase matches over term matches
|
|
|
|
|
|
|
|
To create meaningful search snippets from the terms being queried,
|
|
|
|
the highlighter needs to know the start and end character offsets of each word
|
|
|
|
in the original text. These offsets can be obtained from:
|
|
|
|
|
|
|
|
* The postings list. If `index_options` is set to `offsets` in the mapping,
|
|
|
|
the `unified` highlighter uses this information to highlight documents without
|
|
|
|
re-analyzing the text. It re-runs the original query directly on the postings
|
|
|
|
and extracts the matching offsets from the index, limiting the collection to
|
|
|
|
the highlighted documents. This is important if you have large fields because
|
|
|
|
it doesn't require reanalyzing the text to be highlighted. It also requires less
|
|
|
|
disk space than using `term_vectors`.
|
|
|
|
|
|
|
|
* Term vectors. If `term_vector` information is provided by setting
|
|
|
|
`term_vector` to `with_positions_offsets` in the mapping, the `unified`
|
|
|
|
highlighter automatically uses the `term_vector` to highlight the field.
|
|
|
|
Term vector highlighting is faster for highlighting multi-term queries like
|
|
|
|
`prefix` or `wildcard` because it can access the dictionary of terms for
|
|
|
|
each document, but it can be slower than using the postings list. The `fvh`
|
|
|
|
highlighter always uses term vectors.
|
|
|
|
|
|
|
|
* Plain highlighting. This mode is used when there is no other alternative.
|
|
|
|
It creates a tiny in-memory index and re-runs the original query criteria through
|
|
|
|
Lucene's query execution planner to get access to low-level match information on
|
|
|
|
the current document. This is repeated for every field and every document that
|
|
|
|
needs highlighting. The `plain` highlighter always uses plain highlighting.
|
2015-07-15 05:49:48 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
You can specify the highlighter `type` you want to use
|
|
|
|
for each field.
|
2015-07-15 05:49:48 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
[[highlighting-settings]]
|
|
|
|
==== Highlighting Settings
|
2015-07-15 05:49:48 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
Highlighting settings can be set on a global level and overridden at
|
|
|
|
the field level.
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
boundary_chars:: A string that contains each boundary character.
|
|
|
|
Defaults to `.,!? \t\n`.
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
boundary_max_scan:: How far to scan for boundary characters. Defaults to `20`.
|
|
|
|
|
|
|
|
[[boundary-scanners]]
|
|
|
|
boundary_scanner:: Specifies how to break the highlighted fragments: `chars`,
|
|
|
|
`sentence`, or `word`. Only valid for the `unified` and `fvh` highlighters.
|
|
|
|
Defaults to `sentence` for the `unified` highlighter. Defaults to `chars` for
|
|
|
|
the `fvh` highlighter.
|
|
|
|
+
|
|
|
|
* `chars` Use the characters specified by `boundary_chars` as highlighting
|
|
|
|
boundaries. The `boundary_max_scan` setting controls how far to scan for
|
|
|
|
boundary characters. Only valid for the `fvh` highlighter.
|
|
|
|
* `sentence` Break highlighted fragments at the next sentence boundary, as
|
|
|
|
determined by Java's
|
|
|
|
https://docs.oracle.com/javase/8/docs/api/java/text/BreakIterator.html[BreakIterator].
|
|
|
|
You can specify the locale to use with `boundary_scanner_locale`.
|
|
|
|
+
|
|
|
|
NOTE: When used with the `unified` highlighter, the `sentence` scanner splits
|
|
|
|
sentences bigger than `fragment_size` at the first word boundary next to
|
|
|
|
`fragment_size`. You can set `fragment_size` to 0 to never split any sentence.
|
|
|
|
|
|
|
|
* `word` Break highlighted fragments at the next word boundary, as determined
|
|
|
|
by Java's https://docs.oracle.com/javase/8/docs/api/java/text/BreakIterator.html[BreakIterator].
|
|
|
|
You can specify the locale to use with `boundary_scanner_locale`.
|
|
|
|
|
|
|
|
boundary_scanner_locale:: Controls which locale is used to search for sentence
|
|
|
|
and word boundaries.
|
|
|
|
|
|
|
|
encoder:: Indicates if the highlighted text should be HTML encoded:
|
|
|
|
`default` (no encoding) or `html` (escapes HTML highlighting tags).
|
|
|
|
|
|
|
|
fields:: Specifies the fields to retrieve highlights for. You can use wildcards
|
|
|
|
to specify fields. For example, you could specify `comment_*` to
|
|
|
|
get highlights for all <<text,text>> and <<keyword,keyword>> fields
|
|
|
|
that start with `comment_`.
|
|
|
|
+
|
|
|
|
NOTE: Only text and keyword fields are highlighted when you use wildcards.
|
|
|
|
If you use a custom mapper and want to highlight on a field anyway, you
|
|
|
|
must explicitly specify that field name.
|
|
|
|
|
|
|
|
force_source:: Highlight based on the source even if the field is
|
|
|
|
stored separately. Defaults to `false`.
|
|
|
|
|
|
|
|
fragmenter:: Specifies how text should be broken up in highlight
|
|
|
|
snippets: `simple` or `span`. Only valid for the `plain` highlighter.
|
|
|
|
Defaults to `span`.
|
|
|
|
+
|
|
|
|
* `simple` Breaks up text into same-sized fragments.
|
|
|
|
* `span` Breaks up text into same-sized fragments, but tried to avoid
|
|
|
|
breaking up text between highlighted terms. This is helpful when you're
|
|
|
|
querying for phrases. Default.
|
|
|
|
|
|
|
|
fragment_offset:: Controls the margin from which you want to start
|
|
|
|
highlighting. Only valid when using the `fvh` highlighter.
|
|
|
|
|
|
|
|
fragment_size:: The size of the highlighted fragment in characters. Defaults
|
|
|
|
to 100.
|
|
|
|
|
|
|
|
highlight_query:: Highlight matches for a query other than the search
|
|
|
|
query. This is especially useful if you use a rescore query because
|
|
|
|
those are not taken into account by highlighting by default.
|
|
|
|
+
|
|
|
|
IMPORTANT: {es} does not validate that `highlight_query` contains
|
|
|
|
the search query in any way so it is possible to define it so
|
|
|
|
legitimate query results are not highlighted. Generally, you should
|
|
|
|
include the search query as part of the `highlight_query`.
|
|
|
|
|
|
|
|
matched_fields:: Combine matches on multiple fields to highlight a single field.
|
|
|
|
This is most intuitive for multifields that analyze the same string in different
|
|
|
|
ways. All `matched_fields` must have `term_vector` set to
|
|
|
|
`with_positions_offsets`, but only the field to which
|
|
|
|
the matches are combined is loaded so only that field benefits from having
|
|
|
|
`store` set to `yes`. Only valid for the `fvh` highlighter.
|
|
|
|
|
|
|
|
no_match_size:: The amount of text you want to return from the beginning
|
|
|
|
of the field if there are no matching fragments to highlight. Defaults
|
|
|
|
to 0 (nothing is returned).
|
|
|
|
|
|
|
|
number_of_fragments:: The maximum number of fragments to return. If the
|
|
|
|
number of fragments is set to 0, no fragments are returned. Instead,
|
|
|
|
the entire field contents are highlighted and returned. This can be
|
|
|
|
handy when you need to highlight short texts such as a title or
|
|
|
|
address, but fragmentation is not required. If `number_of_fragments`
|
|
|
|
is 0, `fragment_size` is ignored. Defaults to 5.
|
|
|
|
|
|
|
|
order:: Sorts highlighted fragments by score when set to `score`. Only valid for
|
|
|
|
the `unified` highlighter.
|
|
|
|
|
|
|
|
phrase_limit:: Controls the number of matching phrases in a document that are
|
|
|
|
considered. Prevents the `fvh` highlighter from analyzing too many phrases
|
|
|
|
and consuming too much memory. When using `matched_fields, `phrase_limit`
|
|
|
|
phrases per matched field are considered. Raising the limit increases query
|
|
|
|
time and consumes more memory. Only supported by the `fvh` highlighter.
|
|
|
|
Defaults to 256.
|
|
|
|
|
|
|
|
pre_tags:: Use in conjunction with `post_tags` to define the HTML tags
|
|
|
|
to use for the highlighted text. By default, highlighted text is wrapped
|
|
|
|
in `<em>` and </em>` tags. Specify as an array of strings.
|
|
|
|
|
|
|
|
post_tags:: Use in conjunction with `pre_tags` to define the HTML tags
|
|
|
|
to use for the highlighted text. By default, highlighted text is wrapped
|
|
|
|
in `<em>` and `</em>` tags. Specify as an array of strings.
|
|
|
|
|
|
|
|
require_field_match:: By default, only fields that contains a query match are
|
|
|
|
highlighted. Set `require_field_match` to `false` to highlight all fields.
|
|
|
|
Defaults to `true`.
|
|
|
|
|
|
|
|
tags_schema:: Set to `styled` to use the built-in tag schema. The `styled`
|
|
|
|
schema defines the following `pre_tags` and defines `post_tags` as
|
|
|
|
`</em>`.
|
|
|
|
+
|
|
|
|
[source,html]
|
|
|
|
--------------------------------------------------
|
|
|
|
<em class="hlt1">, <em class="hlt2">, <em class="hlt3">,
|
|
|
|
<em class="hlt4">, <em class="hlt5">, <em class="hlt6">,
|
|
|
|
<em class="hlt7">, <em class="hlt8">, <em class="hlt9">,
|
|
|
|
<em class="hlt10">
|
|
|
|
--------------------------------------------------
|
2017-06-09 08:09:57 -04:00
|
|
|
|
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
[[highlighter-type]]
|
|
|
|
type:: The highlighter to use: `unified`, `plain`, or `fvh`. Defaults to
|
|
|
|
`unified`.
|
2017-06-09 08:09:57 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
[[highlighting-examples]]
|
|
|
|
==== Highlighting Examples
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2016-10-06 23:46:55 -04:00
|
|
|
Here is an example of setting the `comment` field in the index mapping to allow for
|
2017-06-09 08:09:57 -04:00
|
|
|
highlighting using the postings:
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
|
|
|
[source,js]
|
|
|
|
--------------------------------------------------
|
2017-04-05 17:38:34 -04:00
|
|
|
PUT /example
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
{
|
2017-04-05 17:38:34 -04:00
|
|
|
"mappings": {
|
|
|
|
"doc" : {
|
|
|
|
"properties": {
|
|
|
|
"comment" : {
|
|
|
|
"type": "text",
|
|
|
|
"index_options" : "offsets"
|
|
|
|
}
|
|
|
|
}
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
}
|
2017-04-05 17:38:34 -04:00
|
|
|
}
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
}
|
|
|
|
--------------------------------------------------
|
2017-04-05 17:38:34 -04:00
|
|
|
// CONSOLE
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2017-06-09 08:09:57 -04:00
|
|
|
Here is an example of setting the `comment` field to allow for
|
|
|
|
highlighting using the `term_vectors` (this will cause the index to be bigger):
|
|
|
|
|
|
|
|
[source,js]
|
|
|
|
--------------------------------------------------
|
|
|
|
PUT /example
|
|
|
|
{
|
|
|
|
"mappings": {
|
|
|
|
"doc" : {
|
|
|
|
"properties": {
|
|
|
|
"comment" : {
|
|
|
|
"type": "text",
|
|
|
|
"term_vector" : "with_positions_offsets"
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
--------------------------------------------------
|
|
|
|
// CONSOLE
|
|
|
|
|
2013-08-28 19:24:34 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
===== Force highlighter type
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2017-06-09 08:09:57 -04:00
|
|
|
The `type` field allows to force a specific highlighter type.
|
|
|
|
The allowed values are: `unified`, `plain` and `fvh`.
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
The following is an example that forces the use of the plain highlighter:
|
|
|
|
|
|
|
|
[source,js]
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
GET /_search
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
{
|
2016-05-17 15:00:15 -04:00
|
|
|
"query" : {
|
2016-05-18 09:52:08 -04:00
|
|
|
"match": { "user": "kimchy" }
|
2016-05-17 15:00:15 -04:00
|
|
|
},
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
"highlight" : {
|
|
|
|
"fields" : {
|
2016-10-06 23:46:55 -04:00
|
|
|
"comment" : {"type" : "plain"}
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
// CONSOLE
|
2017-04-05 17:38:34 -04:00
|
|
|
// TEST[setup:twitter]
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
===== Force highlighting on source
|
2013-12-09 05:57:59 -05:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
Forces the highlighting to highlight fields based on the source even if fields
|
|
|
|
are stored separately. Defaults to `false`.
|
2013-12-09 05:57:59 -05:00
|
|
|
|
|
|
|
[source,js]
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
GET /_search
|
2013-12-09 05:57:59 -05:00
|
|
|
{
|
2016-05-17 15:00:15 -04:00
|
|
|
"query" : {
|
2016-05-18 09:52:08 -04:00
|
|
|
"match": { "user": "kimchy" }
|
2016-05-17 15:00:15 -04:00
|
|
|
},
|
2013-12-09 05:57:59 -05:00
|
|
|
"highlight" : {
|
|
|
|
"fields" : {
|
2016-10-06 23:46:55 -04:00
|
|
|
"comment" : {"force_source" : true}
|
2013-12-09 05:57:59 -05:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
// CONSOLE
|
2017-04-05 17:38:34 -04:00
|
|
|
// TEST[setup:twitter]
|
2013-08-28 19:24:34 -04:00
|
|
|
|
2013-09-25 12:17:40 -04:00
|
|
|
[[tags]]
|
2017-07-12 00:15:35 -04:00
|
|
|
===== Configure highlighting tags
|
2013-08-28 19:24:34 -04:00
|
|
|
|
|
|
|
By default, the highlighting will wrap highlighted text in `<em>` and
|
|
|
|
`</em>`. This can be controlled by setting `pre_tags` and `post_tags`,
|
|
|
|
for example:
|
|
|
|
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
[source,js]
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
GET /_search
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
{
|
2016-05-17 15:00:15 -04:00
|
|
|
"query" : {
|
2016-05-18 09:52:08 -04:00
|
|
|
"match": { "user": "kimchy" }
|
2016-05-17 15:00:15 -04:00
|
|
|
},
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
"highlight" : {
|
|
|
|
"pre_tags" : ["<tag1>"],
|
|
|
|
"post_tags" : ["</tag1>"],
|
|
|
|
"fields" : {
|
|
|
|
"_all" : {}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
// CONSOLE
|
2017-04-05 17:38:34 -04:00
|
|
|
// TEST[setup:twitter]
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2017-07-12 00:15:35 -04:00
|
|
|
When using the fast vector highlighter, you can specify additional tags and the
|
|
|
|
"importance" is ordered.
|
Added third highlighter type based on lucene postings highlighter
Requires field index_options set to "offsets" in order to store positions and offsets in the postings list.
Considerably faster than the plain highlighter since it doesn't require to reanalyze the text to be highlighted: the larger the documents the better the performance gain should be.
Requires less disk space than term_vectors, needed for the fast_vector_highlighter.
Breaks the text into sentences and highlights them. Uses a BreakIterator to find sentences in the text. Plays really well with natural text, not quite the same if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.
Uses forked version of lucene postings highlighter to support:
- per value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return a snippet per value
- manually passing in query terms to avoid calling extract terms multiple times, since we use a different highlighter instance per doc/field, but the query is always the same
The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows to highlight multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise trying not to change the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api to work with multiple documents.
Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
Closes #3704
2013-08-08 11:10:42 -04:00
|
|
|
|
2013-08-28 19:24:34 -04:00
|
|
|
[source,js]
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
GET /_search
|
2013-08-28 19:24:34 -04:00
|
|
|
{
|
2016-05-17 15:00:15 -04:00
|
|
|
"query" : {
|
2016-05-18 09:52:08 -04:00
|
|
|
"match": { "user": "kimchy" }
|
2016-05-17 15:00:15 -04:00
|
|
|
},
|
2013-08-28 19:24:34 -04:00
|
|
|
"highlight" : {
|
|
|
|
"pre_tags" : ["<tag1>", "<tag2>"],
|
|
|
|
"post_tags" : ["</tag1>", "</tag2>"],
|
|
|
|
"fields" : {
|
|
|
|
"_all" : {}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
--------------------------------------------------
|
2016-05-17 15:00:15 -04:00
|
|
|
// CONSOLE
|
2017-04-05 17:38:34 -04:00
|
|
|
// TEST[setup:twitter]

You can also use the built-in `styled` tag schema:

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "tags_schema" : "styled",
        "fields" : {
            "comment" : {}
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

===== Controlling highlighted fragments

Each field highlighted can control the size of the highlighted fragment
in characters (defaults to `100`), and the maximum number of fragments
to return (defaults to `5`).
For example:

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "comment" : {"fragment_size" : 150, "number_of_fragments" : 3}
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

On top of this it is possible to specify that highlighted fragments need
to be sorted by score:

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "order" : "score",
        "fields" : {
            "comment" : {"fragment_size" : 150, "number_of_fragments" : 3}
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

If the `number_of_fragments` value is set to `0` then no fragments are
produced. Instead, the whole content of the field is returned, and of
course it is highlighted. This can be very handy if short texts (like
a document title or address) need to be highlighted but no fragmentation
is required. Note that `fragment_size` is ignored in this case.

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "_all" : {},
            "blog.title" : {"number_of_fragments" : 0}
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

When using `fvh`, you can use the `fragment_offset` parameter to control the
margin from which to start highlighting.
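
For example, a request along these lines (the field and values here are
illustrative, and `fvh` requires `term_vector` set to `with_positions_offsets`
on the field) sets a per-field `fragment_offset`:

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "comment" : {
                "type" : "fvh",
                "fragment_offset" : 10,
                "fragment_size" : 150,
                "number_of_fragments" : 3
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE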

In the case where there is no matching fragment to highlight, the default is
to not return anything. Instead, we can return a snippet of text from the
beginning of the field by setting `no_match_size` (default `0`) to the length
of the text that you want returned. The actual length may be shorter or longer than
specified as it tries to break on a word boundary.

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "comment" : {
                "fragment_size" : 150,
                "number_of_fragments" : 3,
                "no_match_size": 150
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

===== Specifying a fragmenter for the plain highlighter

When using the `plain` highlighter, you can choose between the `simple` and
`span` fragmenters:

[source,js]
--------------------------------------------------
GET twitter/tweet/_search
{
    "query" : {
        "match_phrase": { "message": "number 1" }
    },
    "highlight" : {
        "fields" : {
            "message" : {
                "type": "plain",
                "fragment_size" : 15,
                "number_of_fragments" : 3,
                "fragmenter": "simple"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

Response:

[source,js]
--------------------------------------------------
{
    ...
    "hits": {
        "total": 1,
        "max_score": 1.601195,
        "hits": [
            {
                "_index": "twitter",
                "_type": "tweet",
                "_id": "1",
                "_score": 1.601195,
                "_source": {
                    "user": "test",
                    "message": "some message with the number 1",
                    "date": "2009-11-15T14:12:12",
                    "likes": 1
                },
                "highlight": {
                    "message": [
                        " with the <em>number</em>",
                        " <em>1</em>"
                    ]
                }
            }
        ]
    }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,/]

[source,js]
--------------------------------------------------
GET twitter/tweet/_search
{
    "query" : {
        "match_phrase": { "message": "number 1" }
    },
    "highlight" : {
        "fields" : {
            "message" : {
                "type": "plain",
                "fragment_size" : 15,
                "number_of_fragments" : 3,
                "fragmenter": "span"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

Response:

[source,js]
--------------------------------------------------
{
    ...
    "hits": {
        "total": 1,
        "max_score": 1.601195,
        "hits": [
            {
                "_index": "twitter",
                "_type": "tweet",
                "_id": "1",
                "_score": 1.601195,
                "_source": {
                    "user": "test",
                    "message": "some message with the number 1",
                    "date": "2009-11-15T14:12:12",
                    "likes": 1
                },
                "highlight": {
                    "message": [
                        "some message with the <em>number</em> <em>1</em>"
                    ]
                }
            }
        ]
    }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,/]

If the `number_of_fragments` option is set to `0`,
`NullFragmenter` is used, which does not fragment the text at all.
This is useful for highlighting the entire contents of a document or field.
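
For example, a request in the spirit of the fragmenter examples above (the
field and query are illustrative) could return the whole `message` field as a
single highlighted snippet:

[source,js]
--------------------------------------------------
GET twitter/tweet/_search
{
    "query" : {
        "match_phrase": { "message": "number 1" }
    },
    "highlight" : {
        "fields" : {
            "message" : {
                "type": "plain",
                "number_of_fragments" : 0
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE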

===== Specifying a highlight query

You can set `highlight_query` to highlight matches for a query other than the
search query. Here is an example of including both the search query and the
rescore query in `highlight_query`.

[source,js]
--------------------------------------------------
GET /_search
{
    "stored_fields": [ "_id" ],
    "query" : {
        "match": {
            "comment": {
                "query": "foo bar"
            }
        }
    },
    "rescore": {
        "window_size": 50,
        "query": {
            "rescore_query" : {
                "match_phrase": {
                    "comment": {
                        "query": "foo bar",
                        "slop": 1
                    }
                }
            },
            "rescore_query_weight" : 10
        }
    },
    "highlight" : {
        "order" : "score",
        "fields" : {
            "comment" : {
                "fragment_size" : 150,
                "number_of_fragments" : 3,
                "highlight_query": {
                    "bool": {
                        "must": {
                            "match": {
                                "comment": {
                                    "query": "foo bar"
                                }
                            }
                        },
                        "should": {
                            "match_phrase": {
                                "comment": {
                                    "query": "foo bar",
                                    "slop": 1,
                                    "boost": 10.0
                                }
                            }
                        },
                        "minimum_should_match": 0
                    }
                }
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

[[overriding-global-settings]]
===== Overriding global settings

You can set highlighting options globally and override them for individual
fields:

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "number_of_fragments" : 3,
        "fragment_size" : 150,
        "fields" : {
            "_all" : { "pre_tags" : ["<em>"], "post_tags" : ["</em>"] },
            "blog.title" : { "number_of_fragments" : 0 },
            "blog.author" : { "number_of_fragments" : 0 },
            "blog.comment" : { "number_of_fragments" : 5, "order" : "score" }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

[[field-match]]
===== Highlighting in all fields

By default, only fields that contain a query match are highlighted. Set
`require_field_match` to `false` to highlight all fields.

[source,js]
--------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "require_field_match": false,
        "fields": {
            "_all" : { "pre_tags" : ["<em>"], "post_tags" : ["</em>"] }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

[[matched-fields]]
===== Combining matches on multiple fields

WARNING: This is only supported by the `fvh` highlighter

The Fast Vector Highlighter can combine matches on multiple fields to
highlight a single field. This is most intuitive for multifields that
analyze the same string in different ways. All `matched_fields` must have
`term_vector` set to `with_positions_offsets`, but only the field to which
the matches are combined is loaded, so only that field would benefit from
having `store` set to `true`.

In the following examples, `comment` is analyzed by the `english`
analyzer and `comment.plain` is analyzed by the `standard` analyzer.
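
A mapping along these lines would produce such a setup (this is an
illustrative sketch; the index and type names are placeholders, and both
fields need `term_vector` set to `with_positions_offsets`):

[source,js]
--------------------------------------------------
PUT /twitter
{
    "mappings": {
        "tweet": {
            "properties": {
                "comment": {
                    "type": "text",
                    "analyzer": "english",
                    "term_vector": "with_positions_offsets",
                    "fields": {
                        "plain": {
                            "type": "text",
                            "analyzer": "standard",
                            "term_vector": "with_positions_offsets"
                        }
                    }
                }
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE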

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "query_string": {
            "query": "comment.plain:running scissors",
            "fields": ["comment"]
        }
    },
    "highlight": {
        "order": "score",
        "fields": {
            "comment": {
                "matched_fields": ["comment", "comment.plain"],
                "type" : "fvh"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

The above matches both "run with scissors" and "running with scissors"
and would highlight "running" and "scissors" but not "run". If both
phrases appear in a large document then "running with scissors" is
sorted above "run with scissors" in the fragments list because there
are more matches in that fragment.

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "query_string": {
            "query": "running scissors",
            "fields": ["comment", "comment.plain^10"]
        }
    },
    "highlight": {
        "order": "score",
        "fields": {
            "comment": {
                "matched_fields": ["comment", "comment.plain"],
                "type" : "fvh"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

The above highlights "run" as well as "running" and "scissors" but
still sorts "running with scissors" above "run with scissors" because
the plain match ("running") is boosted.

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "query_string": {
            "query": "running scissors",
            "fields": ["comment", "comment.plain^10"]
        }
    },
    "highlight": {
        "order": "score",
        "fields": {
            "comment": {
                "matched_fields": ["comment.plain"],
                "type" : "fvh"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

The above query wouldn't highlight "run" or "scissor" but shows that
it is just fine not to list the field to which the matches are combined
(`comment`) in the matched fields.

[NOTE]
Technically it is also fine to add fields to `matched_fields` that
don't share the same underlying string as the field to which the matches
are combined. The results might not make much sense and if one of the
matches is off the end of the text then the whole query will fail.

[NOTE]
===================================================================
There is a small amount of overhead involved with setting
`matched_fields` to a non-empty array so always prefer

[source,js]
--------------------------------------------------
    "highlight": {
        "fields": {
            "comment": {}
        }
    }
--------------------------------------------------
// NOTCONSOLE

to

[source,js]
--------------------------------------------------
    "highlight": {
        "fields": {
            "comment": {
                "matched_fields": ["comment"],
                "type" : "fvh"
            }
        }
    }
--------------------------------------------------
// NOTCONSOLE
===================================================================

[[explicit-field-order]]
===== Explicitly ordering highlighted fields

Elasticsearch highlights the fields in the order that they are sent, but per the
JSON spec, objects are unordered. If you need to be explicit about the order
in which fields are highlighted, specify the `fields` as an array:

[source,js]
--------------------------------------------------
GET /_search
{
    "highlight": {
        "fields": [
            { "title": {} },
            { "text": {} }
        ]
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

None of the highlighters built into Elasticsearch care about the order in
which the fields are highlighted, but a plugin might.