Our current default behaviour of ignoring unrated documents when calculating
precision seems a bit counterintuitive. Instead we should treat those documents
as "irrelevant" by default and provide an optional parameter to ignore them if
that is the behaviour the user wants.
There's a currently unhandled edge case for the precision@ metric. When none of
the search hits in the result are rated, we have neither true nor false
positives, which currently leads to a division by zero. We should return a
precision of 0.0 in this case.
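A minimal plain-Java sketch of that precision behaviour, under the assumption of illustrative names (the real metric works on search hits and rating objects rather than bare ids, and `ignoreUnrated` merely stands in for whatever the optional parameter ends up being called):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PrecisionSketch {

    /**
     * Computes precision over the top hits. Hits without a rating are treated as
     * irrelevant by default; if ignoreUnrated is set they are skipped entirely.
     * Returns 0.0 when no hit contributes to the calculation (avoids division by zero).
     */
    static double precision(List<String> hitIds, Map<String, Boolean> relevanceByDocId, boolean ignoreUnrated) {
        int relevantRetrieved = 0;
        int retrieved = 0;
        for (String docId : hitIds) {
            Boolean relevant = relevanceByDocId.get(docId); // null means "unrated"
            if (relevant == null) {
                if (ignoreUnrated) {
                    continue;          // old default: unrated hits are ignored
                }
                relevant = false;      // new default: unrated hits count as irrelevant
            }
            retrieved++;
            if (relevant) {
                relevantRetrieved++;
            }
        }
        if (retrieved == 0) {
            return 0.0;                // neither true nor false positives
        }
        return (double) relevantRetrieved / retrieved;
    }

    public static void main(String[] args) {
        Map<String, Boolean> ratings = new HashMap<>();
        ratings.put("doc1", true);
        ratings.put("doc2", false);
        List<String> hits = Arrays.asList("doc1", "doc2", "doc3"); // doc3 is unrated
        System.out.println(precision(hits, ratings, false)); // 1/3
        System.out.println(precision(hits, ratings, true));  // 1/2
    }
}
```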
When multiple ratings for the same document (identified by _index, _type,
_id) are specified in the request, we should throw an error. This change adds a
check for this in the RatedRequest setter (and the constructor that uses that setter).
Closes #20997
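A sketch of the kind of check this adds, using a stand-in class rather than the real RatedRequest/RatedDocument types:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateRatingCheckSketch {

    /** Minimal stand-in for a rated document, identified by _index/_type/_id. */
    static class RatedDoc {
        final String index, type, id;
        final int rating;
        RatedDoc(String index, String type, String id, int rating) {
            this.index = index; this.type = type; this.id = id; this.rating = rating;
        }
    }

    /** Rejects the ratings list if the same document is rated more than once. */
    static void validateNoDuplicates(List<RatedDoc> ratedDocs) {
        Set<String> seen = new HashSet<>();
        for (RatedDoc doc : ratedDocs) {
            String key = doc.index + "/" + doc.type + "/" + doc.id;
            if (seen.add(key) == false) {
                throw new IllegalArgumentException("duplicate rated document: " + key);
            }
        }
    }

    public static void main(String[] args) {
        try {
            validateNoDuplicates(Arrays.asList(
                new RatedDoc("idx", "doc", "1", 1),
                new RatedDoc("idx", "doc", "1", 0)));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // duplicate rated document: idx/doc/1
        }
    }
}
```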
This adds support for templating in rank eval requests.
Relates to #20231
Problem: In its current state the rank-eval request API forces the user to repeat complete queries for each test request. In most use cases the structure of the query under test is stable, with only parameters changing across requests, so this amounts to a lot of boilerplate JSON for something that could be expressed more concisely.
This change uses templating via the ScriptService to let users submit a single test request template and specify only template parameters on a per test request basis.
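Roughly the idea, with plain string substitution standing in for the actual script/templating machinery and a made-up match query template:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class TemplatedRequestSketch {

    /** Naive parameter substitution; the real implementation goes through the ScriptService. */
    static String render(String template, Map<String, String> params) {
        String rendered = template;
        for (Map.Entry<String, String> entry : params.entrySet()) {
            rendered = rendered.replace("{{" + entry.getKey() + "}}", entry.getValue());
        }
        return rendered;
    }

    public static void main(String[] args) {
        // One shared query template instead of repeating the full query per test request.
        String template = "{\"query\": {\"match\": {\"{{field}}\": \"{{text}}\"}}}";

        // Each test request only carries its own parameters.
        Map<String, String> request1 = new HashMap<>();
        request1.put("field", "title");
        request1.put("text", "berlin");

        Map<String, String> request2 = new HashMap<>();
        request2.put("field", "body");
        request2.put("text", "amsterdam");

        for (Map<String, String> params : Arrays.asList(request1, request2)) {
            System.out.println(render(template, params));
        }
    }
}
```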
The unknown document section in the response for each query can now be derived
from the rated hits that are also part of the response, simply by filtering out
the documents that have no rating.
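For illustration, a small sketch of that filtering step (stand-in types, not the actual response classes):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class UnknownDocsSketch {

    /** Minimal stand-in for a search hit plus its (possibly missing) rating. */
    static class RatedHit {
        final String docId;
        final Integer rating; // null means the document was not rated in the request
        RatedHit(String docId, Integer rating) {
            this.docId = docId;
            this.rating = rating;
        }
    }

    /** The "unknown documents" are simply the rated hits that carry no rating. */
    static List<String> unknownDocs(List<RatedHit> hits) {
        return hits.stream()
            .filter(hit -> hit.rating == null)
            .map(hit -> hit.docId)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<RatedHit> hits = Arrays.asList(
            new RatedHit("doc1", 1), new RatedHit("doc2", null), new RatedHit("doc3", 0));
        System.out.println(unknownDocs(hits)); // [doc2]
    }
}
```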
Currently each implementation of RankedListQualityMetric does some initial
joining operation that links the input search hits with a rated document rating,
if available. Also, all metrics collect unknown docs and now also need to add the
list of rated search hits to the partial query evaluation. This change
centralizes this work in new helper methods on RankedListQualityMetric.
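A sketch of what such a shared helper could look like, with a hypothetical `joinHitsWithRatings` name and plain Java types instead of search hits:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class JoinHitsWithRatingsSketch {

    /** A search hit id paired with its rating, or null if the document was not rated. */
    static class RatedHit {
        final String docId;
        final Integer rating;
        RatedHit(String docId, Integer rating) {
            this.docId = docId;
            this.rating = rating;
        }
        @Override
        public String toString() {
            return docId + "=" + rating;
        }
    }

    /**
     * The joining step every metric used to repeat: look up each search hit in the
     * provided ratings and keep the pair, whether or not a rating was found.
     */
    static List<RatedHit> joinHitsWithRatings(List<String> hitIds, Map<String, Integer> ratings) {
        return hitIds.stream()
            .map(id -> new RatedHit(id, ratings.get(id)))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> ratings = new HashMap<>();
        ratings.put("doc1", 1);
        System.out.println(joinHitsWithRatings(Arrays.asList("doc1", "doc2"), ratings));
        // prints: [doc1=1, doc2=null]
    }
}
```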
This change adds a `hits` section to the response part for each ranking
evaluation query, containing a list of documents (index/type/id) and ratings (if
the document was rated in the request). This section can be used to better
understand the calculation of the ranking quality of this particular query, but
it can also be used to identify the "unknown" (that is, unrated) documents that
were part of the search hits, for example because a UI later wants to present
those documents to the user to get a rating for them.
If the user specifies a set of field names using a parameter called
`summary_fields` in the request, those fields are also included as part of the
response in addition to "_index", "_type", "_id".
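To illustrate the shape of one entry in that `hits` section, a sketch using plain maps; apart from "_index", "_type", "_id" and `summary_fields`, the key names are illustrative rather than the actual response syntax:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HitsSectionSketch {

    /** Builds one entry of the per-query hits section as a plain map. */
    static Map<String, Object> hitEntry(String index, String type, String id, Integer rating,
                                        Map<String, Object> summaryFieldValues) {
        Map<String, Object> entry = new LinkedHashMap<>();
        entry.put("_index", index);
        entry.put("_type", type);
        entry.put("_id", id);
        entry.putAll(summaryFieldValues); // fields requested via summary_fields
        entry.put("rating", rating);      // null if the document was not rated in the request
        return entry;
    }

    public static void main(String[] args) {
        Map<String, Object> summary = new LinkedHashMap<>();
        summary.put("title", "Amsterdam travel guide");
        System.out.println(hitEntry("my_index", "doc", "42", 1, summary));
        System.out.println(hitEntry("my_index", "doc", "43", null, summary));
    }
}
```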
In order to understand how well particular queries in a joint ranking evaluation
request work, we want to break down the overall metric into its components, each
contributed by a particular query. The response structure now has a
`details` section under which we can summarize this information. Each
sub-section is keyed by the query-id and currently only contains the partial
metric and the unknown_docs section for each query.
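A sketch of how such a per-query breakdown could be assembled; the `unknown_docs` key comes from the description above, the other names are illustrative:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DetailsSectionSketch {

    /** One per-query breakdown: the partial metric value and the unknown documents. */
    static Map<String, Object> queryDetails(double partialMetric, List<String> unknownDocs) {
        Map<String, Object> details = new LinkedHashMap<>();
        details.put("metric_score", partialMetric);
        details.put("unknown_docs", unknownDocs);
        return details;
    }

    public static void main(String[] args) {
        // The details section is keyed by the query id of each rated request.
        Map<String, Object> details = new LinkedHashMap<>();
        details.put("amsterdam_query", queryDetails(0.5, Arrays.asList("doc7")));
        details.put("berlin_query", queryDetails(1.0, Arrays.asList()));
        System.out.println(details);
    }
}
```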
To be consistent with the output of the search API, we should use the same field
names for specifying the document ("_index", "_type", "_id") when providing the
rated documents in the `rank_eval` request.
Currently the top level spec_id serves as a human-readable description of the
ranking evaluation API call. Since there is only one id possible, it can be
dropped to simplify the request.
Closes #20438
Every rated document needs an index/type/id parameter, so adding a "key" object
like we currently do only leads to an additional unneeded level of nesting in
the REST request.
Closes #20417
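To illustrate the difference, a small sketch with plain maps; the field names are illustrative, not the actual request syntax:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RatedDocNestingSketch {

    public static void main(String[] args) {
        // Before: the document coordinates were wrapped in an extra "key" object.
        Map<String, Object> key = new LinkedHashMap<>();
        key.put("index", "my_index");
        key.put("type", "doc");
        key.put("id", "1");
        Map<String, Object> nested = new LinkedHashMap<>();
        nested.put("key", key);
        nested.put("rating", 1);

        // After: index/type/id live directly on the rated document entry.
        Map<String, Object> flattened = new LinkedHashMap<>();
        flattened.put("index", "my_index");
        flattened.put("type", "doc");
        flattened.put("id", "1");
        flattened.put("rating", 1);

        System.out.println(nested);
        System.out.println(flattened);
    }
}
```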
PrecisionAtN and ReciprocalRank are binary evaluation metrics by default that
only distinguish between relevant and irrelevant search results. So far we assumed
that relevant documents are labeled with 1 (irrelevant docs with 0) in the
evaluation request, but this is cumbersome if the ratings are provided on a
larger integer scale and would need to be mapped to a 0/1 value.
This change introduces a threshold parameter on the PrecisionAtN and
ReciprocalRank metrics that can be used to set the rating at or above which a
document is considered "relevant". It defaults to 1, so in case of 0/1 ratings
the threshold doesn't have to be set and only ratings with value 0 are
considered irrelevant.
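A sketch of the binarization this enables; the parameter name here is illustrative:

```java
public class RelevanceThresholdSketch {

    /** A rating counts as "relevant" once it reaches the configured threshold. */
    static boolean isRelevant(int rating, int relevantRatingThreshold) {
        return rating >= relevantRatingThreshold;
    }

    public static void main(String[] args) {
        // Default threshold of 1: only rating 0 is irrelevant.
        System.out.println(isRelevant(0, 1)); // false
        System.out.println(isRelevant(1, 1)); // true

        // On a 0..4 scale a user might only accept ratings of 3 and above as relevant.
        System.out.println(isRelevant(2, 3)); // false
        System.out.println(isRelevant(4, 3)); // true
    }
}
```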
This adds roundtrip testing to RankEvalSpec, fixes issues
introduced with the previous roundtrip tests, splits
xcontent generation/parsing from actually checking the resulting
objects to deal with e.g. all evaluation metrics needing
some extra treatment.
Renames QuerySpec to RatedRequest, renames newly introduced xcontent
generation helper to conform with naming conventions.
Fixes several lines that were too long, adds missing types where
needed.
This factors the roundtripping out of RatedDocumentTests. Makes
RankedListQualityMetric and RatedDocument implement FromXContentBuilder
to be able to do the aforementioned refactoring in a generic way. Adds
a roundtrip test to DiscountedCumulativeGainAt.
Open questions:
DiscountedCumulativeGain didn't have a constructor that accepted all possible
parameters as arguments. Added one. I guess we still want to keep the one
that only requires the position argument?
To make roundtripping work I had to change the NAME parameter when generating
XContent for DiscountedCumulativeGainAt - all remaining unit tests seem to be
passing (haven't checked the REST tests yet) - need to figure out why that was
there to begin with.
For the two current metrics, Prec@ and reciprocal rank, we currently average the
partial results in the transport action. If other metrics later need a different
behaviour or want to parametrize this, this operation should be part of the
metric itself, so this change moves it there. It also removes one of the two test
packages; the main code already lives in a single package.
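A sketch of the idea of letting each metric own the combination step, with hypothetical interface and method names:

```java
import java.util.Arrays;
import java.util.List;

public class MetricCombineSketch {

    /** Hypothetical hook: each metric decides how per-query scores are combined. */
    interface EvaluationMetric {
        double combine(List<Double> partialResults);
    }

    /** Precision and reciprocal rank simply average the per-query results. */
    static class MeanCombiningMetric implements EvaluationMetric {
        @Override
        public double combine(List<Double> partialResults) {
            return partialResults.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
        }
    }

    public static void main(String[] args) {
        EvaluationMetric metric = new MeanCombiningMetric();
        System.out.println(metric.combine(Arrays.asList(1.0, 0.5, 0.0))); // 0.5
    }
}
```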
This adds a second query evaluation metric alongside precision_at. Reciprocal
Rank is defined as 1/rank, where rank is the position of the first relevant
document in the search result. The results are averaged across the sample of
queries, following
https://en.wikipedia.org/wiki/Mean_reciprocal_rank
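For illustration, a minimal plain-Java sketch of the reciprocal rank computation and the averaging step:

```java
import java.util.Arrays;
import java.util.List;

public class ReciprocalRankSketch {

    /**
     * Reciprocal rank for a single query: 1/rank of the first relevant hit
     * (1-based), or 0.0 if no relevant document shows up in the result list.
     */
    static double reciprocalRank(List<Boolean> relevanceOfHits) {
        for (int i = 0; i < relevanceOfHits.size(); i++) {
            if (relevanceOfHits.get(i)) {
                return 1.0 / (i + 1);
            }
        }
        return 0.0;
    }

    public static void main(String[] args) {
        // Query 1: first relevant document at position 2 -> 1/2
        double rr1 = reciprocalRank(Arrays.asList(false, true, false));
        // Query 2: first relevant document at position 1 -> 1/1
        double rr2 = reciprocalRank(Arrays.asList(true, false, false));
        // Mean reciprocal rank over the sample of queries
        System.out.println((rr1 + rr2) / 2); // 0.75
    }
}
```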