Extending `_rank_eval` documentation
This commit is contained in:
parent 0a6c6ac360
commit d9e67a2c95

@@ -1,149 +1,150 @@
[[rank-eval]]
= Ranking Evaluation

[[search-rank-eval]]
== Ranking Evaluation API

[partintro]
--
experimental[]

Imagine having built and deployed a search application: users are happily
entering queries into your search frontend. Your application takes these
queries, creates a dedicated Elasticsearch query from them, and returns the
results back to the user. Imagine further that you are tasked with tweaking the
Elasticsearch query that is being created to return specific results for a
certain set of queries without breaking others. How should that be done?

The ranking evaluation API allows you to evaluate the quality of ranked search
results over a set of typical search queries. Given this set of queries and a
list of manually rated documents, the `_rank_eval` endpoint calculates and
returns typical information retrieval metrics like _mean reciprocal rank_,
_precision_ or _discounted cumulative gain_.

One possible solution is to gather a sample of user queries representative of
how the search application is used and to retrieve the search results they
return. As a next step these search results would be manually annotated for
their relevancy to the original user query. Based on this set of rated requests
we can compute a couple of metrics telling us more about how many relevant
search results are being returned.

=== Overview

This is a nice approximation of how well our translation from user query to
Elasticsearch query works at providing the user with relevant search results.
Elasticsearch provides a ranking evaluation API that lets you compute scores for
your current ranking function based on annotated search results.
--

Search quality evaluation starts with looking at the users of your search application, and the things that they are searching for.
Users have a specific _information need_, e.g. they are looking for a gift in a web shop or want to book a flight for their next holiday.
They usually enter some search terms into a search box or some other web form.
All of this information, together with meta information about the user (e.g. the browser, location, earlier preferences etc.), then gets translated into a query to the underlying search system.
== Plain ranking evaluation

The challenge for search engineers is to tweak this translation process from user entries to a concrete query in such a way that the search results contain the most relevant information with respect to the user's _information_need_.
This can only be done if the search result quality is evaluated constantly across a representative test suite of typical user queries, so that improvements in the ranking for one particular query type do not negatively affect the ranking for other types of queries.

In its most simple form, for each query a set of ratings can be supplied:
In order to get started with search quality evaluation, three basic things are needed:

. a collection of documents you want to evaluate your query performance against, usually one or more indices
. a collection of typical search requests that users enter into your system
. a set of document ratings that judge the documents' relevance with respect to a search request

It is important to note that one set of document ratings is needed per test query, and that
the relevance judgements are based on the _information_need_ of the user that entered the query.

The ranking evaluation API provides a convenient way to use this information in a ranking evaluation request to calculate different search evaluation metrics. This gives you a first estimation of your overall search quality and a measurement to optimize against when fine-tuning various aspects of the query generation in your application.

== Ranking evaluation request structure

In its most basic form, a request to the `_rank_eval` endpoint has two sections:

[source,js]
-----------------------------
GET /twitter/tweet/_rank_eval
GET /my_index/_rank_eval
{
    "requests": [
        {
            "id": "JFK query", <1>
            "request": {
                "query": {
                    "match": {
                        "title": {
                            "query": "JFK"}}}}, <2>
            "ratings": [ <3>
                {
                    "rating": 1.5, <4>
                    "_type": "tweet", <5>
                    "_id": "13736278", <6>
                    "_index": "twitter" <7>
                },
                {
                    "rating": 1,
                    "_type": "tweet",
                    "_id": "30900421",
                    "_index": "twitter"
                }],
            "summary_fields": ["title"] <8>
        }],
    "metric": { <9>
        "reciprocal_rank": {}
    },
    "max_concurrent_searches": 10 <10>
    "requests": [ ... ], <1>
    "metric": { <2>
       "reciprocal_rank": { ... } <3>
    }
}
------------------------------
// CONSOLE
// TEST[setup:twitter]
// NOTCONSOLE

<1> A human-readable id for the rated query (it will be re-used in the response to provide further details).
<2> The actual Elasticsearch query to execute.
<3> A set of ratings for how well a certain document fits as a response for the query.
<4> A rating expressing how well the document fits the query; higher is better, values are treated as integers.
<5> The type where the rated document lives.
<6> The id of the rated document.
<7> The index where the rated document lives.
<8> For a verbose response, specify which properties of a search hit should be returned in addition to index/type/id.
<9> A metric to use for evaluation. See below for a list.
<10> Maximum number of search requests to execute in parallel. Set to 10 by
default.
<1> a set of typical search requests to your system
<2> definition of the evaluation metric to calculate
<3> a specific metric and its parameters
The request section contains several search requests typical to your application, along with the document ratings for each particular search request, e.g.

[source,js]
-----------------------------
    "requests": [
        {
            "id": "amsterdam_query", <1>
            "request": { <2>
                "query": { "match": { "text": "amsterdam" }}
            },
            "ratings": [ <3>
                { "_index": "my_index", "_id": "doc1", "rating": 0 },
                { "_index": "my_index", "_id": "doc2", "rating": 3 },
                { "_index": "my_index", "_id": "doc3", "rating": 1 }
            ]
        },
        {
            "id": "berlin_query",
            "request": {
                "query": { "match": { "text": "berlin" }}
            },
            "ratings": [
                { "_index": "my_index", "_id": "doc1", "rating": 1 }
            ]
        }
    ]
------------------------------
// NOTCONSOLE

<1> the search request's id, used to group result details later
<2> the query that is being evaluated
<3> a list of document ratings; each entry contains the document's `_index` and `_id` together with
the rating of the document's relevance with regard to this search request

A document `rating` can be any integer value that expresses the relevance of the document on a user-defined scale. For some of the metrics, just giving a binary rating (e.g. `0` for irrelevant and `1` for relevant) will be sufficient, while other metrics can use a more fine-grained scale.

== Template based ranking evaluation

As an alternative to having to provide a single query per test request, it is possible to specify query templates in the evaluation request and later refer to them. This way, queries with a similar structure that only differ in their parameters don't have to be repeated all the time in the `requests` section. In typical search systems, where user inputs usually get filled into a small set of query templates, this helps make the evaluation request more succinct.

[source,js]
--------------------------------
GET /twitter/tweet/_rank_eval
GET /my_index/_rank_eval
{
    "templates": [{
        "id": "match_query",
        "template": {
    [...]
    "templates": [
        {
            "id": "match_one_field_query", <1>
            "template": { <2>
                "inline": {
                    "query": {
                        "match": {
                            "{{field}}": {
                                "query": "{{query_string}}"}}}}}}], <1>
                        "match": { "{{field}}": { "query": "{{query_string}}" }}
                }
            }
        }
    }
    ],
    "requests": [
        {
            "id": "JFK query",
            "ratings": [
                {
                    "rating": 1.5,
                    "_type": "tweet",
                    "_id": "13736278",
                    "_index": "twitter"
                },
                {
                    "rating": 1,
                    "_type": "tweet",
                    "_id": "30900421",
                    "_index": "twitter"
                }],
            "params": {
                "query_string": "JFK", <2>
                "field": "opening_text" <2>
            },
            "template_id": "match_query"
        }],
    "metric": {
        "precision": {
            "relevant_rating_threshold": 2
        }
    }
        {
            "id": "amsterdam_query"
            "ratings": [ ... ],
            "template_id": "match_one_field_query", <3>
            "params": { <4>
                "query_string": "amsterdam",
                "field": "text"
            }
        },
    [...]
}
--------------------------------
// CONSOLE
// TEST[setup:twitter]
// NOTCONSOLE

<1> The template to use for every rated search request.
<2> The parameters to use to fill the template above.
<1> the template id
<2> the template definition to use
<3> a reference to a previously defined template
<4> the parameters to use to fill the template
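
Conceptually, each rated request names a template and supplies parameters that are substituted into the template's `{{...}}` placeholders before the query is executed. The following Java sketch only illustrates that substitution idea with plain string replacement; the real feature goes through Elasticsearch's search template machinery, so the class and method names here are purely illustrative assumptions.

[source,java]
--------------------------------
import java.util.HashMap;
import java.util.Map;

public class TemplateSubstitutionExample {

    /** Fills {{placeholder}} style parameters into a template string. Illustration only. */
    static String fillTemplate(String template, Map<String, String> params) {
        String result = template;
        for (Map.Entry<String, String> entry : params.entrySet()) {
            result = result.replace("{{" + entry.getKey() + "}}", entry.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String template = "{ \"query\": { \"match\": { \"{{field}}\": { \"query\": \"{{query_string}}\" } } } }";

        Map<String, String> params = new HashMap<>();
        params.put("field", "text");
        params.put("query_string", "amsterdam");

        // the "amsterdam_query" rated request would effectively run this concrete query
        System.out.println(fillTemplate(template, params));
    }
}
--------------------------------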
== Available evaluation metrics

== Valid evaluation metrics

The `metric` section determines which of the available evaluation metrics is going to be used.
Currently, the following metrics are supported:

=== Precision

=== Precision at k (Prec@k)

Citing from https://en.wikipedia.org/wiki/Information_retrieval#Precision[Precision
page at Wikipedia]:
"Precision is the fraction of the documents retrieved that are relevant to the
user's information need."
This metric measures the number of relevant results in the top k search results. It is a form of the well-known https://en.wikipedia.org/wiki/Information_retrieval#Precision[Precision] metric that only looks at the top k documents. It is the fraction of relevant documents in those first k
search results. A precision at 10 (prec@10) value of 0.6 then means that six out of the 10 top hits were
relevant with respect to the user's information need.

Works well as an easy to explain evaluation metric. Caveat: All result positions
are treated equally. So a ranking of ten results that contains one relevant
This metric works well as a first and easy to explain evaluation metric.
Documents in the collection need to be rated either as relevant or irrelevant with respect to the current query. Prec@k does not take into account where in the top k results the relevant documents occur, so a ranking of ten results that contains one relevant
result in position 10 is equally good as a ranking of ten results that contains
one relevant result in position 1.

[source,js]
--------------------------------
GET /twitter/tweet/_rank_eval
GET /twitter/_rank_eval
{
    "requests": [
    {
@@ -153,8 +154,8 @@ GET /twitter/tweet/_rank_eval
    }],
    "metric": {
      "precision": {
        "relevant_rating_threshold": 1, <1>
        "ignore_unlabeled": false <2>
        "relevant_rating_threshold": 1,
        "ignore_unlabeled": false
      }
   }
}

@@ -162,23 +163,27 @@ GET /twitter/tweet/_rank_eval
// CONSOLE
// TEST[setup:twitter]

<1> For graded relevance ratings, only ratings above this threshold are
considered as relevant results for the given query. By default this is set to 1.
The `precision` metric takes the following optional parameters:

<2> All documents retrieved by the rated request that have no ratings
assigned are treated as irrelevant by default. Set to true in order to drop them
from the precision computation entirely.
[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`relevant_rating_threshold` |Sets the rating threshold from which on documents are considered to be
"relevant". Defaults to `1`.
|`ignore_unlabeled` |Controls how unlabeled documents in the search results are counted.
If set to 'true', unlabeled documents are ignored and neither count as relevant nor irrelevant. If set to 'false' (the default), they are treated as irrelevant.
|=======================================================================

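To make the arithmetic concrete, here is a small, self-contained Java sketch (not the plugin's implementation) of how precision at k could be computed from the ratings of the top k hits, following the `relevant_rating_threshold` and `ignore_unlabeled` semantics described above. Names are illustrative only.

[source,java]
--------------------------------
import java.util.Arrays;
import java.util.List;

public class PrecisionAtKExample {

    /**
     * Computes precision at k over the top k ratings of a result list.
     * A null rating stands for an unlabeled document.
     */
    static double precisionAtK(List<Integer> topKRatings, int relevantRatingThreshold, boolean ignoreUnlabeled) {
        int relevant = 0;
        int retrieved = 0;
        for (Integer rating : topKRatings) {
            if (rating == null) {
                if (ignoreUnlabeled) {
                    continue;          // drop unlabeled documents from the computation
                }
                retrieved++;           // unlabeled counts as irrelevant by default
                continue;
            }
            retrieved++;
            if (rating >= relevantRatingThreshold) {
                relevant++;
            }
        }
        return retrieved == 0 ? 0.0 : (double) relevant / retrieved;
    }

    public static void main(String[] args) {
        // six of the top ten hits are rated relevant (rating >= 1) -> prec@10 = 0.6
        List<Integer> ratings = Arrays.asList(1, 0, 1, 1, null, 1, 0, 1, null, 1);
        System.out.println(precisionAtK(ratings, 1, false)); // 0.6
        System.out.println(precisionAtK(ratings, 1, true));  // 0.75 (8 labeled docs, 6 relevant)
    }
}
--------------------------------
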
=== Mean reciprocal rank

=== Reciprocal rank

For any given query this is the reciprocal of the rank of the
first relevant document retrieved. For example, finding the first relevant result
in position 3 means the reciprocal rank is going to be 1/3.
For every query in the test suite, this metric calculates the reciprocal of the rank of the
first relevant document. For example, finding the first relevant result
in position 3 means the reciprocal rank is 1/3. The reciprocal rank for each query
is averaged across all queries in the test suite to give the https://en.wikipedia.org/wiki/Mean_reciprocal_rank[mean reciprocal rank].

[source,js]
--------------------------------
GET /twitter/tweet/_rank_eval
GET /twitter/_rank_eval
{
    "requests": [
    {
@@ -187,25 +192,31 @@ GET /twitter/tweet/_rank_eval
      "ratings": []
    }],
    "metric": {
        "reciprocal_rank": {}
        "mean_reciprocal_rank": {}
    }
}
--------------------------------
// CONSOLE
// TEST[setup:twitter]

=== Normalized discounted cumulative gain
The `mean_reciprocal_rank` metric takes the following optional parameters:

In contrast to the two metrics above, this takes both the grade of the result
found and the position of the returned document into account.
[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`relevant_rating_threshold` |Sets the rating threshold from which on documents are considered to be
"relevant". Defaults to `1`.
|=======================================================================

For more details also check the explanation on
https://en.wikipedia.org/wiki/Discounted_cumulative_gain[Wikipedia].
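
As a rough illustration (again, not the actual implementation), this Java sketch computes the reciprocal rank per query and averages the values into the mean reciprocal rank; the `relevantThreshold` parameter mirrors the `relevant_rating_threshold` setting from the table above.

[source,java]
--------------------------------
import java.util.Arrays;
import java.util.List;

public class MeanReciprocalRankExample {

    /** Reciprocal of the rank of the first relevant document, or 0 if none is found. */
    static double reciprocalRank(List<Integer> ratingsInRankOrder, int relevantThreshold) {
        for (int i = 0; i < ratingsInRankOrder.size(); i++) {
            if (ratingsInRankOrder.get(i) >= relevantThreshold) {
                return 1.0 / (i + 1);   // ranks are 1-based
            }
        }
        return 0.0;
    }

    public static void main(String[] args) {
        // query 1: first relevant document at rank 3 -> 1/3
        double q1 = reciprocalRank(Arrays.asList(0, 0, 1, 1), 1);
        // query 2: first relevant document at rank 2 -> 1/2
        double q2 = reciprocalRank(Arrays.asList(0, 2, 0), 1);
        // mean reciprocal rank over the test suite: (1/3 + 1/2) / 2 = 5/12
        System.out.println((q1 + q2) / 2); // 0.41666...
    }
}
--------------------------------
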
=== Discounted cumulative gain (DCG)

In contrast to the two metrics above, https://en.wikipedia.org/wiki/Discounted_cumulative_gain[discounted cumulative gain] takes both the rank and the rating of the search results into account.

The assumption is that highly relevant documents are more useful to the user when appearing at the top of the result list. Therefore, the DCG formula reduces the contribution that high ratings for documents on lower search ranks have on the overall DCG metric.

[source,js]
--------------------------------
GET /twitter/tweet/_rank_eval
GET /twitter/_rank_eval
{
    "requests": [
    {
@@ -215,7 +226,7 @@ GET /twitter/tweet/_rank_eval
    }],
    "metric": {
       "dcg": {
            "normalize": false <1>
            "normalize": false
       }
    }
}

@@ -223,9 +234,15 @@ GET /twitter/tweet/_rank_eval
// CONSOLE
// TEST[setup:twitter]

<1> Set to true to compute nDCG instead of DCG; the default is false.
The `dcg` metric takes the following optional parameters:

[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`normalize` | If set to `true`, this metric will calculate the https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG].
|=======================================================================

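The following Java sketch illustrates one common DCG formulation, `(2^rating - 1) / log2(rank + 1)`, and how normalization divides by the ideal DCG of the same ratings. It is a sketch under the assumption that this variant matches the implementation; check the source for the exact formula used by the `dcg` metric.

[source,java]
--------------------------------
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class DiscountedCumulativeGainExample {

    /** DCG of ratings listed in result-rank order, using (2^rating - 1) / log2(rank + 1). */
    static double dcg(List<Integer> ratingsInRankOrder) {
        double sum = 0.0;
        for (int i = 0; i < ratingsInRankOrder.size(); i++) {
            double gain = Math.pow(2, ratingsInRankOrder.get(i)) - 1;
            double discount = Math.log(i + 2) / Math.log(2);   // log2(rank + 1), rank = i + 1
            sum += gain / discount;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> ratings = Arrays.asList(3, 2, 3, 0, 1, 2);
        double plainDcg = dcg(ratings);

        // nDCG: divide by the "ideal" DCG obtained by sorting the ratings descending
        List<Integer> ideal = ratings.stream()
                .sorted(Comparator.reverseOrder())
                .collect(Collectors.toList());
        double normalizedDcg = plainDcg / dcg(ideal);

        System.out.println(plainDcg);      // ~13.85
        System.out.println(normalizedDcg); // between 0 and 1
    }
}
--------------------------------
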
== Other parameters

== Response format

Setting `normalize` to true makes DCG values better comparable across different
result set sizes. See also
https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG[Wikipedia
nDCG] for more details.
@@ -37,13 +37,13 @@ import javax.naming.directory.SearchResult;
import static org.elasticsearch.index.rankeval.RankedListQualityMetric.joinHitsWithRatings;

/**
 * Evaluate reciprocal rank. By default documents with a rating equal or bigger
 * Evaluate mean reciprocal rank. By default documents with a rating equal or bigger
 * than 1 are considered to be "relevant" for the reciprocal rank calculation.
 * This value can be changes using the "relevant_rating_threshold" parameter.
 */
public class ReciprocalRank implements RankedListQualityMetric {
public class MeanReciprocalRank implements RankedListQualityMetric {

    public static final String NAME = "reciprocal_rank";
    public static final String NAME = "mean_reciprocal_rank";

    /** ratings equal or above this value will be considered relevant. */
    private int relevantRatingThreshhold = 1;

@@ -51,11 +51,11 @@ public class ReciprocalRank implements RankedListQualityMetric {
    /**
     * Initializes maxAcceptableRank with 10
     */
    public ReciprocalRank() {
    public MeanReciprocalRank() {
        // use defaults
    }

    public ReciprocalRank(StreamInput in) throws IOException {
    public MeanReciprocalRank(StreamInput in) throws IOException {
        this.relevantRatingThreshhold = in.readVInt();
    }

@@ -121,14 +121,14 @@ public class ReciprocalRank implements RankedListQualityMetric {

    private static final ParseField RELEVANT_RATING_FIELD = new ParseField(
            "relevant_rating_threshold");
    private static final ObjectParser<ReciprocalRank, Void> PARSER = new ObjectParser<>(
            "reciprocal_rank", () -> new ReciprocalRank());
    private static final ObjectParser<MeanReciprocalRank, Void> PARSER = new ObjectParser<>(
            "reciprocal_rank", () -> new MeanReciprocalRank());

    static {
        PARSER.declareInt(ReciprocalRank::setRelevantRatingThreshhold, RELEVANT_RATING_FIELD);
        PARSER.declareInt(MeanReciprocalRank::setRelevantRatingThreshhold, RELEVANT_RATING_FIELD);
    }

    public static ReciprocalRank fromXContent(XContentParser parser) {
    public static MeanReciprocalRank fromXContent(XContentParser parser) {
        return PARSER.apply(parser, null);
    }

@@ -150,7 +150,7 @@ public class ReciprocalRank implements RankedListQualityMetric {
        if (obj == null || getClass() != obj.getClass()) {
            return false;
        }
        ReciprocalRank other = (ReciprocalRank) obj;
        MeanReciprocalRank other = (MeanReciprocalRank) obj;
        return Objects.equals(relevantRatingThreshhold, other.relevantRatingThreshhold);
    }

@@ -200,7 +200,7 @@ public class ReciprocalRank implements RankedListQualityMetric {
        if (obj == null || getClass() != obj.getClass()) {
            return false;
        }
        ReciprocalRank.Breakdown other = (ReciprocalRank.Breakdown) obj;
        MeanReciprocalRank.Breakdown other = (MeanReciprocalRank.Breakdown) obj;
        return Objects.equals(firstRelevantRank, other.firstRelevantRank);
    }

@@ -42,15 +42,15 @@ import static org.elasticsearch.index.rankeval.RankedListQualityMetric.joinHitsW
 * considered to be "relevant" for the precision calculation. This value can be
 * changes using the "relevant_rating_threshold" parameter.
 */
public class Precision implements RankedListQualityMetric {
public class PrecisionAtK implements RankedListQualityMetric {

    public static final String NAME = "precision";

    private static final ParseField RELEVANT_RATING_FIELD = new ParseField(
            "relevant_rating_threshold");
    private static final ParseField IGNORE_UNLABELED_FIELD = new ParseField("ignore_unlabeled");
    private static final ObjectParser<Precision, Void> PARSER = new ObjectParser<>(NAME,
            Precision::new);
    private static final ObjectParser<PrecisionAtK, Void> PARSER = new ObjectParser<>(NAME,
            PrecisionAtK::new);

    /**
     * This setting controls how unlabeled documents in the search hits are

@@ -63,16 +63,16 @@ public class Precision implements RankedListQualityMetric {
    /** ratings equal or above this value will be considered relevant. */
    private int relevantRatingThreshhold = 1;

    public Precision() {
    public PrecisionAtK() {
        // needed for supplier in parser
    }

    static {
        PARSER.declareInt(Precision::setRelevantRatingThreshhold, RELEVANT_RATING_FIELD);
        PARSER.declareBoolean(Precision::setIgnoreUnlabeled, IGNORE_UNLABELED_FIELD);
        PARSER.declareInt(PrecisionAtK::setRelevantRatingThreshhold, RELEVANT_RATING_FIELD);
        PARSER.declareBoolean(PrecisionAtK::setIgnoreUnlabeled, IGNORE_UNLABELED_FIELD);
    }

    public Precision(StreamInput in) throws IOException {
    public PrecisionAtK(StreamInput in) throws IOException {
        relevantRatingThreshhold = in.readOptionalVInt();
        ignoreUnlabeled = in.readOptionalBoolean();
    }

@@ -122,7 +122,7 @@ public class Precision implements RankedListQualityMetric {
        return ignoreUnlabeled;
    }

    public static Precision fromXContent(XContentParser parser) {
    public static PrecisionAtK fromXContent(XContentParser parser) {
        return PARSER.apply(parser, null);
    }

@@ -155,7 +155,7 @@ public class Precision implements RankedListQualityMetric {
        }
        EvalQueryQuality evalQueryQuality = new EvalQueryQuality(taskId, precision);
        evalQueryQuality.addMetricDetails(
                new Precision.Breakdown(truePositives, truePositives + falsePositives));
                new PrecisionAtK.Breakdown(truePositives, truePositives + falsePositives));
        evalQueryQuality.addHitsAndRatings(ratedSearchHits);
        return evalQueryQuality;
    }

@@ -179,7 +179,7 @@ public class Precision implements RankedListQualityMetric {
        if (obj == null || getClass() != obj.getClass()) {
            return false;
        }
        Precision other = (Precision) obj;
        PrecisionAtK other = (PrecisionAtK) obj;
        return Objects.equals(relevantRatingThreshhold, other.relevantRatingThreshhold)
                && Objects.equals(ignoreUnlabeled, other.ignoreUnlabeled);
    }

@@ -241,7 +241,7 @@ public class Precision implements RankedListQualityMetric {
        if (obj == null || getClass() != obj.getClass()) {
            return false;
        }
        Precision.Breakdown other = (Precision.Breakdown) obj;
        PrecisionAtK.Breakdown other = (PrecisionAtK.Breakdown) obj;
        return Objects.equals(relevantRetrieved, other.relevantRetrieved)
                && Objects.equals(retrieved, other.retrieved);
    }
@@ -65,15 +65,15 @@ public class RankEvalPlugin extends Plugin implements ActionPlugin {
    public List<NamedWriteableRegistry.Entry> getNamedWriteables() {
        List<NamedWriteableRegistry.Entry> namedWriteables = new ArrayList<>();
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class,
                Precision.NAME, Precision::new));
                PrecisionAtK.NAME, PrecisionAtK::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class,
                ReciprocalRank.NAME, ReciprocalRank::new));
                MeanReciprocalRank.NAME, MeanReciprocalRank::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class,
                DiscountedCumulativeGain.NAME, DiscountedCumulativeGain::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(MetricDetails.class, Precision.NAME,
                Precision.Breakdown::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(MetricDetails.class, PrecisionAtK.NAME,
                PrecisionAtK.Breakdown::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(MetricDetails.class,
                ReciprocalRank.NAME, ReciprocalRank.Breakdown::new));
                MeanReciprocalRank.NAME, MeanReciprocalRank.Breakdown::new));
        return namedWriteables;
    }
}

@@ -65,11 +65,11 @@ public interface RankedListQualityMetric extends ToXContent, NamedWriteable {

    // TODO maybe switch to using a plugable registry later?
    switch (metricName) {
    case Precision.NAME:
        rc = Precision.fromXContent(parser);
    case PrecisionAtK.NAME:
        rc = PrecisionAtK.fromXContent(parser);
        break;
    case ReciprocalRank.NAME:
        rc = ReciprocalRank.fromXContent(parser);
    case MeanReciprocalRank.NAME:
        rc = MeanReciprocalRank.fromXContent(parser);
        break;
    case DiscountedCumulativeGain.NAME:
        rc = DiscountedCumulativeGain.fromXContent(parser);
@@ -36,119 +36,57 @@ import static org.elasticsearch.rest.RestRequest.Method.GET;
import static org.elasticsearch.rest.RestRequest.Method.POST;

/**
 * Accepted input format:
 *
 * General Format:
 *
 *
 {
    "spec_id": "human_readable_id",
    "requests": [{
        "id": "human_readable_id",
        "request": { ... request to check ... },
        "ratings": { ... mapping from doc id to rating value ... }
    }],
    "metric": {
        "... metric_name... ": {
            "... metric_parameter_key ...": ...metric_parameter_value...
        }
    }
 }
 *
 * Example:
 *
 *
 {"spec_id": "huge_weight_on_location",
  "requests": [{
    "id": "amsterdam_query",
    "request": {
        "query": {
            "bool": {
                "must": [
                    {"match": {"beverage": "coffee"}},
                    {"term": {"browser": {"value": "safari"}}},
                    {"term": {"time_of_day": {"value": "morning","boost": 2}}},
                    {"term": {"ip_location": {"value": "ams","boost": 10}}}]}
        },
        "size": 10
    },
    "ratings": [
        {\"index\": \"test\", \"type\": \"my_type\", \"doc_id\": \"1\", \"rating\" : 1 },
        {\"index\": \"test\", \"type\": \"my_type\", \"doc_id\": \"2\", \"rating\" : 0 },
        {\"index\": \"test\", \"type\": \"my_type\", \"doc_id\": \"3\", \"rating\" : 1 }
    ]
  }, {
    "id": "berlin_query",
    "request": {
        "query": {
            "bool": {
                "must": [
                    {"match": {"beverage": "club mate"}},
                    {"term": {"browser": {"value": "chromium"}}},
                    {"term": {"time_of_day": {"value": "evening","boost": 2}}},
                    {"term": {"ip_location": {"value": "ber","boost": 10}}}]}
        },
        "size": 10
    },
    "ratings": [ ... ]
  }],
  "metric": {
      "precisionAtN": {
          "size": 10
      }
  }
 }

 *
 * Output format:
 *
 * General format:
 *
 *
 {
    "took": 59,
    "timed_out": false,
    "_shards": {
        "total": 5,
        "successful": 5,
        "failed": 0
    },
    "quality_level": ... quality level ...,
    "unknown_docs": {"user_request_id": [... list of unknown docs ...]}
 }

 *
 * Example:
 *
 *
 *
 {
    "took": 59,
    "timed_out": false,
    "_shards": {
        "total": 5,
        "successful": 5,
        "failed": 0
    },
    "rank_eval": [{
        "spec_id": "huge_weight_on_location",
        "quality_level": 0.4,
        "unknown_docs": {
            "amsterdam_query": [
                { "index" : "test", "doc_id" : "21"},
                { "index" : "test", "doc_id" : "5"},
                { "index" : "test", "doc_id" : "9"}
            ]
        }, {
            "berlin_query": [
                { "index" : "test", "doc_id" : "42"}
            ]
        }
    }]
 }

 * */
 * {
 *   "requests": [{
 *     "id": "amsterdam_query",
 *     "request": {
 *         "query": {
 *             "match": {
 *                 "text": "amsterdam"
 *             }
 *         }
 *     },
 *     "ratings": [{
 *         "_index": "foo",
 *         "_id": "doc1",
 *         "rating": 0
 *     },
 *     {
 *         "_index": "foo",
 *         "_id": "doc2",
 *         "rating": 1
 *     },
 *     {
 *         "_index": "foo",
 *         "_id": "doc3",
 *         "rating": 1
 *     }
 *     ]
 *   },
 *   {
 *     "id": "berlin_query",
 *     "request": {
 *         "query": {
 *             "match": {
 *                 "text": "berlin"
 *             }
 *         },
 *         "size": 10
 *     },
 *     "ratings": [{
 *         "_index": "foo",
 *         "_id": "doc1",
 *         "rating": 1
 *     }]
 *   }
 * ],
 * "metric": {
 *   "precision": {
 *     "ignore_unlabeled": true
 *   }
 * }
 * }
 */
public class RestRankEvalAction extends BaseRestHandler {

    public RestRankEvalAction(Settings settings, RestController controller) {
@@ -46,7 +46,7 @@ public class EvalQueryQualityTests extends ESTestCase {
                randomDoubleBetween(0.0, 1.0, true));
        if (randomBoolean()) {
            // TODO randomize this
            evalQueryQuality.addMetricDetails(new Precision.Breakdown(1, 5));
            evalQueryQuality.addMetricDetails(new PrecisionAtK.Breakdown(1, 5));
        }
        evalQueryQuality.addHitsAndRatings(ratedHits);
        return evalQueryQuality;

@@ -81,7 +81,7 @@ public class EvalQueryQualityTests extends ESTestCase {
            break;
        case 2:
            if (metricDetails == null) {
                metricDetails = new Precision.Breakdown(1, 5);
                metricDetails = new PrecisionAtK.Breakdown(1, 5);
            } else {
                metricDetails = null;
            }
@@ -43,10 +43,10 @@ public class PrecisionTests extends ESTestCase {
    public void testPrecisionAtFiveCalculation() {
        List<RatedDocument> rated = new ArrayList<>();
        rated.add(new RatedDocument("test", "0", Rating.RELEVANT.ordinal()));
        EvalQueryQuality evaluated = (new Precision()).evaluate("id", toSearchHits(rated, "test"), rated);
        EvalQueryQuality evaluated = (new PrecisionAtK()).evaluate("id", toSearchHits(rated, "test"), rated);
        assertEquals(1, evaluated.getQualityLevel(), 0.00001);
        assertEquals(1, ((Precision.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(1, ((Precision.Breakdown) evaluated.getMetricDetails()).getRetrieved());
        assertEquals(1, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(1, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRetrieved());
    }

    public void testPrecisionAtFiveIgnoreOneResult() {

@@ -56,10 +56,10 @@ public class PrecisionTests extends ESTestCase {
        rated.add(new RatedDocument("test", "2", Rating.RELEVANT.ordinal()));
        rated.add(new RatedDocument("test", "3", Rating.RELEVANT.ordinal()));
        rated.add(new RatedDocument("test", "4", Rating.IRRELEVANT.ordinal()));
        EvalQueryQuality evaluated = (new Precision()).evaluate("id", toSearchHits(rated, "test"), rated);
        EvalQueryQuality evaluated = (new PrecisionAtK()).evaluate("id", toSearchHits(rated, "test"), rated);
        assertEquals((double) 4 / 5, evaluated.getQualityLevel(), 0.00001);
        assertEquals(4, ((Precision.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(5, ((Precision.Breakdown) evaluated.getMetricDetails()).getRetrieved());
        assertEquals(4, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(5, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRetrieved());
    }

    /**

@@ -74,12 +74,12 @@ public class PrecisionTests extends ESTestCase {
        rated.add(new RatedDocument("test", "2", 2));
        rated.add(new RatedDocument("test", "3", 3));
        rated.add(new RatedDocument("test", "4", 4));
        Precision precisionAtN = new Precision();
        PrecisionAtK precisionAtN = new PrecisionAtK();
        precisionAtN.setRelevantRatingThreshhold(2);
        EvalQueryQuality evaluated = precisionAtN.evaluate("id", toSearchHits(rated, "test"), rated);
        assertEquals((double) 3 / 5, evaluated.getQualityLevel(), 0.00001);
        assertEquals(3, ((Precision.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(5, ((Precision.Breakdown) evaluated.getMetricDetails()).getRetrieved());
        assertEquals(3, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(5, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRetrieved());
    }

    public void testPrecisionAtFiveCorrectIndex() {

@@ -90,10 +90,10 @@ public class PrecisionTests extends ESTestCase {
        rated.add(new RatedDocument("test", "1", Rating.RELEVANT.ordinal()));
        rated.add(new RatedDocument("test", "2", Rating.IRRELEVANT.ordinal()));
        // the following search hits contain only the last three documents
        EvalQueryQuality evaluated = (new Precision()).evaluate("id", toSearchHits(rated.subList(2, 5), "test"), rated);
        EvalQueryQuality evaluated = (new PrecisionAtK()).evaluate("id", toSearchHits(rated.subList(2, 5), "test"), rated);
        assertEquals((double) 2 / 3, evaluated.getQualityLevel(), 0.00001);
        assertEquals(2, ((Precision.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(3, ((Precision.Breakdown) evaluated.getMetricDetails()).getRetrieved());
        assertEquals(2, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(3, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRetrieved());
    }

    public void testIgnoreUnlabeled() {

@@ -105,18 +105,18 @@ public class PrecisionTests extends ESTestCase {
        searchHits[2] = new SearchHit(2, "2", new Text("testtype"), Collections.emptyMap());
        searchHits[2].shard(new SearchShardTarget("testnode", new Index("index", "uuid"), 0, null));

        EvalQueryQuality evaluated = (new Precision()).evaluate("id", searchHits, rated);
        EvalQueryQuality evaluated = (new PrecisionAtK()).evaluate("id", searchHits, rated);
        assertEquals((double) 2 / 3, evaluated.getQualityLevel(), 0.00001);
        assertEquals(2, ((Precision.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(3, ((Precision.Breakdown) evaluated.getMetricDetails()).getRetrieved());
        assertEquals(2, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(3, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRetrieved());

        // also try with setting `ignore_unlabeled`
        Precision prec = new Precision();
        PrecisionAtK prec = new PrecisionAtK();
        prec.setIgnoreUnlabeled(true);
        evaluated = prec.evaluate("id", searchHits, rated);
        assertEquals((double) 2 / 2, evaluated.getQualityLevel(), 0.00001);
        assertEquals(2, ((Precision.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(2, ((Precision.Breakdown) evaluated.getMetricDetails()).getRetrieved());
        assertEquals(2, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(2, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRetrieved());
    }

    public void testNoRatedDocs() throws Exception {

@@ -125,30 +125,30 @@ public class PrecisionTests extends ESTestCase {
            hits[i] = new SearchHit(i, i + "", new Text("type"), Collections.emptyMap());
            hits[i].shard(new SearchShardTarget("testnode", new Index("index", "uuid"), 0, null));
        }
        EvalQueryQuality evaluated = (new Precision()).evaluate("id", hits, Collections.emptyList());
        EvalQueryQuality evaluated = (new PrecisionAtK()).evaluate("id", hits, Collections.emptyList());
        assertEquals(0.0d, evaluated.getQualityLevel(), 0.00001);
        assertEquals(0, ((Precision.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(5, ((Precision.Breakdown) evaluated.getMetricDetails()).getRetrieved());
        assertEquals(0, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(5, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRetrieved());

        // also try with setting `ignore_unlabeled`
        Precision prec = new Precision();
        PrecisionAtK prec = new PrecisionAtK();
        prec.setIgnoreUnlabeled(true);
        evaluated = prec.evaluate("id", hits, Collections.emptyList());
        assertEquals(0.0d, evaluated.getQualityLevel(), 0.00001);
        assertEquals(0, ((Precision.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(0, ((Precision.Breakdown) evaluated.getMetricDetails()).getRetrieved());
        assertEquals(0, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRelevantRetrieved());
        assertEquals(0, ((PrecisionAtK.Breakdown) evaluated.getMetricDetails()).getRetrieved());
    }

    public void testParseFromXContent() throws IOException {
        String xContent = " {\n" + " \"relevant_rating_threshold\" : 2" + "}";
        try (XContentParser parser = createParser(JsonXContent.jsonXContent, xContent)) {
            Precision precicionAt = Precision.fromXContent(parser);
            PrecisionAtK precicionAt = PrecisionAtK.fromXContent(parser);
            assertEquals(2, precicionAt.getRelevantRatingThreshold());
        }
    }

    public void testCombine() {
        Precision metric = new Precision();
        PrecisionAtK metric = new PrecisionAtK();
        Vector<EvalQueryQuality> partialResults = new Vector<>(3);
        partialResults.add(new EvalQueryQuality("a", 0.1));
        partialResults.add(new EvalQueryQuality("b", 0.2));

@@ -157,12 +157,12 @@ public class PrecisionTests extends ESTestCase {
    }

    public void testInvalidRelevantThreshold() {
        Precision prez = new Precision();
        PrecisionAtK prez = new PrecisionAtK();
        expectThrows(IllegalArgumentException.class, () -> prez.setRelevantRatingThreshhold(-1));
    }

    public static Precision createTestItem() {
        Precision precision = new Precision();
    public static PrecisionAtK createTestItem() {
        PrecisionAtK precision = new PrecisionAtK();
        if (randomBoolean()) {
            precision.setRelevantRatingThreshhold(randomIntBetween(0, 10));
        }

@@ -171,13 +171,13 @@ public class PrecisionTests extends ESTestCase {
    }

    public void testXContentRoundtrip() throws IOException {
        Precision testItem = createTestItem();
        PrecisionAtK testItem = createTestItem();
        XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));
        XContentBuilder shuffled = shuffleXContent(testItem.toXContent(builder, ToXContent.EMPTY_PARAMS));
        try (XContentParser itemParser = createParser(shuffled)) {
            itemParser.nextToken();
            itemParser.nextToken();
            Precision parsedItem = Precision.fromXContent(itemParser);
            PrecisionAtK parsedItem = PrecisionAtK.fromXContent(itemParser);
            assertNotSame(testItem, parsedItem);
            assertEquals(testItem, parsedItem);
            assertEquals(testItem.hashCode(), parsedItem.hashCode());

@@ -185,22 +185,22 @@ public class PrecisionTests extends ESTestCase {
    }

    public void testSerialization() throws IOException {
        Precision original = createTestItem();
        Precision deserialized = RankEvalTestHelper.copy(original, Precision::new);
        PrecisionAtK original = createTestItem();
        PrecisionAtK deserialized = RankEvalTestHelper.copy(original, PrecisionAtK::new);
        assertEquals(deserialized, original);
        assertEquals(deserialized.hashCode(), original.hashCode());
        assertNotSame(deserialized, original);
    }

    public void testEqualsAndHash() throws IOException {
        Precision testItem = createTestItem();
        RankEvalTestHelper.testHashCodeAndEquals(testItem, mutateTestItem(testItem), RankEvalTestHelper.copy(testItem, Precision::new));
        PrecisionAtK testItem = createTestItem();
        RankEvalTestHelper.testHashCodeAndEquals(testItem, mutateTestItem(testItem), RankEvalTestHelper.copy(testItem, PrecisionAtK::new));
    }

    private static Precision mutateTestItem(Precision original) {
    private static PrecisionAtK mutateTestItem(PrecisionAtK original) {
        boolean ignoreUnlabeled = original.getIgnoreUnlabeled();
        int relevantThreshold = original.getRelevantRatingThreshold();
        Precision precision = new Precision();
        PrecisionAtK precision = new PrecisionAtK();
        precision.setIgnoreUnlabeled(ignoreUnlabeled);
        precision.setRelevantRatingThreshhold(relevantThreshold);

@@ -82,7 +82,7 @@ public class RankEvalRequestIT extends ESIntegTestCase {

        specifications.add(berlinRequest);

        Precision metric = new Precision();
        PrecisionAtK metric = new PrecisionAtK();
        metric.setIgnoreUnlabeled(true);
        RankEvalSpec task = new RankEvalSpec(specifications, metric);

@@ -148,7 +148,7 @@ public class RankEvalRequestIT extends ESIntegTestCase {
        brokenRequest.setIndices(indices);
        specifications.add(brokenRequest);

        RankEvalSpec task = new RankEvalSpec(specifications, new Precision());
        RankEvalSpec task = new RankEvalSpec(specifications, new PrecisionAtK());

        RankEvalRequestBuilder builder = new RankEvalRequestBuilder(client(),
                RankEvalAction.INSTANCE, new RankEvalRequest());
@@ -114,10 +114,11 @@ public class RankEvalSpecTests extends ESTestCase {

        List<NamedWriteableRegistry.Entry> namedWriteables = new ArrayList<>();
        namedWriteables.add(new NamedWriteableRegistry.Entry(QueryBuilder.class, MatchAllQueryBuilder.NAME, MatchAllQueryBuilder::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, Precision.NAME, Precision::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, PrecisionAtK.NAME, PrecisionAtK::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, DiscountedCumulativeGain.NAME,
                DiscountedCumulativeGain::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, ReciprocalRank.NAME, ReciprocalRank::new));
        namedWriteables
                .add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, MeanReciprocalRank.NAME, MeanReciprocalRank::new));

        RankEvalSpec deserialized = RankEvalTestHelper.copy(original, RankEvalSpec::new, new NamedWriteableRegistry(namedWriteables));
        assertEquals(deserialized, original);

@@ -130,10 +131,11 @@ public class RankEvalSpecTests extends ESTestCase {

        List<NamedWriteableRegistry.Entry> namedWriteables = new ArrayList<>();
        namedWriteables.add(new NamedWriteableRegistry.Entry(QueryBuilder.class, MatchAllQueryBuilder.NAME, MatchAllQueryBuilder::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, Precision.NAME, Precision::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, PrecisionAtK.NAME, PrecisionAtK::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, DiscountedCumulativeGain.NAME,
                DiscountedCumulativeGain::new));
        namedWriteables.add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, ReciprocalRank.NAME, ReciprocalRank::new));
        namedWriteables
                .add(new NamedWriteableRegistry.Entry(RankedListQualityMetric.class, MeanReciprocalRank.NAME, MeanReciprocalRank::new));

        RankEvalSpec mutant = RankEvalTestHelper.copy(testItem, RankEvalSpec::new, new NamedWriteableRegistry(namedWriteables));
        RankEvalTestHelper.testHashCodeAndEquals(testItem, mutateTestItem(mutant),

@@ -152,10 +154,10 @@ public class RankEvalSpecTests extends ESTestCase {
            ratedRequests.add(request);
            break;
        case 1:
            if (metric instanceof Precision) {
            if (metric instanceof PrecisionAtK) {
                metric = new DiscountedCumulativeGain();
            } else {
                metric = new Precision();
                metric = new PrecisionAtK();
            }
            break;
        case 2:

@@ -175,7 +177,7 @@ public class RankEvalSpecTests extends ESTestCase {
    }

    public void testMissingRatedRequestsFailsParsing() {
        RankedListQualityMetric metric = new Precision();
        RankedListQualityMetric metric = new PrecisionAtK();
        expectThrows(IllegalStateException.class, () -> new RankEvalSpec(new ArrayList<>(), metric));
        expectThrows(IllegalStateException.class, () -> new RankEvalSpec(null, metric));
    }

@@ -194,6 +196,6 @@ public class RankEvalSpecTests extends ESTestCase {
        RatedRequest request = new RatedRequest("id", ratedDocs, params, "templateId");
        List<RatedRequest> ratedRequests = Arrays.asList(request);

        expectThrows(IllegalStateException.class, () -> new RankEvalSpec(ratedRequests, new Precision()));
        expectThrows(IllegalStateException.class, () -> new RankEvalSpec(ratedRequests, new PrecisionAtK()));
    }
}

@@ -41,7 +41,7 @@ import java.util.Vector;
public class ReciprocalRankTests extends ESTestCase {

    public void testMaxAcceptableRank() {
        ReciprocalRank reciprocalRank = new ReciprocalRank();
        MeanReciprocalRank reciprocalRank = new MeanReciprocalRank();

        int searchHits = randomIntBetween(1, 50);

@@ -59,17 +59,17 @@ public class ReciprocalRankTests extends ESTestCase {
        int rankAtFirstRelevant = relevantAt + 1;
        EvalQueryQuality evaluation = reciprocalRank.evaluate("id", hits, ratedDocs);
        assertEquals(1.0 / rankAtFirstRelevant, evaluation.getQualityLevel(), Double.MIN_VALUE);
        assertEquals(rankAtFirstRelevant, ((ReciprocalRank.Breakdown) evaluation.getMetricDetails()).getFirstRelevantRank());
        assertEquals(rankAtFirstRelevant, ((MeanReciprocalRank.Breakdown) evaluation.getMetricDetails()).getFirstRelevantRank());

        // check that if we have fewer search hits than relevant doc position,
        // we don't find any result and get 0.0 quality level
        reciprocalRank = new ReciprocalRank();
        reciprocalRank = new MeanReciprocalRank();
        evaluation = reciprocalRank.evaluate("id", Arrays.copyOfRange(hits, 0, relevantAt), ratedDocs);
        assertEquals(0.0, evaluation.getQualityLevel(), Double.MIN_VALUE);
    }

    public void testEvaluationOneRelevantInResults() {
        ReciprocalRank reciprocalRank = new ReciprocalRank();
        MeanReciprocalRank reciprocalRank = new MeanReciprocalRank();
        SearchHit[] hits = createSearchHits(0, 9, "test");
        List<RatedDocument> ratedDocs = new ArrayList<>();
        // mark one of the ten docs relevant

@@ -84,7 +84,7 @@ public class ReciprocalRankTests extends ESTestCase {

        EvalQueryQuality evaluation = reciprocalRank.evaluate("id", hits, ratedDocs);
        assertEquals(1.0 / (relevantAt + 1), evaluation.getQualityLevel(), Double.MIN_VALUE);
        assertEquals(relevantAt + 1, ((ReciprocalRank.Breakdown) evaluation.getMetricDetails()).getFirstRelevantRank());
        assertEquals(relevantAt + 1, ((MeanReciprocalRank.Breakdown) evaluation.getMetricDetails()).getFirstRelevantRank());
    }

    /**

@@ -101,15 +101,15 @@ public class ReciprocalRankTests extends ESTestCase {
        rated.add(new RatedDocument("test", "4", 4));
        SearchHit[] hits = createSearchHits(0, 5, "test");

        ReciprocalRank reciprocalRank = new ReciprocalRank();
        MeanReciprocalRank reciprocalRank = new MeanReciprocalRank();
        reciprocalRank.setRelevantRatingThreshhold(2);
        EvalQueryQuality evaluation = reciprocalRank.evaluate("id", hits, rated);
        assertEquals((double) 1 / 3, evaluation.getQualityLevel(), 0.00001);
        assertEquals(3, ((ReciprocalRank.Breakdown) evaluation.getMetricDetails()).getFirstRelevantRank());
        assertEquals(3, ((MeanReciprocalRank.Breakdown) evaluation.getMetricDetails()).getFirstRelevantRank());
    }

    public void testCombine() {
        ReciprocalRank reciprocalRank = new ReciprocalRank();
        MeanReciprocalRank reciprocalRank = new MeanReciprocalRank();
        Vector<EvalQueryQuality> partialResults = new Vector<>(3);
        partialResults.add(new EvalQueryQuality("id1", 0.5));
        partialResults.add(new EvalQueryQuality("id2", 1.0));

@@ -118,7 +118,7 @@ public class ReciprocalRankTests extends ESTestCase {
    }

    public void testEvaluationNoRelevantInResults() {
        ReciprocalRank reciprocalRank = new ReciprocalRank();
        MeanReciprocalRank reciprocalRank = new MeanReciprocalRank();
        SearchHit[] hits = createSearchHits(0, 9, "test");
        List<RatedDocument> ratedDocs = new ArrayList<>();
        EvalQueryQuality evaluation = reciprocalRank.evaluate("id", hits, ratedDocs);

@@ -126,13 +126,13 @@ public class ReciprocalRankTests extends ESTestCase {
    }

    public void testXContentRoundtrip() throws IOException {
        ReciprocalRank testItem = createTestItem();
        MeanReciprocalRank testItem = createTestItem();
        XContentBuilder builder = XContentFactory.contentBuilder(randomFrom(XContentType.values()));
        XContentBuilder shuffled = shuffleXContent(testItem.toXContent(builder, ToXContent.EMPTY_PARAMS));
        try (XContentParser itemParser = createParser(shuffled)) {
            itemParser.nextToken();
            itemParser.nextToken();
            ReciprocalRank parsedItem = ReciprocalRank.fromXContent(itemParser);
            MeanReciprocalRank parsedItem = MeanReciprocalRank.fromXContent(itemParser);
            assertNotSame(testItem, parsedItem);
            assertEquals(testItem, parsedItem);
            assertEquals(testItem.hashCode(), parsedItem.hashCode());

@@ -152,36 +152,36 @@ public class ReciprocalRankTests extends ESTestCase {
        return hits;
    }

    private static ReciprocalRank createTestItem() {
        ReciprocalRank testItem = new ReciprocalRank();
    private static MeanReciprocalRank createTestItem() {
        MeanReciprocalRank testItem = new MeanReciprocalRank();
        testItem.setRelevantRatingThreshhold(randomIntBetween(0, 20));
        return testItem;
    }

    public void testSerialization() throws IOException {
        ReciprocalRank original = createTestItem();
        MeanReciprocalRank original = createTestItem();

        ReciprocalRank deserialized = RankEvalTestHelper.copy(original, ReciprocalRank::new);
        MeanReciprocalRank deserialized = RankEvalTestHelper.copy(original, MeanReciprocalRank::new);
        assertEquals(deserialized, original);
        assertEquals(deserialized.hashCode(), original.hashCode());
        assertNotSame(deserialized, original);
    }

    public void testEqualsAndHash() throws IOException {
        ReciprocalRank testItem = createTestItem();
        MeanReciprocalRank testItem = createTestItem();
        RankEvalTestHelper.testHashCodeAndEquals(testItem, mutateTestItem(testItem),
                RankEvalTestHelper.copy(testItem, ReciprocalRank::new));
                RankEvalTestHelper.copy(testItem, MeanReciprocalRank::new));
    }

    private static ReciprocalRank mutateTestItem(ReciprocalRank testItem) {
    private static MeanReciprocalRank mutateTestItem(MeanReciprocalRank testItem) {
        int relevantThreshold = testItem.getRelevantRatingThreshold();
        ReciprocalRank rank = new ReciprocalRank();
        MeanReciprocalRank rank = new MeanReciprocalRank();
        rank.setRelevantRatingThreshhold(randomValueOtherThan(relevantThreshold, () -> randomIntBetween(0, 10)));
        return rank;
    }

    public void testInvalidRelevantThreshold() {
        ReciprocalRank prez = new ReciprocalRank();
        MeanReciprocalRank prez = new MeanReciprocalRank();
        expectThrows(IllegalArgumentException.class, () -> prez.setRelevantRatingThreshhold(-1));
    }
}

@@ -82,7 +82,7 @@
    - is_false: rank_eval.details.berlin_query.hits.1.rating

---
"Reciprocal Rank":
"Mean Reciprocal Rank":

    - do:
        indices.create:

@@ -139,7 +139,7 @@
                        "ratings": [{"_index": "foo", "_id": "doc4", "rating": 1}]
                    }
                ],
                "metric" : { "reciprocal_rank": {} }
                "metric" : { "mean_reciprocal_rank": {} }
            }

    # average is (1/3 + 1/2)/2 = 5/12 ~ 0.41666666666666663