LUCENE-1771: QueryWeight back to Weight, but as an abstract class rather than an interface - explain now takes a Searcher and passes the sub reader that contains the doc if the top level reader is a multi reader.

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@803339 13f79535-47bb-0310-9956-ffa450edef68
This commit is contained in:
Mark Robert Miller 2009-08-12 01:22:30 +00:00
parent 644a4b356f
commit 7fff5a7ea7
44 changed files with 420 additions and 617 deletions

View File

@ -53,16 +53,15 @@ Changes in backwards compatibility policy
which was unlikely to have been done, because there is no way to change
Lucene's FieldCache implementation. (Grant Ingersoll, Uwe Schindler)
3. LUCENE-1630: Deprecate Weight in favor of QueryWeight: added
matching methods to Searcher to take QueryWeight and deprecated
those taking Weight. If you have a Weight implementation, you can
turn it into a QueryWeight with QueryWeightWrapper (will be
removed in 3.0). All of the Weight-based methods were implemented
by calling the QueryWeight variants by wrapping the given Weight.
3. LUCENE-1630, LUCENE-1771: Weight, previously an interface, is now an abstract
class. Some of the method signatures have changed, but it should be fairly
easy to see what adjustments must be made to existing code to sync up
with the new API. You can find more detail in the API Changes section.
Going forward Searchable will be kept for convenience only and may
be changed between minor releases without any deprecation
process. It is not recommended to implement it, but rather extend
Searcher. (Shai Erera via Mike McCandless)
process. It is not recommended that you implement it, but rather extend
Searcher. (Shai Erera, Chris Hostetter, Mark Miller via Mike McCandless)
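Entry 3 above says Weight changed from an interface to an abstract class. A minimal self-contained sketch of why that matters (toy types, not the real Lucene classes; the method names echo the changelog but the signatures here are illustrative): an abstract class can ship concrete default behavior, so a release can add a method like scoresDocsOutOfOrder without breaking every existing subclass.

```java
import java.io.IOException;
import java.io.Serializable;

// Toy sketch of the post-LUCENE-1771 shape: Weight as an abstract class.
abstract class ToyWeight implements Serializable {
    public abstract float getValue();
    public abstract float sumOfSquaredWeights() throws IOException;
    public abstract void normalize(float norm);

    // Because Weight is now a class, a new method can carry a default
    // body; pre-Java-8 interfaces could not, which is one motivation
    // for abandoning the interface.
    public boolean scoresDocsOutOfOrder() { return false; }
}

class ConstantToyWeight extends ToyWeight {
    private float value = 1.0f;
    public float getValue() { return value; }
    public float sumOfSquaredWeights() { return value * value; }
    public void normalize(float norm) { value *= norm; }
}

public class WeightSketch {
    public static void main(String[] args) throws IOException {
        ToyWeight w = new ConstantToyWeight();
        // Subclasses inherit the conservative default automatically.
        System.out.println(w.scoresDocsOutOfOrder()); // prints "false"
    }
}
```

Existing implementations only need to change `implements Weight` to `extends Weight` and adjust the changed method signatures.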
4. LUCENE-1422, LUCENE-1693: The new TokenStream API (see below) using
Attributes has some backwards breaks in rare cases.
@ -296,23 +295,22 @@ API Changes
NumericRangeQuery and its new indexing format for numeric or
date values. (Uwe Schindler)
24. LUCENE-1630: Deprecate Weight in favor of QueryWeight, which adds
24. LUCENE-1630, LUCENE-1771: Weight is now an abstract class, and adds
a scorer(IndexReader, boolean /* scoreDocsInOrder */, boolean /*
topScorer */) method instead of scorer(IndexReader) (now
deprecated). The new method is used by IndexSearcher to mate
between Collector and Scorer orderness of doc IDs. Some Scorers
(like BooleanScorer) are much more efficient if out-of-order
documents scoring is allowed by a Collector. Collector must now
implement acceptsDocsOutOfOrder. If you write a Collector which
does not care about doc ID orderness, it is recommended that you
return true. QueryWeight has the scoresDocsOutOfOrder method,
which by default returns false. If you create a QueryWeight which
will score documents out of order if that's requested, you should
override that method to return true. Also deprecated
BooleanQuery's setAllowDocsOutOfOrder and getAllowDocsOutOfOrder
as they are not needed anymore. BooleanQuery will now score docs
out of order when used with a Collector that can accept docs out
of order. (Shai Erera via Mike McCandless)
topScorer */) method instead of scorer(IndexReader). IndexSearcher uses
this method to obtain a scorer matching the capabilities of the Collector
wrt orderness of docIDs. Some Scorers (like BooleanScorer) are much more
efficient if out-of-order documents scoring is allowed by a Collector.
Collector must now implement acceptsDocsOutOfOrder. If you write a
Collector which does not care about doc ID orderness, it is recommended
that you return true. Weight has a scoresDocsOutOfOrder method, which by
default returns false. If you create a Weight which will score documents
out of order if requested, you should override that method to return true.
BooleanQuery's setAllowDocsOutOfOrder and getAllowDocsOutOfOrder have been
deprecated as they are not needed anymore. BooleanQuery will now score docs
out of order when used with a Collector that can accept docs out of order.
Finally, Weight#explain now also takes a Searcher.
(Shai Erera, Chris Hostetter, Mark Miller via Mike McCandless)
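Entry 24 describes a handshake: the searcher asks the Collector whether it tolerates out-of-order doc IDs, and requests an in-order scorer from the Weight only when the Collector demands order. A self-contained toy model of that decision (the type and method names echo the changelog; this is not the real Lucene API):

```java
// Toy model of the Collector/Weight handshake described in entry 24.
interface ToyCollector {
    // Return true if doc IDs may be delivered out of order.
    boolean acceptsDocsOutOfOrder();
}

public class OrdernessSketch {
    // Mirrors what the changelog says IndexSearcher now does: ask the
    // Weight for an in-order scorer exactly when the Collector cannot
    // tolerate out-of-order doc IDs.
    static boolean scoreDocsInOrder(ToyCollector c) {
        return !c.acceptsDocsOutOfOrder();
    }

    public static void main(String[] args) {
        ToyCollector relaxed = () -> true;   // e.g. plain hit counting
        ToyCollector strict  = () -> false;  // e.g. order-sensitive collecting
        System.out.println(scoreDocsInOrder(relaxed)); // prints "false"
        System.out.println(scoreDocsInOrder(strict));  // prints "true"
    }
}
```

This is why a Collector that does not care about order should return true from acceptsDocsOutOfOrder: it lets scorers like BooleanScorer run in their faster out-of-order mode.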
25. LUCENE-1466: Changed Tokenizer.input to be a CharStream; added
CharFilter and MappingCharFilter, which allows chaining & mapping

View File

@ -48,15 +48,10 @@ public class RemoteSearchable
/** @deprecated use {@link #search(Weight, Filter, Collector)} instead. */
public void search(Weight weight, Filter filter, HitCollector results)
throws IOException {
search(new QueryWeightWrapper(weight), filter, new HitCollectorWrapper(results));
local.search(weight, filter, results);
}
public void search(Weight weight, Filter filter, Collector results)
throws IOException {
search(new QueryWeightWrapper(weight), filter, results);
}
public void search(QueryWeight weight, Filter filter, Collector results)
throws IOException {
local.search(weight, filter, results);
}
@ -79,19 +74,10 @@ public class RemoteSearchable
}
public TopDocs search(Weight weight, Filter filter, int n) throws IOException {
return search(new QueryWeightWrapper(weight), filter, n);
}
public TopDocs search(QueryWeight weight, Filter filter, int n) throws IOException {
return local.search(weight, filter, n);
}
public TopFieldDocs search(Weight weight, Filter filter, int n, Sort sort)
throws IOException {
return search(new QueryWeightWrapper(weight), filter, n, sort);
}
public TopFieldDocs search(QueryWeight weight, Filter filter, int n, Sort sort)
public TopFieldDocs search(Weight weight, Filter filter, int n, Sort sort)
throws IOException {
return local.search (weight, filter, n, sort);
}
@ -109,10 +95,6 @@ public class RemoteSearchable
}
public Explanation explain(Weight weight, int doc) throws IOException {
return explain(new QueryWeightWrapper(weight), doc);
}
public Explanation explain(QueryWeight weight, int doc) throws IOException {
return local.explain(weight, doc);
}

View File

@ -32,9 +32,9 @@ import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Similarity;
import org.apache.lucene.search.Weight;
import org.apache.lucene.store.AlreadyClosedException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.ArrayUtil;
@ -1005,7 +1005,7 @@ final class DocumentsWriter {
Entry entry = (Entry) iter.next();
Query query = (Query) entry.getKey();
int limit = ((Integer) entry.getValue()).intValue();
QueryWeight weight = query.queryWeight(searcher);
Weight weight = query.weight(searcher);
Scorer scorer = weight.scorer(reader, true, false);
if (scorer != null) {
while(true) {

View File

@ -179,11 +179,9 @@ public class BooleanQuery extends Query {
* <p>NOTE: this API and implementation is subject to
* change suddenly in the next release.</p>
*/
protected class BooleanWeight extends QueryWeight {
protected class BooleanWeight extends Weight {
/** The Similarity implementation. */
protected Similarity similarity;
/** The Weights for our subqueries, in 1-1 correspondence with clauses */
protected ArrayList weights;
public BooleanWeight(Searcher searcher)
@ -192,7 +190,7 @@ public class BooleanQuery extends Query {
weights = new ArrayList(clauses.size());
for (int i = 0 ; i < clauses.size(); i++) {
BooleanClause c = (BooleanClause)clauses.get(i);
weights.add(c.getQuery().createQueryWeight(searcher));
weights.add(c.getQuery().createWeight(searcher));
}
}
@ -203,7 +201,7 @@ public class BooleanQuery extends Query {
float sum = 0.0f;
for (int i = 0 ; i < weights.size(); i++) {
BooleanClause c = (BooleanClause)clauses.get(i);
QueryWeight w = (QueryWeight)weights.get(i);
Weight w = (Weight)weights.get(i);
// call sumOfSquaredWeights for all clauses in case of side effects
float s = w.sumOfSquaredWeights(); // sum sub weights
if (!c.isProhibited())
@ -220,13 +218,13 @@ public class BooleanQuery extends Query {
public void normalize(float norm) {
norm *= getBoost(); // incorporate boost
for (Iterator iter = weights.iterator(); iter.hasNext();) {
QueryWeight w = (QueryWeight) iter.next();
Weight w = (Weight) iter.next();
// normalize all clauses, (even if prohibited in case of side effects)
w.normalize(norm);
}
}
public Explanation explain(IndexReader reader, int doc)
public Explanation explain(Searcher searcher, IndexReader reader, int doc)
throws IOException {
final int minShouldMatch =
BooleanQuery.this.getMinimumNumberShouldMatch();
@ -238,12 +236,12 @@ public class BooleanQuery extends Query {
boolean fail = false;
int shouldMatchCount = 0;
for (Iterator wIter = weights.iterator(), cIter = clauses.iterator(); wIter.hasNext();) {
QueryWeight w = (QueryWeight) wIter.next();
Weight w = (Weight) wIter.next();
BooleanClause c = (BooleanClause) cIter.next();
if (w.scorer(reader, true, true) == null) {
continue;
}
Explanation e = w.explain(reader, doc);
Explanation e = w.explain(searcher, reader, doc);
if (!c.isProhibited()) maxCoord++;
if (e.isMatch()) {
if (!c.isProhibited()) {
@ -303,7 +301,7 @@ public class BooleanQuery extends Query {
List prohibited = new ArrayList();
List optional = new ArrayList();
for (Iterator wIter = weights.iterator(), cIter = clauses.iterator(); wIter.hasNext();) {
QueryWeight w = (QueryWeight) wIter.next();
Weight w = (Weight) wIter.next();
BooleanClause c = (BooleanClause) cIter.next();
Scorer subScorer = w.scorer(reader, true, false);
if (subScorer == null) {
@ -364,7 +362,7 @@ public class BooleanQuery extends Query {
* Whether hit docs may be collected out of docid order.
*
* @deprecated this will not be needed anymore, as
* {@link QueryWeight#scoresDocsOutOfOrder()} is used.
* {@link Weight#scoresDocsOutOfOrder()} is used.
*/
private static boolean allowDocsOutOfOrder = true;
@ -391,7 +389,7 @@ public class BooleanQuery extends Query {
* </p>
*
* @deprecated this is not needed anymore, as
* {@link QueryWeight#scoresDocsOutOfOrder()} is used.
* {@link Weight#scoresDocsOutOfOrder()} is used.
*/
public static void setAllowDocsOutOfOrder(boolean allow) {
allowDocsOutOfOrder = allow;
@ -402,7 +400,7 @@ public class BooleanQuery extends Query {
*
* @see #setAllowDocsOutOfOrder(boolean)
* @deprecated this is not needed anymore, as
* {@link QueryWeight#scoresDocsOutOfOrder()} is used.
* {@link Weight#scoresDocsOutOfOrder()} is used.
*/
public static boolean getAllowDocsOutOfOrder() {
return allowDocsOutOfOrder;
@ -422,7 +420,7 @@ public class BooleanQuery extends Query {
return getAllowDocsOutOfOrder();
}
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
return new BooleanWeight(searcher);
}

View File

@ -50,7 +50,7 @@ public class ConstantScoreQuery extends Query {
// but may not be OK for highlighting
}
protected class ConstantWeight extends QueryWeight {
protected class ConstantWeight extends Weight {
private Similarity similarity;
private float queryNorm;
private float queryWeight;
@ -81,9 +81,9 @@ public class ConstantScoreQuery extends Query {
return new ConstantScorer(similarity, reader, this);
}
public Explanation explain(IndexReader reader, int doc) throws IOException {
ConstantScorer cs = (ConstantScorer) scorer(reader, true, false);
public Explanation explain(Searcher searcher, IndexReader reader, int doc) throws IOException {
ConstantScorer cs = new ConstantScorer(similarity, reader, this);
boolean exists = cs.docIdSetIterator.advance(doc) == doc;
ComplexExplanation result = new ComplexExplanation();
@ -110,7 +110,7 @@ public class ConstantScoreQuery extends Query {
final float theScore;
int doc = -1;
public ConstantScorer(Similarity similarity, IndexReader reader, QueryWeight w) throws IOException {
public ConstantScorer(Similarity similarity, IndexReader reader, Weight w) throws IOException {
super(similarity);
theScore = w.getValue();
DocIdSet docIdSet = filter.getDocIdSet(reader);
@ -162,7 +162,7 @@ public class ConstantScoreQuery extends Query {
}
}
public QueryWeight createQueryWeight(Searcher searcher) {
public Weight createWeight(Searcher searcher) {
return new ConstantScoreQuery.ConstantWeight(searcher);
}

View File

@ -92,18 +92,18 @@ public class DisjunctionMaxQuery extends Query {
* <p>NOTE: this API and implementation is subject to
* change suddenly in the next release.</p>
*/
protected class DisjunctionMaxWeight extends QueryWeight {
protected class DisjunctionMaxWeight extends Weight {
/** The Similarity implementation. */
protected Similarity similarity;
/** The Weights for our subqueries, in 1-1 correspondence with disjuncts */
protected ArrayList weights = new ArrayList();
protected ArrayList weights = new ArrayList(); // The Weights for our subqueries, in 1-1 correspondence with disjuncts
/* Construct the Weight for this Query searched by searcher. Recursively construct subquery weights. */
public DisjunctionMaxWeight(Searcher searcher) throws IOException {
this.similarity = searcher.getSimilarity();
for (Iterator iter = disjuncts.iterator(); iter.hasNext();) {
weights.add(((Query) iter.next()).createQueryWeight(searcher));
weights.add(((Query) iter.next()).createWeight(searcher));
}
}
@ -117,7 +117,7 @@ public class DisjunctionMaxQuery extends Query {
public float sumOfSquaredWeights() throws IOException {
float max = 0.0f, sum = 0.0f;
for (Iterator iter = weights.iterator(); iter.hasNext();) {
float sub = ((QueryWeight) iter.next()).sumOfSquaredWeights();
float sub = ((Weight) iter.next()).sumOfSquaredWeights();
sum += sub;
max = Math.max(max, sub);
@ -130,7 +130,7 @@ public class DisjunctionMaxQuery extends Query {
public void normalize(float norm) {
norm *= getBoost(); // Incorporate our boost
for (Iterator iter = weights.iterator(); iter.hasNext();) {
((QueryWeight) iter.next()).normalize(norm);
((Weight) iter.next()).normalize(norm);
}
}
@ -140,7 +140,7 @@ public class DisjunctionMaxQuery extends Query {
Scorer[] scorers = new Scorer[weights.size()];
int idx = 0;
for (Iterator iter = weights.iterator(); iter.hasNext();) {
QueryWeight w = (QueryWeight) iter.next();
Weight w = (Weight) iter.next();
Scorer subScorer = w.scorer(reader, true, false);
if (subScorer != null && subScorer.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
scorers[idx++] = subScorer;
@ -152,13 +152,13 @@ public class DisjunctionMaxQuery extends Query {
}
/* Explain the score we computed for doc */
public Explanation explain(IndexReader reader, int doc) throws IOException {
if (disjuncts.size() == 1) return ((QueryWeight) weights.get(0)).explain(reader,doc);
public Explanation explain(Searcher searcher, IndexReader reader, int doc) throws IOException {
if (disjuncts.size() == 1) return ((Weight) weights.get(0)).explain(searcher, reader,doc);
ComplexExplanation result = new ComplexExplanation();
float max = 0.0f, sum = 0.0f;
result.setDescription(tieBreakerMultiplier == 0.0f ? "max of:" : "max plus " + tieBreakerMultiplier + " times others of:");
for (Iterator iter = weights.iterator(); iter.hasNext();) {
Explanation e = ((QueryWeight) iter.next()).explain(reader, doc);
Explanation e = ((Weight) iter.next()).explain(searcher, reader, doc);
if (e.isMatch()) {
result.setMatch(Boolean.TRUE);
result.addDetail(e);
@ -172,8 +172,8 @@ public class DisjunctionMaxQuery extends Query {
} // end of DisjunctionMaxWeight inner class
/* Create the QueryWeight used to score us */
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
/* Create the Weight used to score us */
public Weight createWeight(Searcher searcher) throws IOException {
return new DisjunctionMaxWeight(searcher);
}

View File

@ -22,7 +22,7 @@ import org.apache.lucene.index.*;
final class ExactPhraseScorer extends PhraseScorer {
ExactPhraseScorer(QueryWeight weight, TermPositions[] tps, int[] offsets,
ExactPhraseScorer(Weight weight, TermPositions[] tps, int[] offsets,
Similarity similarity, byte[] norms) {
super(weight, tps, offsets, similarity, norms);
}

View File

@ -58,10 +58,10 @@ extends Query {
* Returns a Weight that applies the filter to the enclosed query's Weight.
* This is accomplished by overriding the Scorer returned by the Weight.
*/
public QueryWeight createQueryWeight(final Searcher searcher) throws IOException {
final QueryWeight weight = query.createQueryWeight (searcher);
public Weight createWeight(final Searcher searcher) throws IOException {
final Weight weight = query.createWeight (searcher);
final Similarity similarity = query.getSimilarity(searcher);
return new QueryWeight() {
return new Weight() {
private float value;
// pass these methods through to enclosed query's weight
@ -73,8 +73,8 @@ extends Query {
weight.normalize(v);
value = weight.getValue() * getBoost();
}
public Explanation explain (IndexReader ir, int i) throws IOException {
Explanation inner = weight.explain (ir, i);
public Explanation explain (Searcher searcher, IndexReader ir, int i) throws IOException {
Explanation inner = weight.explain (searcher, ir, i);
if (getBoost()!=1) {
Explanation preBoost = inner;
inner = new Explanation(inner.getValue()*getBoost(),"product of:");

View File

@ -53,7 +53,7 @@ import org.apache.lucene.index.CorruptIndexException;
* </pre>
*/
public final class Hits {
private QueryWeight weight;
private Weight weight;
private Searcher searcher;
private Filter filter = null;
private Sort sort = null;
@ -73,7 +73,7 @@ public final class Hits {
boolean debugCheckedForDeletions = false; // for test purposes.
Hits(Searcher s, Query q, Filter f) throws IOException {
weight = q.queryWeight(s);
weight = q.weight(s);
searcher = s;
filter = f;
nDeletions = countDeletions(s);
@ -82,7 +82,7 @@ public final class Hits {
}
Hits(Searcher s, Query q, Filter f, Sort o) throws IOException {
weight = q.queryWeight(s);
weight = q.weight(s);
searcher = s;
filter = f;
sort = o;

View File

@ -27,6 +27,7 @@ import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.ReaderUtil;
/** Implements search over a single IndexReader.
*
@ -121,15 +122,7 @@ public class IndexSearcher extends Searcher {
}
protected void gatherSubReaders(List allSubReaders, IndexReader r) {
IndexReader[] subReaders = r.getSequentialSubReaders();
if (subReaders == null) {
// Add the reader itself, and do not recurse
allSubReaders.add(r);
} else {
for (int i = 0; i < subReaders.length; i++) {
gatherSubReaders(allSubReaders, subReaders[i]);
}
}
ReaderUtil.gatherSubReaders(allSubReaders, r);
}
/** Return the {@link IndexReader} this searches. */
@ -169,7 +162,7 @@ public class IndexSearcher extends Searcher {
}
// inherit javadoc
public TopDocs search(QueryWeight weight, Filter filter, final int nDocs) throws IOException {
public TopDocs search(Weight weight, Filter filter, final int nDocs) throws IOException {
if (nDocs <= 0) {
throw new IllegalArgumentException("nDocs must be > 0");
@ -180,22 +173,22 @@ public class IndexSearcher extends Searcher {
return collector.topDocs();
}
public TopFieldDocs search(QueryWeight weight, Filter filter,
public TopFieldDocs search(Weight weight, Filter filter,
final int nDocs, Sort sort) throws IOException {
return search(weight, filter, nDocs, sort, true);
}
/**
* Just like {@link #search(QueryWeight, Filter, int, Sort)}, but you choose
* Just like {@link #search(Weight, Filter, int, Sort)}, but you choose
* whether or not the fields in the returned {@link FieldDoc} instances should
* be set by specifying fillFields.<br>
* <b>NOTE:</b> currently, this method tracks document scores and sets them in
* the returned {@link FieldDoc}, however in 3.0 it will move to not track
* document scores. If document scores tracking is still needed, you can use
* {@link #search(QueryWeight, Filter, Collector)} and pass in a
* {@link #search(Weight, Filter, Collector)} and pass in a
* {@link TopFieldCollector} instance.
*/
public TopFieldDocs search(QueryWeight weight, Filter filter, final int nDocs,
public TopFieldDocs search(Weight weight, Filter filter, final int nDocs,
Sort sort, boolean fillFields)
throws IOException {
@ -242,7 +235,7 @@ public class IndexSearcher extends Searcher {
return (TopFieldDocs) collector.topDocs();
}
public void search(QueryWeight weight, Filter filter, Collector collector)
public void search(Weight weight, Filter filter, Collector collector)
throws IOException {
if (filter == null) {
@ -261,7 +254,7 @@ public class IndexSearcher extends Searcher {
}
}
private void searchWithFilter(IndexReader reader, QueryWeight weight,
private void searchWithFilter(IndexReader reader, Weight weight,
final Filter filter, final Collector collector) throws IOException {
assert filter != null;
@ -316,8 +309,11 @@ public class IndexSearcher extends Searcher {
return query;
}
public Explanation explain(QueryWeight weight, int doc) throws IOException {
return weight.explain(reader, doc);
public Explanation explain(Weight weight, int doc) throws IOException {
int n = ReaderUtil.subIndex(doc, docStarts);
int deBasedDoc = doc - docStarts[n];
return weight.explain(this, subReaders[n], deBasedDoc);
}
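The new IndexSearcher.explain above rebases a top-level doc ID onto the sub-reader that contains it, using ReaderUtil.subIndex over the doc-start offsets (the same binary search that MultiSearcher.subSearcher now delegates to). A self-contained sketch of that mapping, assuming docStarts holds each sub-reader's first top-level doc ID in strictly increasing order:

```java
import java.util.Arrays;

// Sketch of mapping a top-level doc ID to (sub-reader index, local doc),
// as the new IndexSearcher.explain does via ReaderUtil.subIndex.
// Assumption: docStarts[i] is the first top-level doc ID of sub-reader i.
public class SubIndexSketch {
    static int subIndex(int doc, int[] docStarts) {
        int pos = Arrays.binarySearch(docStarts, doc);
        // On a miss, binarySearch returns -(insertionPoint) - 1; the
        // containing sub-reader sits just before the insertion point.
        return pos >= 0 ? pos : -pos - 2;
    }

    public static void main(String[] args) {
        int[] docStarts = {0, 10, 25};      // three sub-readers
        int doc = 12;
        int n = subIndex(doc, docStarts);   // sub-reader 1
        int localDoc = doc - docStarts[n];  // rebased doc ID: 2
        System.out.println(n + " " + localDoc); // prints "1 2"
    }
}
```

With the sub-reader and rebased doc in hand, explain can be answered by the segment that actually holds the document, which is what makes explain work against a MultiReader.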
private boolean fieldSortDoTrackScores;

View File

@ -49,7 +49,7 @@ public class MatchAllDocsQuery extends Query {
final byte[] norms;
private int doc = -1;
MatchAllScorer(IndexReader reader, Similarity similarity, QueryWeight w,
MatchAllScorer(IndexReader reader, Similarity similarity, Weight w,
byte[] norms) throws IOException {
super(similarity);
this.termDocs = reader.termDocs(null);
@ -93,7 +93,7 @@ public class MatchAllDocsQuery extends Query {
}
}
private class MatchAllDocsWeight extends QueryWeight {
private class MatchAllDocsWeight extends Weight {
private Similarity similarity;
private float queryWeight;
private float queryNorm;
@ -129,7 +129,7 @@ public class MatchAllDocsQuery extends Query {
normsField != null ? reader.norms(normsField) : null);
}
public Explanation explain(IndexReader reader, int doc) {
public Explanation explain(Searcher searcher, IndexReader reader, int doc) {
// explain query weight
Explanation queryExpl = new ComplexExplanation
(true, getValue(), "MatchAllDocsQuery, product of:");
@ -142,7 +142,7 @@ public class MatchAllDocsQuery extends Query {
}
}
public QueryWeight createQueryWeight(Searcher searcher) {
public Weight createWeight(Searcher searcher) {
return new MatchAllDocsWeight(searcher);
}

View File

@ -123,7 +123,7 @@ public class MultiPhraseQuery extends Query {
}
private class MultiPhraseWeight extends QueryWeight {
private class MultiPhraseWeight extends Weight {
private Similarity similarity;
private float value;
private float idf;
@ -186,7 +186,7 @@ public class MultiPhraseQuery extends Query {
slop, reader.norms(field));
}
public Explanation explain(IndexReader reader, int doc)
public Explanation explain(Searcher searcher, IndexReader reader, int doc)
throws IOException {
ComplexExplanation result = new ComplexExplanation();
result.setDescription("weight("+getQuery()+" in "+doc+"), product of:");
@ -265,7 +265,7 @@ public class MultiPhraseQuery extends Query {
}
}
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
return new MultiPhraseWeight(searcher);
}

View File

@ -22,6 +22,7 @@ import org.apache.lucene.document.FieldSelector;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.util.ReaderUtil;
import java.io.IOException;
import java.util.HashMap;
@ -94,19 +95,19 @@ public class MultiSearcher extends Searcher {
throw new UnsupportedOperationException();
}
public Explanation explain(QueryWeight weight,int doc) {
public Explanation explain(Weight weight,int doc) {
throw new UnsupportedOperationException();
}
public void search(QueryWeight weight, Filter filter, Collector results) {
public void search(Weight weight, Filter filter, Collector results) {
throw new UnsupportedOperationException();
}
public TopDocs search(QueryWeight weight,Filter filter,int n) {
public TopDocs search(Weight weight,Filter filter,int n) {
throw new UnsupportedOperationException();
}
public TopFieldDocs search(QueryWeight weight,Filter filter,int n,Sort sort) {
public TopFieldDocs search(Weight weight,Filter filter,int n,Sort sort) {
throw new UnsupportedOperationException();
}
}
@ -164,25 +165,7 @@ public class MultiSearcher extends Searcher {
/** Returns index of the searcher for document <code>n</code> in the array
* used to construct this searcher. */
public int subSearcher(int n) { // find searcher for doc n:
// replace w/ call to Arrays.binarySearch in Java 1.2
int lo = 0; // search starts array
int hi = searchables.length - 1; // for first element less
// than n, return its index
while (hi >= lo) {
int mid = (lo + hi) >>> 1;
int midValue = starts[mid];
if (n < midValue)
hi = mid - 1;
else if (n > midValue)
lo = mid + 1;
else { // found a match
while (mid+1 < searchables.length && starts[mid+1] == midValue) {
mid++; // scan to last match
}
return mid;
}
}
return hi;
return ReaderUtil.subIndex(n, starts);
}
/** Returns the document number of document <code>n</code> within its
@ -195,7 +178,7 @@ public class MultiSearcher extends Searcher {
return maxDoc;
}
public TopDocs search(QueryWeight weight, Filter filter, int nDocs)
public TopDocs search(Weight weight, Filter filter, int nDocs)
throws IOException {
HitQueue hq = new HitQueue(nDocs, false);
@ -222,7 +205,7 @@ public class MultiSearcher extends Searcher {
return new TopDocs(totalHits, scoreDocs, maxScore);
}
public TopFieldDocs search (QueryWeight weight, Filter filter, int n, Sort sort)
public TopFieldDocs search (Weight weight, Filter filter, int n, Sort sort)
throws IOException {
FieldDocSortedHitQueue hq = null;
int totalHits = 0;
@ -264,7 +247,7 @@ public class MultiSearcher extends Searcher {
}
// inherit javadoc
public void search(QueryWeight weight, Filter filter, final Collector collector)
public void search(Weight weight, Filter filter, final Collector collector)
throws IOException {
for (int i = 0; i < searchables.length; i++) {
@ -297,7 +280,7 @@ public class MultiSearcher extends Searcher {
return queries[0].combine(queries);
}
public Explanation explain(QueryWeight weight, int doc) throws IOException {
public Explanation explain(Weight weight, int doc) throws IOException {
int i = subSearcher(doc); // find searcher index
return searchables[i].explain(weight, doc - starts[i]); // dispatch to searcher
}
@ -317,7 +300,7 @@ public class MultiSearcher extends Searcher {
*
* @return rewritten queries
*/
protected QueryWeight createQueryWeight(Query original) throws IOException {
protected Weight createWeight(Query original) throws IOException {
// step 1
Query rewrittenQuery = rewrite(original);
@ -345,7 +328,7 @@ public class MultiSearcher extends Searcher {
int numDocs = maxDoc();
CachedDfSource cacheSim = new CachedDfSource(dfMap, numDocs, getSimilarity());
return rewrittenQuery.queryWeight(cacheSim);
return rewrittenQuery.weight(cacheSim);
}
}

View File

@ -52,7 +52,7 @@ public class ParallelMultiSearcher extends MultiSearcher {
* Searchable, waits for each search to complete and merge
* the results back together.
*/
public TopDocs search(QueryWeight weight, Filter filter, int nDocs)
public TopDocs search(Weight weight, Filter filter, int nDocs)
throws IOException {
HitQueue hq = new HitQueue(nDocs, false);
int totalHits = 0;
@ -97,7 +97,7 @@ public class ParallelMultiSearcher extends MultiSearcher {
* Searchable, waits for each search to complete and merges
* the results back together.
*/
public TopFieldDocs search(QueryWeight weight, Filter filter, int nDocs, Sort sort)
public TopFieldDocs search(Weight weight, Filter filter, int nDocs, Sort sort)
throws IOException {
// don't specify the fields - we'll wait to do this until we get results
FieldDocSortedHitQueue hq = new FieldDocSortedHitQueue (null, nDocs);
@ -153,7 +153,7 @@ public class ParallelMultiSearcher extends MultiSearcher {
*
* @todo parallelize this one too
*/
public void search(QueryWeight weight, Filter filter, final Collector collector)
public void search(Weight weight, Filter filter, final Collector collector)
throws IOException {
for (int i = 0; i < searchables.length; i++) {
@ -194,7 +194,7 @@ public class ParallelMultiSearcher extends MultiSearcher {
class MultiSearcherThread extends Thread {
private Searchable searchable;
private QueryWeight weight;
private Weight weight;
private Filter filter;
private int nDocs;
private TopDocs docs;
@ -204,7 +204,7 @@ class MultiSearcherThread extends Thread {
private IOException ioe;
private Sort sort;
public MultiSearcherThread(Searchable searchable, QueryWeight weight, Filter filter,
public MultiSearcherThread(Searchable searchable, Weight weight, Filter filter,
int nDocs, HitQueue hq, int i, int[] starts, String name) {
super(name);
this.searchable = searchable;
@ -216,7 +216,7 @@ class MultiSearcherThread extends Thread {
this.starts = starts;
}
public MultiSearcherThread(Searchable searchable, QueryWeight weight,
public MultiSearcherThread(Searchable searchable, Weight weight,
Filter filter, int nDocs, FieldDocSortedHitQueue hq, Sort sort, int i,
int[] starts, String name) {
super(name);

View File

@ -106,7 +106,7 @@ public class PhraseQuery extends Query {
return result;
}
private class PhraseWeight extends QueryWeight {
private class PhraseWeight extends Weight {
private Similarity similarity;
private float value;
private float idf;
@ -158,7 +158,7 @@ public class PhraseQuery extends Query {
}
public Explanation explain(IndexReader reader, int doc)
public Explanation explain(Searcher searcher, IndexReader reader, int doc)
throws IOException {
Explanation result = new Explanation();
@ -241,12 +241,12 @@ public class PhraseQuery extends Query {
}
}
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
if (terms.size() == 1) { // optimize one-term case
Term term = (Term)terms.get(0);
Query termQuery = new TermQuery(term);
termQuery.setBoost(getBoost());
return termQuery.createQueryWeight(searcher);
return termQuery.createWeight(searcher);
}
return new PhraseWeight(searcher);
}

View File

@ -32,7 +32,7 @@ import org.apache.lucene.index.TermPositions;
* means a match.
*/
abstract class PhraseScorer extends Scorer {
private QueryWeight weight;
private Weight weight;
protected byte[] norms;
protected float value;
@ -43,7 +43,7 @@ abstract class PhraseScorer extends Scorer {
private float freq; //phrase frequency in current doc as computed by phraseFreq().
PhraseScorer(QueryWeight weight, TermPositions[] tps, int[] offsets,
PhraseScorer(Weight weight, TermPositions[] tps, int[] offsets,
Similarity similarity, byte[] norms) {
super(similarity);
this.norms = norms;

View File

@ -86,45 +86,17 @@ public abstract class Query implements java.io.Serializable, Cloneable {
*
* <p>
* Only implemented by primitive queries, which re-write to themselves.
* @deprecated use {@link #createQueryWeight(Searcher)} instead.
*/
protected Weight createWeight(Searcher searcher) throws IOException {
throw new UnsupportedOperationException();
}
/**
* Expert: Constructs an appropriate {@link QueryWeight} implementation for
* this query.
* <p>
* Only implemented by primitive queries, which re-write to themselves.
* <p>
* <b>NOTE:</b> in 3.0 this method will throw
* {@link UnsupportedOperationException}. It is implemented now by calling
* {@link #createWeight(Searcher)} for backwards compatibility, for
* {@link Query} implementations that did not override it yet (but did
* override {@link #createWeight(Searcher)}).
*/
// TODO (3.0): change to throw UnsupportedOperationException.
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
return new QueryWeightWrapper(createWeight(searcher));
}
/**
* Expert: Constructs and initializes a Weight for a top-level query.
*
* @deprecated use {@link #queryWeight(Searcher)} instead.
*/
public Weight weight(Searcher searcher) throws IOException {
return queryWeight(searcher);
}
/**
* Expert: Constructs and initializes a {@link QueryWeight} for a top-level
* query.
*/
public QueryWeight queryWeight(Searcher searcher) throws IOException {
Query query = searcher.rewrite(this);
QueryWeight weight = query.createQueryWeight(searcher);
Weight weight = query.createWeight(searcher);
float sum = weight.sumOfSquaredWeights();
float norm = getSimilarity(searcher).queryNorm(sum);
weight.normalize(norm);


@ -1,122 +0,0 @@
package org.apache.lucene.search;
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.io.IOException;
import java.io.Serializable;
import org.apache.lucene.index.IndexReader;
/**
* Expert: Calculate query weights and build query scorers.
* <p>
* The purpose of {@link QueryWeight} is to ensure searching does not
* modify a {@link Query}, so that a {@link Query} instance can be reused. <br>
* {@link Searcher} dependent state of the query should reside in the
* {@link QueryWeight}. <br>
* {@link IndexReader} dependent state should reside in the {@link Scorer}.
* <p>
* A <code>QueryWeight</code> is used in the following way:
* <ol>
* <li>A <code>QueryWeight</code> is constructed by a top-level query, given a
* <code>Searcher</code> ({@link Query#createWeight(Searcher)}).
* <li>The {@link #sumOfSquaredWeights()} method is called on the
* <code>QueryWeight</code> to compute the query normalization factor
* {@link Similarity#queryNorm(float)} of the query clauses contained in the
* query.
* <li>The query normalization factor is passed to {@link #normalize(float)}. At
* this point the weighting is complete.
* <li>A <code>Scorer</code> is constructed by {@link #scorer(IndexReader)}.
* </ol>
*
* @since 2.9
*/
public abstract class QueryWeight implements Weight, Serializable {
/** An explanation of the score computation for the named document. */
public abstract Explanation explain(IndexReader reader, int doc) throws IOException;
/** The query that this concerns. */
public abstract Query getQuery();
/** The weight for this query. */
public abstract float getValue();
/** Assigns the query normalization factor to this. */
public abstract void normalize(float norm);
/**
* @deprecated use {@link #scorer(IndexReader, boolean, boolean)} instead.
* Currently this defaults to requesting an out-of-order
* scorer, but will be removed in 3.0.
*/
public Scorer scorer(IndexReader reader) throws IOException {
return scorer(reader, true, false);
}
/**
* Returns a {@link Scorer} which scores documents in/out-of order according
* to <code>scoreDocsInOrder</code>.
* <p>
* <b>NOTE:</b> even if <code>scoreDocsInOrder</code> is false, it is
* recommended to check whether the returned <code>Scorer</code> indeed scores
* documents out of order (i.e., call {@link #scoresDocsOutOfOrder()}), as
* some <code>Scorer</code> implementations will always return documents
* in-order.<br>
* <b>NOTE:</b> null can be returned if no documents will be scored by this
* query.
*
* @param reader
* the {@link IndexReader} for which to return the {@link Scorer}.
* @param scoreDocsInOrder
* specifies whether in-order scoring of documents is required. Note
* that if set to false (i.e., out-of-order scoring is required),
* this method can return whatever scoring mode it supports, as every
* in-order scorer is also an out-of-order one. However, an
* out-of-order scorer may not support {@link Scorer#nextDoc()}
* and/or {@link Scorer#advance(int)}, therefore it is recommended to
* request an in-order scorer if use of these methods is required.
* @param topScorer
* specifies whether the returned {@link Scorer} will be used as a
* top scorer or as an iterator. I.e., if true,
* {@link Scorer#score(Collector)} will be called; if false,
* {@link Scorer#nextDoc()} and/or {@link Scorer#advance(int)} will
* be called.
* @return a {@link Scorer} which scores documents in/out-of order.
* @throws IOException
*/
public abstract Scorer scorer(IndexReader reader, boolean scoreDocsInOrder,
boolean topScorer) throws IOException;
/** The sum of squared weights of contained query clauses. */
public abstract float sumOfSquaredWeights() throws IOException;
/**
* Returns true iff this implementation scores docs only out of order. This
* method is used in conjunction with {@link Collector}'s
* {@link Collector#acceptsDocsOutOfOrder() acceptsDocsOutOfOrder} and
* {@link #scorer(org.apache.lucene.index.IndexReader, boolean, boolean)} to
* create a matching {@link Scorer} instance for a given {@link Collector}, or
* vice versa.
* <p>
* <b>NOTE:</b> the default implementation returns <code>false</code>, i.e.
* the <code>Scorer</code> scores documents in-order.
*/
public boolean scoresDocsOutOfOrder() { return false; }
}


@ -1,68 +0,0 @@
package org.apache.lucene.search;
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
/**
* A wrapper class for the deprecated {@link Weight}.
* Please re-implement any custom Weight classes as {@link
* QueryWeight} instead.
*
* @deprecated will be removed in 3.0
*/
public class QueryWeightWrapper extends QueryWeight {
private Weight weight;
public QueryWeightWrapper(Weight weight) {
this.weight = weight;
}
public Explanation explain(IndexReader reader, int doc) throws IOException {
return weight.explain(reader, doc);
}
public Query getQuery() {
return weight.getQuery();
}
public float getValue() {
return weight.getValue();
}
public void normalize(float norm) {
weight.normalize(norm);
}
public Scorer scorer(IndexReader reader, boolean scoreDocsInOrder, boolean topScorer)
throws IOException {
return weight.scorer(reader);
}
public float sumOfSquaredWeights() throws IOException {
return weight.sumOfSquaredWeights();
}
public Scorer scorer(IndexReader reader) throws IOException {
return weight.scorer(reader);
}
}


@ -69,7 +69,7 @@ public class QueryWrapperFilter extends Filter {
}
public DocIdSet getDocIdSet(final IndexReader reader) throws IOException {
final QueryWeight weight = query.queryWeight(new IndexSearcher(reader));
final Weight weight = query.createWeight(new IndexSearcher(reader));
return new DocIdSet() {
public DocIdSetIterator iterator() throws IOException {
return weight.scorer(reader, true, false);


@ -127,7 +127,7 @@ public abstract class Scorer extends DocIdSetIterator {
* @param doc The document number for the explanation.
*
* @deprecated Please use {@link IndexSearcher#explain}
* or {@link QueryWeight#explain} instead.
* or {@link Weight#explain} instead.
*/
public abstract Explanation explain(int doc) throws IOException;


@ -58,7 +58,7 @@ public interface Searchable {
* @param filter if non-null, used to permit documents to be collected.
* @param results to receive hits
* @throws BooleanQuery.TooManyClauses
* @deprecated use {@link #search(QueryWeight, Filter, Collector)} instead.
* @deprecated use {@link #search(Weight, Filter, Collector)} instead.
*/
void search(Weight weight, Filter filter, HitCollector results)
throws IOException;
@ -82,33 +82,9 @@ public interface Searchable {
* @param collector
* to receive hits
* @throws BooleanQuery.TooManyClauses
*
* @deprecated use {@link #search(QueryWeight, Filter, Collector)} instead.
*/
void search(Weight weight, Filter filter, Collector collector) throws IOException;
/**
* Lower-level search API.
*
* <p>
* {@link Collector#collect(int)} is called for every document. <br>
* Collector-based access to remote indexes is discouraged.
*
* <p>
* Applications should only use this if they need <i>all</i> of the matching
* documents. The high-level search API ({@link Searcher#search(Query)}) is
* usually more efficient, as it skips non-high-scoring hits.
*
* @param weight
* to match documents
* @param filter
* if non-null, used to permit documents to be collected.
* @param collector
* to receive hits
* @throws BooleanQuery.TooManyClauses
*/
void search(QueryWeight weight, Filter filter, Collector collector) throws IOException;
/** Frees resources associated with this Searcher.
* Be careful not to call this method while you are still using objects
* like {@link Hits}.
@ -141,20 +117,9 @@ public interface Searchable {
* <p>Applications should usually call {@link Searcher#search(Query)} or
* {@link Searcher#search(Query,Filter)} instead.
* @throws BooleanQuery.TooManyClauses
* @deprecated use {@link #search(QueryWeight, Filter, int)} instead.
* @deprecated use {@link #search(Weight, Filter, int)} instead.
*/
TopDocs search(Weight weight, Filter filter, int n) throws IOException;
/** Expert: Low-level search implementation. Finds the top <code>n</code>
* hits for <code>query</code>, applying <code>filter</code> if non-null.
*
* <p>Called by {@link Hits}.
*
* <p>Applications should usually call {@link Searcher#search(Query)} or
* {@link Searcher#search(Query,Filter)} instead.
* @throws BooleanQuery.TooManyClauses
*/
TopDocs search(QueryWeight weight, Filter filter, int n) throws IOException;
/** Expert: Returns the stored fields of document <code>i</code>.
* Called by {@link HitCollector} implementations.
@ -202,22 +167,9 @@ public interface Searchable {
* entire index.
* <p>Applications should call {@link Searcher#explain(Query, int)}.
* @throws BooleanQuery.TooManyClauses
* @deprecated use {@link #explain(QueryWeight, int)} instead.
* @deprecated use {@link #explain(Weight, int)} instead.
*/
Explanation explain(Weight weight, int doc) throws IOException;
/** Expert: low-level implementation method
* Returns an Explanation that describes how <code>doc</code> scored against
* <code>weight</code>.
*
* <p>This is intended to be used in developing Similarity implementations,
* and, for good performance, should not be displayed with every hit.
* Computing an explanation is as expensive as executing the query over the
* entire index.
* <p>Applications should call {@link Searcher#explain(Query, int)}.
* @throws BooleanQuery.TooManyClauses
*/
Explanation explain(QueryWeight weight, int doc) throws IOException;
/** Expert: Low-level search implementation with arbitrary sorting. Finds
* the top <code>n</code> hits for <code>query</code>, applying
@ -228,22 +180,8 @@ public interface Searchable {
* Searcher#search(Query,Filter,Sort)} instead.
*
* @throws BooleanQuery.TooManyClauses
* @deprecated use {@link #search(QueryWeight, Filter, int, Sort)} instead.
*/
TopFieldDocs search(Weight weight, Filter filter, int n, Sort sort)
throws IOException;
/** Expert: Low-level search implementation with arbitrary sorting. Finds
* the top <code>n</code> hits for <code>query</code>, applying
* <code>filter</code> if non-null, and sorting the hits by the criteria in
* <code>sort</code>.
*
* <p>Applications should usually call {@link
* Searcher#search(Query,Filter,Sort)} instead.
*
* @throws BooleanQuery.TooManyClauses
*/
TopFieldDocs search(QueryWeight weight, Filter filter, int n, Sort sort)
throws IOException;
}


@ -89,7 +89,7 @@ public abstract class Searcher implements Searchable {
*/
public TopFieldDocs search(Query query, Filter filter, int n,
Sort sort) throws IOException {
return search(createQueryWeight(query), filter, n, sort);
return search(createWeight(query), filter, n, sort);
}
/** Lower-level search API.
@ -109,7 +109,7 @@ public abstract class Searcher implements Searchable {
*/
public void search(Query query, HitCollector results)
throws IOException {
search(createQueryWeight(query), null, new HitCollectorWrapper(results));
search(createWeight(query), null, new HitCollectorWrapper(results));
}
/** Lower-level search API.
@ -127,7 +127,7 @@ public abstract class Searcher implements Searchable {
*/
public void search(Query query, Collector results)
throws IOException {
search(createQueryWeight(query), null, results);
search(createWeight(query), null, results);
}
/** Lower-level search API.
@ -149,7 +149,7 @@ public abstract class Searcher implements Searchable {
*/
public void search(Query query, Filter filter, HitCollector results)
throws IOException {
search(createQueryWeight(query), filter, new HitCollectorWrapper(results));
search(createWeight(query), filter, new HitCollectorWrapper(results));
}
/** Lower-level search API.
@ -170,7 +170,7 @@ public abstract class Searcher implements Searchable {
*/
public void search(Query query, Filter filter, Collector results)
throws IOException {
search(createQueryWeight(query), filter, results);
search(createWeight(query), filter, results);
}
/** Finds the top <code>n</code>
@ -180,7 +180,7 @@ public abstract class Searcher implements Searchable {
*/
public TopDocs search(Query query, Filter filter, int n)
throws IOException {
return search(createQueryWeight(query), filter, n);
return search(createWeight(query), filter, n);
}
/** Finds the top <code>n</code>
@ -202,7 +202,7 @@ public abstract class Searcher implements Searchable {
* entire index.
*/
public Explanation explain(Query query, int doc) throws IOException {
return explain(createQueryWeight(query), doc);
return explain(createWeight(query), doc);
}
/** The Similarity implementation used by this searcher. */
@ -215,7 +215,7 @@ public abstract class Searcher implements Searchable {
public void setSimilarity(Similarity similarity) {
this.similarity = similarity;
}
/** Expert: Return the Similarity implementation used by this Searcher.
*
* <p>This defaults to the current value of {@link Similarity#getDefault()}.
@ -226,15 +226,10 @@ public abstract class Searcher implements Searchable {
/**
* creates a weight for <code>query</code>
*
* @deprecated use {@link #createQueryWeight(Query)} instead.
* @return new weight
*/
protected Weight createWeight(Query query) throws IOException {
return createQueryWeight(query);
}
protected QueryWeight createQueryWeight(Query query) throws IOException {
return query.queryWeight(this);
return query.weight(this);
}
// inherit javadoc
@ -253,33 +248,16 @@ public abstract class Searcher implements Searchable {
* @deprecated use {@link #search(Weight, Filter, Collector)} instead.
*/
public void search(Weight weight, Filter filter, HitCollector results) throws IOException {
search(new QueryWeightWrapper(weight), filter, new HitCollectorWrapper(results));
search(weight, filter, new HitCollectorWrapper(results));
}
/** @deprecated delete in 3.0. */
public void search(Weight weight, Filter filter, Collector collector)
throws IOException {
search(new QueryWeightWrapper(weight), filter, collector);
}
abstract public void search(QueryWeight weight, Filter filter, Collector results) throws IOException;
abstract public void search(Weight weight, Filter filter, Collector results) throws IOException;
abstract public void close() throws IOException;
abstract public int docFreq(Term term) throws IOException;
abstract public int maxDoc() throws IOException;
/** @deprecated use {@link #search(QueryWeight, Filter, int)} instead. */
public TopDocs search(Weight weight, Filter filter, int n) throws IOException {
return search(new QueryWeightWrapper(weight), filter, n);
}
abstract public TopDocs search(QueryWeight weight, Filter filter, int n) throws IOException;
abstract public TopDocs search(Weight weight, Filter filter, int n) throws IOException;
abstract public Document doc(int i) throws CorruptIndexException, IOException;
abstract public Query rewrite(Query query) throws IOException;
/** @deprecated use {@link #explain(QueryWeight, int)} instead. */
public Explanation explain(Weight weight, int doc) throws IOException {
return explain(new QueryWeightWrapper(weight), doc);
}
abstract public Explanation explain(QueryWeight weight, int doc) throws IOException;
/** @deprecated use {@link #search(QueryWeight, Filter, int, Sort)} instead. */
public TopFieldDocs search(Weight weight, Filter filter, int n, Sort sort) throws IOException {
return search(new QueryWeightWrapper(weight), filter, n, sort);
}
abstract public TopFieldDocs search(QueryWeight weight, Filter filter, int n, Sort sort) throws IOException;
abstract public Explanation explain(Weight weight, int doc) throws IOException;
abstract public TopFieldDocs search(Weight weight, Filter filter, int n, Sort sort) throws IOException;
/* End patch for GCJ bug #15411. */
}
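The `Searcher` changes above follow a delegation pattern: the deprecated `HitCollector`-based overloads adapt the legacy callback and forward to the single abstract `Collector`-based method that subclasses must implement. A simplified standalone sketch of that pattern (the real `HitCollectorWrapper` also tracks a `Scorer` to supply real scores; the types below are stand-ins, not the Lucene classes):

```java
// Sketch of the deprecated-overload delegation pattern used by
// Searcher above: adapt the legacy callback, forward to the one
// abstract method. Simplified stand-in types, not the Lucene API.
interface LegacyHitCollector { void collect(int doc, float score); }
interface SimpleCollector { void collect(int doc); }

abstract class TinySearcher {
    /** @deprecated adapt and delegate; subclasses need not override. */
    public void search(LegacyHitCollector results) {
        search(new LegacyAdapter(results));
    }
    /** The single method concrete searchers implement. */
    public abstract void search(SimpleCollector collector);
}

class LegacyAdapter implements SimpleCollector {
    private final LegacyHitCollector legacy;
    LegacyAdapter(LegacyHitCollector legacy) { this.legacy = legacy; }
    // real code would fetch the score from a Scorer; elided in sketch
    public void collect(int doc) { legacy.collect(doc, 1.0f); }
}
```

This keeps the deprecated surface working between releases while concentrating the abstract contract in one method.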


@ -28,7 +28,7 @@ final class SloppyPhraseScorer extends PhraseScorer {
private PhrasePositions tmpPos[]; // for flipping repeating pps.
private boolean checkedRepeats;
SloppyPhraseScorer(QueryWeight weight, TermPositions[] tps, int[] offsets, Similarity similarity,
SloppyPhraseScorer(Weight weight, TermPositions[] tps, int[] offsets, Similarity similarity,
int slop, byte[] norms) {
super(weight, tps, offsets, similarity, norms);
this.slop = slop;


@ -31,7 +31,7 @@ import org.apache.lucene.util.ToStringUtils;
public class TermQuery extends Query {
private Term term;
private class TermWeight extends QueryWeight {
private class TermWeight extends Weight {
private Similarity similarity;
private float value;
private float idf;
@ -69,15 +69,19 @@ public class TermQuery extends Query {
return new TermScorer(this, termDocs, similarity, reader.norms(term.field()));
}
public Explanation explain(IndexReader reader, int doc)
public Explanation explain(Searcher searcher, IndexReader reader, int doc)
throws IOException {
ComplexExplanation result = new ComplexExplanation();
result.setDescription("weight("+getQuery()+" in "+doc+"), product of:");
Explanation idfExpl =
new Explanation(idf, "idf(docFreq=" + reader.docFreq(term) +
", numDocs=" + reader.numDocs() + ")");
Explanation expl;
if(searcher == null) {
expl = new Explanation(idf, "idf(" + idf + ")");
} else {
expl = new Explanation(idf, "idf(docFreq=" + searcher.docFreq(term) +
", maxDocs=" + searcher.maxDoc() + ")");
}
// explain query weight
Explanation queryExpl = new Explanation();
@ -86,13 +90,13 @@ public class TermQuery extends Query {
Explanation boostExpl = new Explanation(getBoost(), "boost");
if (getBoost() != 1.0f)
queryExpl.addDetail(boostExpl);
queryExpl.addDetail(idfExpl);
queryExpl.addDetail(expl);
Explanation queryNormExpl = new Explanation(queryNorm,"queryNorm");
queryExpl.addDetail(queryNormExpl);
queryExpl.setValue(boostExpl.getValue() *
idfExpl.getValue() *
expl.getValue() *
queryNormExpl.getValue());
result.addDetail(queryExpl);
@ -105,7 +109,7 @@ public class TermQuery extends Query {
Explanation tfExpl = scorer(reader, true, false).explain(doc);
fieldExpl.addDetail(tfExpl);
fieldExpl.addDetail(idfExpl);
fieldExpl.addDetail(expl);
Explanation fieldNormExpl = new Explanation();
byte[] fieldNorms = reader.norms(field);
@ -117,7 +121,7 @@ public class TermQuery extends Query {
fieldExpl.setMatch(Boolean.valueOf(tfExpl.isMatch()));
fieldExpl.setValue(tfExpl.getValue() *
idfExpl.getValue() *
expl.getValue() *
fieldNormExpl.getValue());
result.addDetail(fieldExpl);
@ -141,7 +145,7 @@ public class TermQuery extends Query {
/** Returns the term of this query. */
public Term getTerm() { return term; }
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
return new TermWeight(searcher);
}
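The new `explain(Searcher, IndexReader, doc)` in `TermWeight` above is defensive: it prefers searcher-wide statistics (`docFreq`/`maxDoc`) when a `Searcher` is available and falls back to the precomputed idf when it is null, which is permitted until 3.0. A minimal standalone sketch of that fallback, with a hypothetical stand-in method rather than the real API:

```java
// Sketch of the null-searcher fallback in TermWeight.explain above:
// nullable docFreq/maxDoc stand in for an absent Searcher.
class IdfExplainSketch {
    static String idfDetail(Integer docFreq, Integer maxDoc, float idf) {
        if (docFreq == null || maxDoc == null) {   // searcher == null case
            return "idf(" + idf + ")";
        }
        return "idf(docFreq=" + docFreq + ", maxDocs=" + maxDoc + ")";
    }
}
```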


@ -27,7 +27,7 @@ final class TermScorer extends Scorer {
private static final float[] SIM_NORM_DECODER = Similarity.getNormDecoder();
private QueryWeight weight;
private Weight weight;
private TermDocs termDocs;
private byte[] norms;
private float weightValue;
@ -53,30 +53,8 @@ final class TermScorer extends Scorer {
* computations.
* @param norms
* The field norms of the document fields for the <code>Term</code>.
*
* @deprecated to be removed in 3.0; kept around for TestTermScorer in the
* tag, which creates TermScorer directly and cannot pass in a
* QueryWeight object.
*/
TermScorer(Weight weight, TermDocs td, Similarity similarity, byte[] norms) {
this(new QueryWeightWrapper(weight), td, similarity, norms);
}
/**
* Construct a <code>TermScorer</code>.
*
* @param weight
* The weight of the <code>Term</code> in the query.
* @param td
* An iterator over the documents matching the <code>Term</code>.
* @param similarity
* The </code>Similarity</code> implementation to be used for score
* computations.
* @param norms
* The field norms of the document fields for the <code>Term</code>.
*/
TermScorer(QueryWeight weight, TermDocs td, Similarity similarity,
byte[] norms) {
super(similarity);
this.weight = weight;
this.termDocs = td;


@ -18,47 +18,106 @@ package org.apache.lucene.search;
*/
import java.io.IOException;
import java.io.Serializable;
import org.apache.lucene.index.IndexReader;
/** Expert: Calculate query weights and build query scorers.
/**
* Expert: Calculate query weights and build query scorers.
* <p>
* The purpose of Weight is to make it so that searching does not modify
* a Query, so that a Query instance can be reused. <br>
* Searcher dependent state of the query should reside in the Weight. <br>
* IndexReader dependent state should reside in the Scorer.
* The purpose of {@link Weight} is to ensure searching does not
* modify a {@link Query}, so that a {@link Query} instance can be reused. <br>
* {@link Searcher} dependent state of the query should reside in the
* {@link Weight}. <br>
* {@link IndexReader} dependent state should reside in the {@link Scorer}.
* <p>
* A <code>Weight</code> is used in the following way:
* <ol>
* <li>A <code>Weight</code> is constructed by a top-level query,
* given a <code>Searcher</code> ({@link Query#createWeight(Searcher)}).
* <li>The {@link #sumOfSquaredWeights()} method is called
* on the <code>Weight</code> to compute
* the query normalization factor {@link Similarity#queryNorm(float)}
* of the query clauses contained in the query.
* <li>The query normalization factor is passed to {@link #normalize(float)}.
* At this point the weighting is complete.
* <li>A <code>Weight</code> is constructed by a top-level query, given a
* <code>Searcher</code> ({@link Query#createWeight(Searcher)}).
* <li>The {@link #sumOfSquaredWeights()} method is called on the
* <code>Weight</code> to compute the query normalization factor
* {@link Similarity#queryNorm(float)} of the query clauses contained in the
* query.
* <li>The query normalization factor is passed to {@link #normalize(float)}. At
* this point the weighting is complete.
* <li>A <code>Scorer</code> is constructed by {@link #scorer(IndexReader)}.
* </ol>
*
* @deprecated use {@link QueryWeight} instead.
* @since 2.9
*/
public interface Weight extends java.io.Serializable {
public abstract class Weight implements Serializable {
/**
* An explanation of the score computation for the named document.
*
 * Until 3.0, null may be passed in situations where the {@link Searcher} is
 * not available, so implementations must only use the {@link Searcher} to
 * generate optional explain info.
 *
 * @param searcher the searcher over the index, or null
 * @param reader sub-reader containing the given doc
 * @param doc the document number for the explanation
 * @return an Explanation for the score
* @throws IOException
*/
public abstract Explanation explain(Searcher searcher, IndexReader reader, int doc) throws IOException;
/** The query that this concerns. */
Query getQuery();
public abstract Query getQuery();
/** The weight for this query. */
float getValue();
/** The sum of squared weights of contained query clauses. */
float sumOfSquaredWeights() throws IOException;
public abstract float getValue();
/** Assigns the query normalization factor to this. */
void normalize(float norm);
public abstract void normalize(float norm);
/** Constructs a scorer for this. */
Scorer scorer(IndexReader reader) throws IOException;
/**
* Returns a {@link Scorer} which scores documents in/out-of order according
* to <code>scoreDocsInOrder</code>.
* <p>
* <b>NOTE:</b> even if <code>scoreDocsInOrder</code> is false, it is
* recommended to check whether the returned <code>Scorer</code> indeed scores
* documents out of order (i.e., call {@link #scoresDocsOutOfOrder()}), as
* some <code>Scorer</code> implementations will always return documents
* in-order.<br>
* <b>NOTE:</b> null can be returned if no documents will be scored by this
* query.
*
* @param reader
* the {@link IndexReader} for which to return the {@link Scorer}.
* @param scoreDocsInOrder
* specifies whether in-order scoring of documents is required. Note
* that if set to false (i.e., out-of-order scoring is required),
* this method can return whatever scoring mode it supports, as every
* in-order scorer is also an out-of-order one. However, an
* out-of-order scorer may not support {@link Scorer#nextDoc()}
* and/or {@link Scorer#advance(int)}, therefore it is recommended to
* request an in-order scorer if use of these methods is required.
* @param topScorer
* if true, {@link Scorer#score(Collector)} will be called; if false,
* {@link Scorer#nextDoc()} and/or {@link Scorer#advance(int)} will
* be called.
* @return a {@link Scorer} which scores documents in/out-of order.
* @throws IOException
*/
public abstract Scorer scorer(IndexReader reader, boolean scoreDocsInOrder,
boolean topScorer) throws IOException;
/** The sum of squared weights of contained query clauses. */
public abstract float sumOfSquaredWeights() throws IOException;
/**
* Returns true iff this implementation scores docs only out of order. This
* method is used in conjunction with {@link Collector}'s
* {@link Collector#acceptsDocsOutOfOrder() acceptsDocsOutOfOrder} and
* {@link #scorer(org.apache.lucene.index.IndexReader, boolean, boolean)} to
* create a matching {@link Scorer} instance for a given {@link Collector}, or
* vice versa.
* <p>
* <b>NOTE:</b> the default implementation returns <code>false</code>, i.e.
* the <code>Scorer</code> scores documents in-order.
*/
public boolean scoresDocsOutOfOrder() { return false; }
/** An explanation of the score computation for the named document. */
Explanation explain(IndexReader reader, int doc) throws IOException;
}
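The weighting lifecycle described in the javadoc above (construct the `Weight`, call `sumOfSquaredWeights()`, compute `queryNorm`, call `normalize()`, then build a `Scorer`) can be sketched with a self-contained toy class. The `Tiny*` names and the weight formula (boost times idf, renormalized, then idf folded back in as `TermWeight` does) are illustrative assumptions, not the real Lucene implementation; `queryNorm` here mirrors the default `Similarity.queryNorm` of `1/sqrt(sumOfSquaredWeights)`:

```java
// Standalone sketch of the Weight lifecycle documented above.
abstract class TinyWeight {
    abstract float sumOfSquaredWeights();
    abstract void normalize(float norm);
    abstract float getValue();
}

class TinyTermWeight extends TinyWeight {
    private final float idf;
    private final float boost;
    private float queryWeight;   // boost * idf, before normalization
    private float value;         // final weight after normalize()

    TinyTermWeight(float idf, float boost) {
        this.idf = idf;
        this.boost = boost;
    }

    float sumOfSquaredWeights() {
        queryWeight = idf * boost;
        return queryWeight * queryWeight;
    }

    void normalize(float norm) {
        queryWeight *= norm;       // apply the query normalization factor
        value = queryWeight * idf; // fold idf back in, as TermWeight does
    }

    float getValue() { return value; }
}
```

Driving it through the documented steps: `sumOfSquaredWeights()` first, then `normalize((float) (1.0 / Math.sqrt(sum)))`; only after that is `getValue()` meaningful.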


@ -24,7 +24,7 @@ import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.ComplexExplanation;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.Similarity;
@ -271,18 +271,18 @@ public class CustomScoreQuery extends Query {
//=========================== W E I G H T ============================
private class CustomWeight extends QueryWeight {
private class CustomWeight extends Weight {
Similarity similarity;
QueryWeight subQueryWeight;
QueryWeight[] valSrcWeights;
Weight subQueryWeight;
Weight[] valSrcWeights;
boolean qStrict;
public CustomWeight(Searcher searcher) throws IOException {
this.similarity = getSimilarity(searcher);
this.subQueryWeight = subQuery.queryWeight(searcher);
this.valSrcWeights = new QueryWeight[valSrcQueries.length];
this.subQueryWeight = subQuery.weight(searcher);
this.valSrcWeights = new Weight[valSrcQueries.length];
for(int i = 0; i < valSrcQueries.length; i++) {
this.valSrcWeights[i] = valSrcQueries[i].createQueryWeight(searcher);
this.valSrcWeights[i] = valSrcQueries[i].createWeight(searcher);
}
this.qStrict = strict;
}
@ -336,16 +336,39 @@ public class CustomScoreQuery extends Query {
}
Scorer[] valSrcScorers = new Scorer[valSrcWeights.length];
for(int i = 0; i < valSrcScorers.length; i++) {
valSrcScorers[i] = valSrcWeights[i].scorer(reader, true, false);
valSrcScorers[i] = valSrcWeights[i].scorer(reader, true, topScorer);
}
return new CustomScorer(similarity, reader, this, subQueryScorer, valSrcScorers);
}
public Explanation explain(IndexReader reader, int doc) throws IOException {
Scorer scorer = scorer(reader, true, false);
return scorer == null ? new Explanation(0.0f, "no matching docs") : scorer.explain(doc);
public Explanation explain(Searcher searcher, IndexReader reader, int doc) throws IOException {
Explanation explain = doExplain(searcher, reader, doc);
return explain == null ? new Explanation(0.0f, "no matching docs") : explain;
}
private Explanation doExplain(Searcher searcher, IndexReader reader, int doc) throws IOException {
Scorer[] valSrcScorers = new Scorer[valSrcWeights.length];
for(int i = 0; i < valSrcScorers.length; i++) {
valSrcScorers[i] = valSrcWeights[i].scorer(reader, true, false);
}
Explanation subQueryExpl = subQueryWeight.explain(searcher, reader, doc);
if (!subQueryExpl.isMatch()) {
return subQueryExpl;
}
// match
Explanation[] valSrcExpls = new Explanation[valSrcScorers.length];
for(int i = 0; i < valSrcScorers.length; i++) {
valSrcExpls[i] = valSrcScorers[i].explain(doc);
}
Explanation customExp = customExplain(doc,subQueryExpl,valSrcExpls);
float sc = getValue() * customExp.getValue();
Explanation res = new ComplexExplanation(
true, sc, CustomScoreQuery.this.toString() + ", product of:");
res.addDetail(customExp);
res.addDetail(new Explanation(getValue(), "queryBoost")); // actually using the q boost as q weight (== weight value)
return res;
}
public boolean scoresDocsOutOfOrder() {
return false;
}
@ -425,9 +448,10 @@ public class CustomScoreQuery extends Query {
return doc;
}
// TODO: remove in 3.0
/*(non-Javadoc) @see org.apache.lucene.search.Scorer#explain(int) */
public Explanation explain(int doc) throws IOException {
Explanation subQueryExpl = weight.subQueryWeight.explain(reader,doc);
Explanation subQueryExpl = weight.subQueryWeight.explain(null, reader,doc); // nocommit: needs resolution
if (!subQueryExpl.isMatch()) {
return subQueryExpl;
}
@ -446,7 +470,7 @@ public class CustomScoreQuery extends Query {
}
}
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
return new CustomWeight(searcher);
}


@ -62,7 +62,7 @@ public class ValueSourceQuery extends Query {
// no terms involved here
}
private class ValueSourceWeight extends QueryWeight {
class ValueSourceWeight extends Weight {
Similarity similarity;
float queryNorm;
float queryWeight;
@ -98,8 +98,8 @@ public class ValueSourceQuery extends Query {
}
/*(non-Javadoc) @see org.apache.lucene.search.Weight#explain(org.apache.lucene.index.IndexReader, int) */
public Explanation explain(IndexReader reader, int doc) throws IOException {
return scorer(reader, true, false).explain(doc);
public Explanation explain(Searcher searcher, IndexReader reader, int doc) throws IOException {
return new ValueSourceScorer(similarity, reader, this).explain(doc);
}
}
@ -172,7 +172,7 @@ public class ValueSourceQuery extends Query {
}
}
public QueryWeight createQueryWeight(Searcher searcher) {
public Weight createWeight(Searcher searcher) {
return new ValueSourceQuery.ValueSourceWeight(searcher);
}


@@ -21,7 +21,7 @@ import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermPositions;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.Similarity;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.ComplexExplanation;
@@ -54,7 +54,7 @@ public class BoostingFunctionTermQuery extends SpanTermQuery implements Payload
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
return new BoostingFunctionTermWeight(this, searcher);
}
@@ -76,7 +76,7 @@ public class BoostingFunctionTermQuery extends SpanTermQuery implements Payload
protected float payloadScore;
protected int payloadsSeen;
public BoostingFunctionSpanScorer(TermSpans spans, QueryWeight weight, Similarity similarity,
public BoostingFunctionSpanScorer(TermSpans spans, Weight weight, Similarity similarity,
byte[] norms) throws IOException {
super(spans, weight, similarity, norms);
positions = spans.getPositions();


@@ -18,7 +18,6 @@ package org.apache.lucene.search.payloads;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.Similarity;
@@ -63,7 +62,7 @@ public class BoostingNearQuery extends SpanNearQuery implements PayloadQuery {
}
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
return new BoostingSpanWeight(this, searcher);
}


@@ -1,16 +1,14 @@
package org.apache.lucene.search.payloads;
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermPositions;
import org.apache.lucene.search.*;
import org.apache.lucene.search.spans.SpanScorer;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.search.spans.SpanWeight;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.spans.TermSpans;
import java.io.IOException;
/**
* Copyright 2004 The Apache Software Foundation
* <p/>
@@ -51,7 +49,7 @@ public class BoostingTermQuery extends BoostingFunctionTermQuery implements Payl
super(term, new AveragePayloadFunction(), includeSpanScore);
}
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
return new BoostingTermWeight(this, searcher);
}


@@ -23,7 +23,7 @@ import java.util.Set;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.Similarity;
import org.apache.lucene.util.ToStringUtils;
@@ -103,8 +103,8 @@ public class FieldMaskingSpanQuery extends SpanQuery {
maskedQuery.extractTerms(terms);
}
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
return maskedQuery.createQueryWeight(searcher);
public Weight createWeight(Searcher searcher) throws IOException {
return maskedQuery.createWeight(searcher);
}
public Similarity getSimilarity(Searcher searcher) {


@@ -22,7 +22,6 @@ import java.util.Collection;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.Weight;
@@ -40,13 +39,8 @@ public abstract class SpanQuery extends Query {
* @see Query#extractTerms(Set)
*/
public abstract Collection getTerms();
/** @deprecated delete in 3.0. */
protected Weight createWeight(Searcher searcher) throws IOException {
return createQueryWeight(searcher);
}
public QueryWeight createQueryWeight(Searcher searcher) throws IOException {
public Weight createWeight(Searcher searcher) throws IOException {
return new SpanWeight(this, searcher);
}


@@ -20,18 +20,16 @@ package org.apache.lucene.search.spans;
import java.io.IOException;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.QueryWeightWrapper;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Similarity;
import org.apache.lucene.search.Weight;
/**
* Public for extension only.
*/
public class SpanScorer extends Scorer {
protected Spans spans;
protected QueryWeight weight;
protected Weight weight;
protected byte[] norms;
protected float value;
@@ -42,13 +40,7 @@ public class SpanScorer extends Scorer {
protected int doc;
protected float freq;
/** @deprecated use {@link #SpanScorer(Spans, QueryWeight, Similarity, byte[])} instead.*/
protected SpanScorer(Spans spans, Weight weight, Similarity similarity, byte[] norms)
throws IOException {
this(spans, new QueryWeightWrapper(weight), similarity, norms);
}
protected SpanScorer(Spans spans, QueryWeight weight, Similarity similarity, byte[] norms)
throws IOException {
super(similarity);
this.spans = spans;


@@ -29,7 +29,7 @@ import java.util.Set;
/**
* Expert-only. Public for use by other weight implementations
*/
public class SpanWeight extends QueryWeight {
public class SpanWeight extends Weight {
protected Similarity similarity;
protected float value;
protected float idf;
@@ -68,7 +68,7 @@ public class SpanWeight extends QueryWeight {
.norms(query.getField()));
}
public Explanation explain(IndexReader reader, int doc)
public Explanation explain(Searcher searcher, IndexReader reader, int doc)
throws IOException {
ComplexExplanation result = new ComplexExplanation();


@@ -0,0 +1,107 @@
package org.apache.lucene.util;
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.IndexReader;
public class ReaderUtil {
/**
* Gathers sub-readers from reader into a List.
*
* @param allSubReaders the List to add the leaf sub-readers to
* @param reader the top-level reader to recurse into
*/
public static void gatherSubReaders(List allSubReaders, IndexReader reader) {
IndexReader[] subReaders = reader.getSequentialSubReaders();
if (subReaders == null) {
// Add the reader itself, and do not recurse
allSubReaders.add(reader);
} else {
for (int i = 0; i < subReaders.length; i++) {
gatherSubReaders(allSubReaders, subReaders[i]);
}
}
}
/**
* Returns the sub IndexReader that contains the given document id.
*
* @param doc the document id
* @param reader the top-level (possibly multi) reader
* @return the sub-reader that contains doc
*/
public static IndexReader subReader(int doc, IndexReader reader) {
List subReadersList = new ArrayList();
ReaderUtil.gatherSubReaders(subReadersList, reader);
IndexReader[] subReaders = (IndexReader[]) subReadersList
.toArray(new IndexReader[subReadersList.size()]);
int[] docStarts = new int[subReaders.length];
int maxDoc = 0;
for (int i = 0; i < subReaders.length; i++) {
docStarts[i] = maxDoc;
maxDoc += subReaders[i].maxDoc();
}
return subReaders[ReaderUtil.subIndex(doc, docStarts)];
}
/**
* Returns the sub-reader at position subIndex from reader.
*
* @param reader the top-level reader to gather sub-readers from
* @param subIndex the position of the desired sub-reader
* @return the sub-reader at subIndex
*/
public static IndexReader subReader(IndexReader reader, int subIndex) {
List subReadersList = new ArrayList();
ReaderUtil.gatherSubReaders(subReadersList, reader);
IndexReader[] subReaders = (IndexReader[]) subReadersList
.toArray(new IndexReader[subReadersList.size()]);
return subReaders[subIndex];
}
/**
* Returns index of the searcher/reader for document <code>n</code> in the
* array used to construct this searcher/reader.
*/
public static int subIndex(int n, int[] docStarts) {
// Find the searcher/reader for doc n: search the starts array
// for the first element less than n and return its index.
int size = docStarts.length;
int lo = 0;
int hi = size - 1;
while (hi >= lo) {
int mid = (lo + hi) >>> 1;
int midValue = docStarts[mid];
if (n < midValue)
hi = mid - 1;
else if (n > midValue)
lo = mid + 1;
else { // found a match
while (mid + 1 < size && docStarts[mid + 1] == midValue) {
mid++; // scan to last match
}
return mid;
}
}
return hi;
}
}
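The binary search in `subIndex` above is self-contained integer logic, so it can be exercised outside Lucene. A minimal standalone sketch (the `docStarts` values below are illustrative, not from the commit): three sub-readers with maxDocs 5, 7, and 4 give doc starts {0, 5, 12}, so doc 7 falls in the second sub-reader.

```java
public class SubIndexDemo {
  // Same binary search as ReaderUtil.subIndex: returns the index of the
  // last docStart <= n, i.e. the sub-reader that contains doc n.
  static int subIndex(int n, int[] docStarts) {
    int size = docStarts.length;
    int lo = 0;
    int hi = size - 1;
    while (hi >= lo) {
      int mid = (lo + hi) >>> 1;
      int midValue = docStarts[mid];
      if (n < midValue)
        hi = mid - 1;
      else if (n > midValue)
        lo = mid + 1;
      else { // exact match on a start; scan to the last equal start
        while (mid + 1 < size && docStarts[mid + 1] == midValue) {
          mid++;
        }
        return mid;
      }
    }
    return hi;
  }

  public static void main(String[] args) {
    // Sub-readers with maxDoc 5, 7, 4 -> doc starts 0, 5, 12
    int[] docStarts = {0, 5, 12};
    System.out.println(subIndex(0, docStarts));  // 0: first segment
    System.out.println(subIndex(7, docStarts));  // 1: second segment
    System.out.println(subIndex(12, docStarts)); // 2: third segment
  }
}
```

This is the same searcher/reader lookup MultiSearcher has long used; ReaderUtil factors it out so Weight.explain can resolve the sub-reader containing a doc.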


@@ -40,7 +40,7 @@ final class JustCompileSearch {
static final class JustCompileSearcher extends Searcher {
protected QueryWeight createQueryWeight(Query query) throws IOException {
protected Weight createWeight(Query query) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
@@ -94,7 +94,7 @@ final class JustCompileSearch {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public Explanation explain(QueryWeight weight, int doc) throws IOException {
public Explanation explain(Weight weight, int doc) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
@@ -106,17 +106,17 @@ final class JustCompileSearch {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public void search(QueryWeight weight, Filter filter, Collector results)
public void search(Weight weight, Filter filter, Collector results)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public TopDocs search(QueryWeight weight, Filter filter, int n)
public TopDocs search(Weight weight, Filter filter, int n)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public TopFieldDocs search(QueryWeight weight, Filter filter, int n, Sort sort)
public TopFieldDocs search(Weight weight, Filter filter, int n, Sort sort)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
@@ -296,7 +296,7 @@ final class JustCompileSearch {
static final class JustCompilePhraseScorer extends PhraseScorer {
JustCompilePhraseScorer(QueryWeight weight, TermPositions[] tps, int[] offsets,
JustCompilePhraseScorer(Weight weight, TermPositions[] tps, int[] offsets,
Similarity similarity, byte[] norms) {
super(weight, tps, offsets, similarity, norms);
}
@@ -423,9 +423,9 @@ final class JustCompileSearch {
}
static final class JustCompileWeight extends QueryWeight {
static final class JustCompileWeight extends Weight {
public Explanation explain(IndexReader reader, int doc) throws IOException {
public Explanation explain(Searcher searcher, IndexReader reader, int doc) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}


@@ -105,7 +105,7 @@ public class QueryUtils {
* @throws IOException if serialization check fail.
*/
private static void checkSerialization(Query q, Searcher s) throws IOException {
QueryWeight w = q.queryWeight(s);
Weight w = q.weight(s);
try {
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(bos);
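The serialization check above (truncated by the hunk) round-trips the Weight through Java object serialization to verify it stays Serializable. A minimal standalone sketch of the same round-trip, with a hypothetical `DummyWeight` standing in for a real Lucene Weight:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCheckDemo {
  // Hypothetical stand-in for a Serializable Weight, for illustration only.
  static class DummyWeight implements Serializable {
    final float value;
    DummyWeight(float value) { this.value = value; }
  }

  // Serialize w to bytes and read it back, as QueryUtils.checkSerialization
  // does with a real Weight; returns the deserialized copy.
  static DummyWeight roundTrip(DummyWeight w) {
    try {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      ObjectOutputStream oos = new ObjectOutputStream(bos);
      oos.writeObject(w);
      oos.close();
      ObjectInputStream ois = new ObjectInputStream(
          new ByteArrayInputStream(bos.toByteArray()));
      DummyWeight copy = (DummyWeight) ois.readObject();
      ois.close();
      return copy;
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(roundTrip(new DummyWeight(2.0f)).value); // prints 2.0
  }
}
```

If a Weight holds a non-serializable field, `writeObject` throws, which is exactly the failure this check is meant to surface.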
@@ -150,7 +150,7 @@ public class QueryUtils {
//System.out.print("Order:");for (int i = 0; i < order.length; i++) System.out.print(order[i]==skip_op ? " skip()":" next()"); System.out.println();
final int opidx[] = {0};
final QueryWeight w = q.queryWeight(s);
final Weight w = q.weight(s);
final Scorer scorer = w.scorer(s.getIndexReader(), true, false);
if (scorer == null) {
continue;
@@ -234,7 +234,7 @@ public class QueryUtils {
float score = scorer.score();
try {
for (int i=lastDoc[0]+1; i<=doc; i++) {
QueryWeight w = q.queryWeight(s);
Weight w = q.weight(s);
Scorer scorer = w.scorer(s.getIndexReader(), true, false);
Assert.assertTrue("query collected "+doc+" but skipTo("+i+") says no more docs!",scorer.advance(i) != DocIdSetIterator.NO_MORE_DOCS);
Assert.assertEquals("query collected "+doc+" but skipTo("+i+") got to "+scorer.docID(),doc,scorer.docID());
@@ -254,7 +254,7 @@ public class QueryUtils {
return false;
}
});
QueryWeight w = q.queryWeight(s);
Weight w = q.weight(s);
Scorer scorer = w.scorer(s.getIndexReader(), true, false);
if (scorer != null) {
boolean more = scorer.advance(lastDoc[0] + 1) != DocIdSetIterator.NO_MORE_DOCS;


@@ -133,7 +133,7 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(dq,s);
final QueryWeight dw = dq.queryWeight(s);
final Weight dw = dq.weight(s);
final Scorer ds = dw.scorer(r, true, false);
final boolean skipOk = ds.advance(3) != DocIdSetIterator.NO_MORE_DOCS;
if (skipOk) {
@@ -148,7 +148,7 @@ public class TestDisjunctionMaxQuery extends LuceneTestCase{
QueryUtils.check(dq,s);
final QueryWeight dw = dq.queryWeight(s);
final Weight dw = dq.weight(s);
final Scorer ds = dw.scorer(r, true, false);
assertTrue("firsttime skipTo found no match", ds.advance(3) != DocIdSetIterator.NO_MORE_DOCS);
assertEquals("found wrong docid", "d4", r.document(ds.docID()).get("id"));


@@ -69,7 +69,7 @@ public class TestTermScorer extends LuceneTestCase
Term allTerm = new Term(FIELD, "all");
TermQuery termQuery = new TermQuery(allTerm);
QueryWeight weight = termQuery.queryWeight(indexSearcher);
Weight weight = termQuery.weight(indexSearcher);
TermScorer ts = new TermScorer(weight,
indexReader.termDocs(allTerm), indexSearcher.getSimilarity(),
@@ -131,7 +131,7 @@ public class TestTermScorer extends LuceneTestCase
Term allTerm = new Term(FIELD, "all");
TermQuery termQuery = new TermQuery(allTerm);
QueryWeight weight = termQuery.queryWeight(indexSearcher);
Weight weight = termQuery.weight(indexSearcher);
TermScorer ts = new TermScorer(weight,
indexReader.termDocs(allTerm), indexSearcher.getSimilarity(),
@@ -148,7 +148,7 @@ public class TestTermScorer extends LuceneTestCase
Term allTerm = new Term(FIELD, "all");
TermQuery termQuery = new TermQuery(allTerm);
QueryWeight weight = termQuery.queryWeight(indexSearcher);
Weight weight = termQuery.weight(indexSearcher);
TermScorer ts = new TermScorer(weight,
indexReader.termDocs(allTerm), indexSearcher.getSimilarity(),
@@ -163,7 +163,7 @@ public class TestTermScorer extends LuceneTestCase
Term allTerm = new Term(FIELD, "all");
TermQuery termQuery = new TermQuery(allTerm);
QueryWeight weight = termQuery.queryWeight(indexSearcher);
Weight weight = termQuery.weight(indexSearcher);
TermScorer ts = new TermScorer(weight,
indexReader.termDocs(allTerm), indexSearcher.getSimilarity(),
@@ -181,7 +181,7 @@ public class TestTermScorer extends LuceneTestCase
Term dogsTerm = new Term(FIELD, "dogs");
termQuery = new TermQuery(dogsTerm);
weight = termQuery.queryWeight(indexSearcher);
weight = termQuery.weight(indexSearcher);
ts = new TermScorer(weight, indexReader.termDocs(dogsTerm), indexSearcher.getSimilarity(),
indexReader.norms(FIELD));


@@ -21,9 +21,8 @@ import java.io.IOException;
import java.util.Collection;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.Similarity;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.Similarity;
/**
* Holds all implementations of classes in the o.a.l.s.spans package as a
@@ -123,16 +122,10 @@ final class JustCompileSearchSpans {
static final class JustCompileSpanScorer extends SpanScorer {
/** @deprecated delete in 3.0 */
protected JustCompileSpanScorer(Spans spans, Weight weight,
Similarity similarity, byte[] norms) throws IOException {
super(spans, weight, similarity, norms);
}
protected JustCompileSpanScorer(Spans spans, QueryWeight weight,
Similarity similarity, byte[] norms) throws IOException {
super(spans, weight, similarity, norms);
}
protected boolean setFreqCurrentDoc() throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);


@@ -26,7 +26,7 @@ import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.CheckHits;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.QueryWeight;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
@@ -158,7 +158,7 @@ public class TestNearSpansOrdered extends LuceneTestCase {
*/
public void testSpanNearScorerSkipTo1() throws Exception {
SpanNearQuery q = makeQuery();
QueryWeight w = q.queryWeight(searcher);
Weight w = q.weight(searcher);
Scorer s = w.scorer(searcher.getIndexReader(), true, false);
assertEquals(1, s.advance(1));
}
@@ -168,7 +168,7 @@ public class TestNearSpansOrdered extends LuceneTestCase {
*/
public void testSpanNearScorerExplain() throws Exception {
SpanNearQuery q = makeQuery();
QueryWeight w = q.queryWeight(searcher);
Weight w = q.weight(searcher);
Scorer s = w.scorer(searcher.getIndexReader(), true, false);
Explanation e = s.explain(1);
assertTrue("Scorer explanation value for doc#1 isn't positive: "


@@ -409,7 +409,7 @@ public class TestSpans extends LuceneTestCase {
}
};
Scorer spanScorer = snq.weight(searcher).scorer(searcher.getIndexReader());
Scorer spanScorer = snq.weight(searcher).scorer(searcher.getIndexReader(), true, false);
assertTrue("first doc", spanScorer.nextDoc() != DocIdSetIterator.NO_MORE_DOCS);
assertEquals("first doc number", spanScorer.docID(), 11);