Remove IndexSearcher#search(List<LeafReaderContext>, Weight, Collector) (#13780)

With the introduction of intra-segment concurrency, we have introduced a new
protected search(LeafReaderContextPartition[], Weight, Collector) method. The
previous variant, which accepts a list of leaf reader contexts, was left deprecated
because there was one leftover usage coming from search(Query, Collector). The hope
was that the latter would be removed soon as well, but there is actually no need to
tie the two removals together. It is easier to fold the deprecated method into its
only caller, which then still bypasses the collector manager based methods.
This way we fold two deprecated methods into a single one.
Luca Cavanna 2024-09-13 09:00:50 +02:00 committed by GitHub
parent fe64b04fda
commit f778cc4924
8 changed files with 26 additions and 54 deletions
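For callers, the non-deprecated entry point is the collector manager based search. A minimal sketch (not part of this change), assuming an `IndexSearcher searcher` and a `Query query` are already in scope; the hit count and total-hits threshold are illustrative:

```java
// CollectorManager-based search: lets IndexSearcher parallelize collection
// across leaf slices/partitions, unlike the deprecated search(Query, Collector).
TopScoreDocCollectorManager manager = new TopScoreDocCollectorManager(10, 1_000);
TopDocs topDocs = searcher.search(query, manager);
```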


@@ -120,6 +120,9 @@ API Changes
 * GITHUB#13328: Convert many basic Lucene classes to record classes, including CollectionStatistics, TermStatistics and LeafMetadata. (Shubham Chaudhary)
+* GITHUB#13780: Remove `IndexSearcher#search(List<LeafReaderContext>, Weight, Collector)` in favour of the newly
+  introduced `IndexSearcher#search(LeafReaderContextPartition[], Weight, Collector)`
 New Features
 ---------------------


@@ -859,7 +859,7 @@ Subclasses of `IndexSearcher` that call or override the `searchLeaf` method need
 ### Signature of static IndexSearch#slices method changed
-The static `IndexSearcher#sslices(List<LeafReaderContext> leaves, int maxDocsPerSlice, int maxSegmentsPerSlice)`
+The static `IndexSearcher#slices(List<LeafReaderContext> leaves, int maxDocsPerSlice, int maxSegmentsPerSlice)`
 method now supports an additional 4th and last argument to optionally enable creating segment partitions:
 `IndexSearcher#slices(List<LeafReaderContext> leaves, int maxDocsPerSlice, int maxSegmentsPerSlice, boolean allowSegmentPartitions)`
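A usage sketch (not part of this change), assuming an open `IndexReader reader` and the usual `org.apache.lucene.index` / `org.apache.lucene.search` imports; the per-slice limits are illustrative:

```java
// Opt into segment partitions via the new fourth argument.
List<LeafReaderContext> leaves = reader.leaves();
IndexSearcher.LeafSlice[] slices = IndexSearcher.slices(leaves, 250_000, 5, true);
```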
@@ -868,3 +868,9 @@ method now supports an additional 4th and last argument to optionally enable cre
 `TotalHitCountCollectorManager` now requires that an array of `LeafSlice`s, retrieved via `IndexSearcher#getSlices`,
 is provided to its constructor. Depending on whether segment partitions are present among slices, the manager can
 optimize the type of collectors it creates and exposes via `newCollector`.
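A usage sketch (not part of this change), assuming an `IndexSearcher searcher` and a `Query query` are in scope:

```java
// The manager is handed the searcher's slices up front so it can choose the
// cheapest collector type depending on whether segment partitions are in use.
TotalHitCountCollectorManager manager =
    new TotalHitCountCollectorManager(searcher.getSlices());
int totalHits = searcher.search(query, manager);
```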
+### `IndexSearcher#search(List<LeafReaderContext>, Weight, Collector)` removed
+The protected `IndexSearcher#search(List<LeafReaderContext> leaves, Weight weight, Collector collector)` method has been
+removed in favour of the newly introduced `search(LeafReaderContextPartition[] partitions, Weight weight, Collector collector)`.
+`IndexSearcher` subclasses that override this method need to instead override the new method.
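For subclasses the migration is a mechanical signature change. A hypothetical sketch (the subclass name and bookkeeping comment are made up; it assumes `LeafReaderContextPartition` is the nested `IndexSearcher` type, so it resolves by simple name inside a subclass):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Weight;

public class AuditingIndexSearcher extends IndexSearcher {

  public AuditingIndexSearcher(IndexReader reader) {
    super(reader);
  }

  // Previously: protected void search(List<LeafReaderContext> leaves, Weight weight, Collector collector)
  @Override
  protected void search(LeafReaderContextPartition[] partitions, Weight weight, Collector collector)
      throws IOException {
    // hypothetical per-search bookkeeping would go here
    super.search(partitions, weight, collector);
  }
}
```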


@@ -21,8 +21,8 @@ package org.apache.lucene.search;
  * the current leaf.
  *
  * <p>Note: IndexSearcher swallows this exception and never re-throws it. As a consequence, you
- * should not catch it when calling {@link IndexSearcher#search} as it is unnecessary and might hide
- * misuse of this exception.
+ * should not catch it when calling the different search methods that {@link IndexSearcher} exposes
+ * as it is unnecessary and might hide misuse of this exception.
  */
 @SuppressWarnings("serial")
 public final class CollectionTerminatedException extends RuntimeException {
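As a reminder of the intended use (not part of this change), the exception is thrown by a collector to stop collecting the current leaf, never caught by the caller of search. A hypothetical sketch; the collector name and the per-leaf limit are made up:

```java
import java.io.IOException;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.CollectionTerminatedException;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.SimpleCollector;

public class FirstDocsPerLeafCollector extends SimpleCollector {
  private static final int MAX_DOCS_PER_LEAF = 100;
  private int collectedInLeaf;

  @Override
  protected void doSetNextReader(LeafReaderContext context) throws IOException {
    collectedInLeaf = 0; // reset the counter for each leaf
  }

  @Override
  public void collect(int doc) throws IOException {
    if (++collectedInLeaf > MAX_DOCS_PER_LEAF) {
      // IndexSearcher swallows this and simply moves on to the next leaf
      throw new CollectionTerminatedException();
    }
    // ... record the hit ...
  }

  @Override
  public ScoreMode scoreMode() {
    return ScoreMode.COMPLETE_NO_SCORES;
  }
}
```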


@@ -134,7 +134,8 @@ public abstract class FieldComparator<T> {
   /**
    * Sorts by descending relevance. NOTE: if you are sorting only by descending relevance and then
    * secondarily by ascending docID, performance is faster using {@link TopScoreDocCollector}
-   * directly (which {@link IndexSearcher#search} uses when no {@link Sort} is specified).
+   * directly (which {@link IndexSearcher#search(Query, int)} uses when no {@link Sort} is
+   * specified).
    */
   public static final class RelevanceComparator extends FieldComparator<Float>
       implements LeafFieldComparator {
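In code, that note amounts to preferring the plain top-docs call when no explicit sort is needed. A sketch (not part of this change), assuming `searcher` and `query` are in scope; the count of 10 is illustrative:

```java
// Relevance-only ranking: the plain overload uses TopScoreDocCollector internally.
TopDocs fast = searcher.search(query, 10);

// Same ordering, but typically slower, via an explicit relevance sort:
TopDocs explicit = searcher.search(query, 10, Sort.RELEVANCE);
```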


@@ -606,9 +606,13 @@ public class IndexSearcher {
    * CollectorManager)} due to its support for concurrency in IndexSearcher
    */
   @Deprecated
-  public void search(Query query, Collector results) throws IOException {
-    query = rewrite(query, results.scoreMode().needsScores());
-    search(leafContexts, createWeight(query, results.scoreMode(), 1), results);
+  public void search(Query query, Collector collector) throws IOException {
+    query = rewrite(query, collector.scoreMode().needsScores());
+    Weight weight = createWeight(query, collector.scoreMode(), 1);
+    collector.setWeight(weight);
+    for (LeafReaderContext ctx : leafContexts) { // search each subreader
+      searchLeaf(ctx, 0, DocIdSetIterator.NO_MORE_DOCS, weight, collector);
+    }
   }
/** Returns true if any search hit the {@link #setTimeout(QueryTimeout) timeout}. */
@@ -785,38 +789,6 @@
     }
   }
-  /**
-   * Lower-level search API.
-   *
-   * <p>{@link #searchLeaf(LeafReaderContext, int, int, Weight, Collector)} is called for every leaf
-   * partition. <br>
-   *
-   * <p>NOTE: this method executes the searches on all given leaves exclusively. To search across
-   * all the searchers leaves use {@link #leafContexts}.
-   *
-   * @param leaves the searchers leaves to execute the searches on
-   * @param weight to match documents
-   * @param collector to receive hits
-   * @throws TooManyClauses If a query would exceed {@link IndexSearcher#getMaxClauseCount()}
-   *     clauses.
-   * @deprecated in favour of {@link #search(LeafReaderContextPartition[], Weight, Collector)} that
-   *     provides the same functionality while also supporting segments partitioning. Will be
-   *     removed once the removal of the deprecated {@link #search(Query, Collector)} is completed.
-   */
-  @Deprecated
-  protected void search(List<LeafReaderContext> leaves, Weight weight, Collector collector)
-      throws IOException {
-    collector.setWeight(weight);
-    // TODO: should we make this
-    // threaded...? the Collector could be sync'd?
-    // always use single thread:
-    for (LeafReaderContext ctx : leaves) { // search each subreader
-      searchLeaf(ctx, 0, DocIdSetIterator.NO_MORE_DOCS, weight, collector);
-    }
-  }
   /**
    * Lower-level search API
    *


@@ -1659,10 +1659,11 @@ public class TestDrillSideways extends FacetTestCase {
       }
       @Override
-      protected void search(List<LeafReaderContext> leaves, Weight weight, Collector collector)
+      protected void search(
+          LeafReaderContextPartition[] partitions, Weight weight, Collector collector)
           throws IOException {
         AssertingCollector assertingCollector = AssertingCollector.wrap(collector);
-        super.search(leaves, weight, assertingCollector);
+        super.search(partitions, weight, assertingCollector);
         assert assertingCollector.hasFinishedCollectingPreviousLeaf;
       }
     }


@@ -17,12 +17,10 @@
 package org.apache.lucene.tests.search;
 import java.io.IOException;
-import java.util.List;
 import java.util.Random;
 import java.util.concurrent.ExecutorService;
 import org.apache.lucene.index.IndexReader;
 import org.apache.lucene.index.IndexReaderContext;
-import org.apache.lucene.index.LeafReaderContext;
 import org.apache.lucene.search.Collector;
 import org.apache.lucene.search.IndexSearcher;
 import org.apache.lucene.search.Query;
@@ -71,15 +69,6 @@ public class AssertingIndexSearcher extends IndexSearcher {
     return rewritten;
   }
-  @Override
-  protected void search(List<LeafReaderContext> leaves, Weight weight, Collector collector)
-      throws IOException {
-    assert weight instanceof AssertingWeight;
-    AssertingCollector assertingCollector = AssertingCollector.wrap(collector);
-    super.search(leaves, weight, assertingCollector);
-    assert assertingCollector.hasFinishedCollectingPreviousLeaf;
-  }
   @Override
   protected void search(LeafReaderContextPartition[] leaves, Weight weight, Collector collector)
       throws IOException {


@@ -554,9 +554,9 @@ public class CheckHits {
       }
       @Override
-      public void search(Query query, Collector results) throws IOException {
+      public void search(Query query, Collector collector) throws IOException {
         checkExplanations(query);
-        super.search(query, results);
+        super.search(query, collector);
       }
       @Override