While experimenting with brute-force search, I noticed that our visitation limit is exactly the number of filtered docs. Consequently, if the search visits exactly that number of vectors, we fall back and run brute-force search a second time, doing the work twice. This struck me as weird.
This commit adjusts the visit limit threshold for approximate search to account for this.
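A minimal sketch of the off-by-one, with hypothetical names (this is an illustration of the idea, not Lucene's actual code):

```java
class VisitLimitSketch {
    // Approximate search falls back to brute force once it has visited
    // `visitLimit` vectors. With visitLimit set to the filtered doc count,
    // visiting exactly that many vectors (already brute-force cost) triggers
    // a redundant second brute-force pass.
    static boolean fallBackToBruteForce(int visited, int visitLimit) {
        return visited >= visitLimit;
    }

    // Hypothetical adjustment: let approximate search visit the full
    // filtered set without tripping the fallback.
    static int adjustedVisitLimit(int filteredDocCount) {
        return filteredDocCount + 1;
    }

    public static void main(String[] args) {
        int filtered = 1000;
        // old: limit == filtered doc count -> redundant brute-force pass
        System.out.println(fallBackToBruteForce(filtered, filtered)); // true
        // adjusted: visiting the whole filtered set no longer falls back
        System.out.println(fallBackToBruteForce(filtered, adjustedVisitLimit(filtered))); // false
    }
}
```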
Speed up concurrent multi-segment HNSW graph search by exchanging
the global top candidates collected so far across segments. These global top
candidates set the minimum threshold that new candidates need to pass
to be considered. This allows earlier stopping for segments that don't have
good candidates.
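The exchange can be sketched as a shared, lock-free "minimum competitive score" that per-segment searcher threads read and raise. The class and method names below are hypothetical, not Lucene's actual API:

```java
import java.util.concurrent.atomic.AtomicInteger;

class GlobalMinScoreSketch {
    // Worst score of the global top-k collected so far, shared across the
    // per-segment searcher threads. Stored as raw float bits for atomic CAS.
    private final AtomicInteger minCompetitiveBits =
            new AtomicInteger(Float.floatToIntBits(Float.NEGATIVE_INFINITY));

    float minCompetitive() {
        return Float.intBitsToFloat(minCompetitiveBits.get());
    }

    // Called when a segment's local top-k fills up: publish its worst score
    // if it raises the global threshold (lock-free max update).
    void publish(float worstTopKScore) {
        int cur;
        while (worstTopKScore > Float.intBitsToFloat(cur = minCompetitiveBits.get())) {
            if (minCompetitiveBits.compareAndSet(cur, Float.floatToIntBits(worstTopKScore))) {
                return;
            }
        }
    }

    // A segment stops expanding candidates that cannot enter the global top-k.
    boolean competitive(float candidateScore) {
        return candidateScore > minCompetitive();
    }

    public static void main(String[] args) {
        GlobalMinScoreSketch shared = new GlobalMinScoreSketch();
        shared.publish(0.8f); // segment A's top-k is full, worst score 0.8
        System.out.println(shared.competitive(0.9f)); // true: worth exploring
        System.out.println(shared.competitive(0.5f)); // false: segment B can stop early
    }
}
```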
This change modernizes the BWC tests to leverage RandomizedRunner's parameterized tests, which allow us to have more structured and hopefully more extensible BWC tests in the future. This change doesn't add any new tests but makes the existing ones more structured and better able to support growth down the road.
Basically, every index type gets its own test class, so tests no longer need to loop over all the indices. Each test case is run against all specified versions. Several sanity checks are applied in the base class to keep individual tests smaller and much easier to read.
Co-authored-by: Michael McCandless <lucene@mikemccandless.com>
Co-authored-by: Adrien Grand <jpountz@gmail.com>
If the required clause doesn't contribute scores, which typically happens if
the required clause is a `FILTER` clause, then the minimum competitive score
can be propagated directly to the optional clause.
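The propagation rule can be sketched as follows; the method name is illustrative, not Lucene's actual code. In a conjunction, a hit's score is the sum of the scoring clauses, so when the required clause contributes nothing, the collector's threshold carries over to the optional clause unchanged instead of being relaxed by the required clause's maximum possible score:

```java
class MinScorePropagationSketch {
    // If the required clause doesn't score (e.g. a FILTER clause), its score
    // contribution is 0, so the minimum competitive score applies directly to
    // the optional clause. Otherwise the optional clause only needs to reach
    // the threshold minus the required clause's maximum possible score.
    static float minScoreForOptionalClause(
            float minCompetitiveScore, boolean requiredClauseScores, float requiredMaxScore) {
        return requiredClauseScores
                ? minCompetitiveScore - requiredMaxScore
                : minCompetitiveScore;
    }

    public static void main(String[] args) {
        // FILTER clause: propagate the threshold directly
        System.out.println(minScoreForOptionalClause(1.5f, false, 0.4f)); // 1.5
        // Scoring required clause: relax by its max score
        System.out.println(minScoreForOptionalClause(1.5f, true, 0.4f));
    }
}
```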
This change makes `PointRangeQuery` exit early when it knows that it doesn't
match a segment. In the case when this query is part of a conjunction, this
helps make sure `ScorerSupplier#get` doesn't get called on other required
clauses, which is sometimes an expensive operation (e.g. multi-term queries).
This is especially relevant for time-based data combined with
`LogByteSizePolicy`, which gives non-adjacent ranges of timestamps to each
segment, which in-turn increases the likelihood that some segments won't match
a range query at all.
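The early-exit check amounts to a range-overlap test against the segment-level min/max values stored in the points index. A simplified one-dimensional sketch (names are illustrative, not Lucene's):

```java
class RangePruneSketch {
    // Before building any scorer, the per-segment min/max packed values tell
    // us whether the query range can match this segment at all.
    static boolean segmentCanMatch(long queryLower, long queryUpper,
                                   long segmentMin, long segmentMax) {
        return queryUpper >= segmentMin && queryLower <= segmentMax;
    }

    public static void main(String[] args) {
        // Time-partitioned segments end up with non-adjacent timestamp ranges:
        long segMin = 2_000L, segMax = 2_999L;
        // Query range entirely after the segment: skip the segment, and never
        // call ScorerSupplier#get on the other required clauses.
        System.out.println(segmentCanMatch(3_000L, 4_000L, segMin, segMax)); // false
        System.out.println(segmentCanMatch(2_500L, 4_000L, segMin, segMax)); // true
    }
}
```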
boxPolygon and trianglePolygon only used the minX and minY values of the given XYRectangle, which resulted in a polygon whose points all sit in a single place. With this change, both methods generate correct polygons.
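A hypothetical sketch of the corrected box construction (returning raw coordinate arrays rather than Lucene's actual XYPolygon type): all four corners of the rectangle are used, as a closed ring with the first point repeated last, instead of repeating (minX, minY) for every vertex.

```java
import java.util.Arrays;

class BoxPolygonSketch {
    // Build a rectangle ring from its four distinct corners.
    static float[][] boxPolygon(float minX, float maxX, float minY, float maxY) {
        float[] xs = {minX, maxX, maxX, minX, minX};
        float[] ys = {minY, minY, maxY, maxY, minY};
        return new float[][] {xs, ys};
    }

    public static void main(String[] args) {
        float[][] ring = boxPolygon(0f, 2f, 0f, 3f);
        System.out.println(Arrays.toString(ring[0])); // x coordinates of the ring
        System.out.println(Arrays.toString(ring[1])); // y coordinates of the ring
    }
}
```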
When merging `Lucene99HnswScalarQuantizedVectorsFormat`, an NPE is possible when deleted documents are present.
`ScalarQuantizer#fromVectors` doesn't take deleted documents into account. This means `FloatVectorValues#size` may be larger than the actual number of live documents. Consequently, the sampling loop can iterate too far and throw an NPE.
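A simplified sketch of the fix, assuming live docs are tracked as a bit set (the names and types here are illustrative, not Lucene's): bound the sampling loop by the live docs actually present rather than by the total vector count, which includes deleted docs.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

class LiveDocSamplingSketch {
    // Iterate only the live docs; never index past the last live vector.
    static List<float[]> sampleLiveVectors(float[][] vectors, BitSet liveDocs, int sampleSize) {
        List<float[]> sample = new ArrayList<>();
        for (int doc = liveDocs.nextSetBit(0);
             doc >= 0 && sample.size() < sampleSize;
             doc = liveDocs.nextSetBit(doc + 1)) {
            sample.add(vectors[doc]);
        }
        return sample;
    }

    public static void main(String[] args) {
        float[][] vectors = {{1f}, {2f}, {3f}, {4f}};
        BitSet liveDocs = new BitSet();
        liveDocs.set(0);
        liveDocs.set(2); // docs 1 and 3 are deleted
        // Looping `vectors.length` times (the total size) would sample deleted
        // docs; iterating live docs yields exactly the 2 live vectors.
        System.out.println(sampleLiveVectors(vectors, liveDocs, 10).size()); // 2
    }
}
```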