This change modernizes the BWC tests to leverage RandomizedRunner's parameterized tests, which give us more structured and hopefully more extensible BWC tests in the future. It doesn't add any new tests but makes the existing ones more structured and easier to grow down the road.
Basically, every index type gets its own test class, which removes the need to loop over all indices in each test. Each test case runs against all specified versions. Several sanity checks are applied in the base class to keep individual tests smaller and much easier to read.
Co-authored-by: Michael McCandless <lucene@mikemccandless.com>
Co-authored-by: Adrien Grand <jpountz@gmail.com>
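A minimal sketch of what such a parameterized test class could look like; the class name, the version list, and the test body are illustrative, not the actual BWC test code:

```java
import java.util.ArrayList;
import java.util.List;
import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
import org.apache.lucene.tests.util.LuceneTestCase;
import org.apache.lucene.util.Version;

// Hypothetical example: each test method runs once per version supplied by the factory.
public class TestBasicIndexBackwardsCompatibility extends LuceneTestCase {

  private final Version version;

  public TestBasicIndexBackwardsCompatibility(Version version) {
    this.version = version;
  }

  // RandomizedRunner invokes the constructor once per Object[] returned here,
  // so every test in the class repeats for every listed version.
  @ParametersFactory(argumentFormatting = "version=%s")
  public static Iterable<Object[]> versions() {
    List<Object[]> params = new ArrayList<>();
    // The real tests would enumerate all supported BWC versions; Version.LATEST is a stand-in.
    for (Version v : List.of(Version.LATEST)) {
      params.add(new Object[] {v});
    }
    return params;
  }

  public void testIndexIsReadable() throws Exception {
    // open the pre-built index for `version` and run sanity checks here
  }
}
```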
If the required clause doesn't contribute scores, which typically happens if
the required clause is a `FILTER` clause, then the minimum competitive score
can be propagated directly to the optional clause.
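For illustration, this is the kind of query shape the optimization targets; the field names and terms are made up:

```java
import java.io.IOException;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

class FilterPlusOptionalExample {
  static TopDocs topHits(IndexSearcher searcher) throws IOException {
    Query query = new BooleanQuery.Builder()
        .add(new TermQuery(new Term("tag", "lucene")), Occur.FILTER)   // required, does not score
        .add(new TermQuery(new Term("body", "search")), Occur.SHOULD)  // optional, produces the score
        .build();
    // When collecting top hits by score, the collector's minimum competitive score
    // can be pushed straight down to the SHOULD clause, letting it skip documents
    // that cannot make it into the top 10.
    return searcher.search(query, 10);
  }
}
```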
This change makes `PointRangeQuery` exit early when it knows that it doesn't
match a segment. In the case when this query is part of a conjunction, this
helps make sure `ScorerSupplier#get` doesn't get called on other required
clauses, which is sometimes an expensive operation (e.g. multi-term queries).
This is especially relevant for time-based data combined with
`LogByteSizeMergePolicy`, which gives each segment its own non-overlapping range
of timestamps, which in turn increases the likelihood that some segments won't
match a range query at all.
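As an illustration, consider a conjunction of a timestamp range with an expensive multi-term query; the field names and wildcard pattern below are made up:

```java
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.WildcardQuery;

class TimeRangeConjunctionExample {
  // "timestamp" is assumed to be indexed as a LongPoint.
  static Query lastDayWithTimeouts(long nowMillis) {
    return new BooleanQuery.Builder()
        // If a segment's indexed timestamps don't intersect this range, the range
        // query now reports up front that it matches nothing, so the expensive
        // wildcard clause never has to build its scorer for that segment.
        .add(LongPoint.newRangeQuery("timestamp", nowMillis - 86_400_000L, nowMillis), Occur.MUST)
        .add(new WildcardQuery(new Term("message", "*timeout*")), Occur.MUST)
        .build();
  }
}
```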
`boxPolygon` and `trianglePolygon` only used the minX and minY values of the given `XYRectangle`, which resulted in polygons whose points all sit in a single place. With this change, both methods generate correct polygons.
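A minimal sketch of the corrected idea (not the actual test-utility code): a box polygon should use all four corners of the rectangle rather than repeating (minX, minY):

```java
import org.apache.lucene.geo.XYPolygon;
import org.apache.lucene.geo.XYRectangle;

class BoxPolygonSketch {
  // Builds a closed ring from the rectangle's four corners, finishing back at the start point.
  static XYPolygon boxPolygon(XYRectangle rect) {
    float[] x = new float[] {rect.minX, rect.maxX, rect.maxX, rect.minX, rect.minX};
    float[] y = new float[] {rect.minY, rect.minY, rect.maxY, rect.maxY, rect.minY};
    return new XYPolygon(x, y);
  }
}
```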
When merging `Lucene99HnswScalarQuantizedVectorsFormat`, an NPE is possible when deleted documents are present.
`ScalarQuantizer#fromVectors` doesn't take deleted documents into account, so `FloatVectorValues#size` may be larger than the actual number of live documents. Consequently, the sampling loop can iterate too far and throw an NPE.
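A simplified sketch of the idea behind the fix (not the actual `ScalarQuantizer` code): drive sampling off the vector iterator itself rather than off `size()`, so deleted documents can't push the loop past the end:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.FloatVectorValues;
import org.apache.lucene.search.DocIdSetIterator;

class LiveVectorSampling {
  // Collect up to maxSamples vectors, stopping when the iterator is exhausted,
  // even if FloatVectorValues#size() promised more (it can over-count when docs are deleted).
  static List<float[]> sampleLiveVectors(FloatVectorValues values, int maxSamples) throws IOException {
    List<float[]> samples = new ArrayList<>();
    for (int doc = values.nextDoc();
        doc != DocIdSetIterator.NO_MORE_DOCS && samples.size() < maxSamples;
        doc = values.nextDoc()) {
      // vectorValue() may reuse its array across iterations, so take a copy.
      samples.add(values.vectorValue().clone());
    }
    return samples;
  }
}
```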
Split taxonomy arrays across chunks
Taxonomy ordinals are added in an append-only way.
Instead of reallocating a single big array when loading new taxonomy
ordinals and copying all the values from the previous arrays over
individually, we can keep blocks of ordinals and reuse blocks from the
previous arrays.
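A hypothetical illustration of the chunking idea (not the actual taxonomy code): ordinals live in fixed-size blocks, so growing the structure reuses the previous blocks and only copies the last, partially-filled one:

```java
import java.util.ArrayList;
import java.util.List;

class ChunkedIntArray {
  private static final int CHUNK_SIZE = 8192;
  private final List<int[]> chunks;
  private int size;

  ChunkedIntArray() {
    this.chunks = new ArrayList<>();
  }

  // Grow from a previous snapshot: full chunks are shared as-is; only a
  // partially-filled last chunk is copied so new appends don't leak into the old view.
  ChunkedIntArray(ChunkedIntArray previous) {
    this.chunks = new ArrayList<>(previous.chunks);
    this.size = previous.size;
    if (!chunks.isEmpty() && size % CHUNK_SIZE != 0) {
      int last = chunks.size() - 1;
      chunks.set(last, chunks.get(last).clone());
    }
  }

  // Append-only, matching how taxonomy ordinals are added.
  void append(int ordinal) {
    if (size % CHUNK_SIZE == 0) {
      chunks.add(new int[CHUNK_SIZE]);
    }
    chunks.get(size / CHUNK_SIZE)[size % CHUNK_SIZE] = ordinal;
    size++;
  }

  int get(int index) {
    return chunks.get(index / CHUNK_SIZE)[index % CHUNK_SIZE];
  }
}
```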
This change runs the current BWC index generation code together
with the unit tests to catch issues with the generated indices early.
Each generation method runs a sanity check on the generated indices.
The changes in #12929 broke the generation code for BWC indices,
since it expects certain fields created by `LineFileDocs`.
In addition, this change adds sanity checks that run with the unit tests to ensure
the generated BWC indices are at least readable by the current version.
Relates to #12929
This change prevents users from adding a parent field to an existing index.
The parent field must be configured before any documents are added to the index;
otherwise, documents indexed without the parent field could later be treated as
child documents upon merge.
Relates to #12829
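A minimal sketch of the intended usage, assuming `IndexWriterConfig#setParentField` is the API referenced here; the field name is illustrative:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

class ParentFieldSetup {
  static void createIndexWithParentField() throws Exception {
    Directory dir = new ByteBuffersDirectory();
    IndexWriterConfig config = new IndexWriterConfig();
    // Configure the parent field before the first document goes in; with this
    // change, enabling it on an index that already contains documents is rejected.
    config.setParentField("_parent");
    try (IndexWriter writer = new IndexWriter(dir, config)) {
      writer.addDocument(new Document());
    }
  }
}
```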
Before this change, `DocumentsWriterPerThread#getAndLock` could sometimes
return `null` even though the queue was never empty. The
practical implication is that we can end up with more DWPTs in memory than
indexing threads, which, while not strictly a bug, may require doing more
merging than we'd like later on.
I ran luceneutil's `IndexGeonames` with this change, and
`DocumentsWriterPerThread#getAndLock` was not the main source of
contention.
Closes #12649, #12916