This improves testing of mismatched field numbers by
- improving `AssertingDocValuesProducer` to detect mismatched field numbers,
- introducing a `MismatchedCodecReader` to actually test mismatched field
numbers on `DocValuesProducer` (a `MismatchedLeafReader` wrapping a
`SlowCodecReaderWrapper` doesn't work since `SlowCodecReaderWrapper` implicitly
resolves the correct `FieldInfo` object),
- introducing an explicit test for mismatched field numbers for doc values, points,
postings and knn vectors.
These new tests uncovered a bug when merging sorted doc values, which would
call the underlying doc values producer with the merged field info rather than
each segment's own field info.
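Roughly, the fix looks like this (a simplified sketch of the merge loop; variable names are illustrative):

```java
// Simplified sketch: when merging sorted doc values, each sub-reader must be
// asked for values with its own FieldInfo, looked up by name, since field
// numbers may differ across segments. The bug was passing mergedFieldInfo.
for (int i = 0; i < mergeState.docValuesProducers.length; i++) {
  DocValuesProducer producer = mergeState.docValuesProducers[i];
  if (producer != null) {
    FieldInfo readerFieldInfo = mergeState.fieldInfos[i].fieldInfo(mergedFieldInfo.name);
    if (readerFieldInfo != null && readerFieldInfo.getDocValuesType() == DocValuesType.SORTED) {
      SortedDocValues values = producer.getSorted(readerFieldInfo); // not mergedFieldInfo
      // ... feed values into the merged view ...
    }
  }
}
```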
Closes #13805
TermInSetQuery used to have an accessor to its terms that was removed in #12173
to avoid leaking internal encoding details. This introduces an accessor to the
term data in the query that doesn't expose internals but merely allows iterating
over the decoded BytesRef instances, making inspection of the query's content possible again.
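Usage would look roughly like this (the accessor name below is an assumption based on this description, not a confirmed signature):

```java
// Hypothetical usage: iterate the decoded terms without seeing the internal encoding.
TermInSetQuery query = new TermInSetQuery("category",
    List.of(new BytesRef("books"), new BytesRef("music")));
BytesRefIterator terms = query.getBytesRefIterator(); // assumed accessor name
for (BytesRef term = terms.next(); term != null; term = terms.next()) { // next() may throw IOException
  System.out.println(term.utf8ToString());
}
```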
Closes #13804
After adjusting tests that truly exercise intra-merge parallelism, more issues have arisen. See: https://github.com/apache/lucene/issues/13798
To be risk averse, and because Lucene 10 and 9.12 will soon be released and frozen, I am reverting all intra-merge parallelism, except for the parallelism when merging HNSW graphs.
Merging other structures was never really enabled in a release (we disabled it in a bugfix for Lucene 9.11). While this is frustrating, as it seems like we are leaving lots of performance on the floor, I am erring on the side of safety here.
In Lucene 10, we can work on incrementally re-enabling intra-merge parallelism.
Closes: https://github.com/apache/lucene/issues/13798
At the moment, our skip indexes record min/max ordinal/value per range
of doc IDs. It would be natural to extend them to other pre-aggregated
data such as a sum and a value count, which facets could take advantage
of. This change switches `docValuesSkipIndex` from a boolean to an enum
so that we could release such changes in the future in an additive
fashion, by adding constants to this enum and new methods to
`DocValuesSkipper`.
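As a sketch, assuming the enum is wired through `FieldType` roughly like this (setter and constant names inferred from this description):

```java
// Before: fieldType.setDocValuesSkipIndex(true);
// After: an enum, so richer pre-aggregated data can be added later.
FieldType fieldType = new FieldType();
fieldType.setDocValuesType(DocValuesType.SORTED_NUMERIC);
fieldType.setDocValuesSkipIndexType(DocValuesSkipIndexType.RANGE); // min/max per doc-ID range
fieldType.freeze();
```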
Noticed some visible allocations in CompetitiveImpactAccumulator
during benchmarking. This fixes the needless allocation of the comparator
in that class, as well as a couple of other similar spots where needless
classes and/or objects could easily be replaced by more lightweight
solutions.
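The pattern is roughly the following (illustrative, not the exact Lucene code):

```java
// Before: a fresh comparator allocated on every sort call.
// impacts.sort((a, b) -> Integer.compare(a.freq, b.freq));

// After: one shared, stateless comparator instance.
private static final Comparator<Impact> BY_FREQ =
    Comparator.comparingInt((Impact impact) -> impact.freq);
```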
We have recently (see #13735) introduced this utility method that creates a
collector manager which only works when a searcher does not have an executor
set; otherwise it throws an exception once we attempt to create a new collector
for more than one slice.
While we discussed that it should be safe to use in some specific scenarios like the
monitor module, we should be careful exposing this utility publicly: while we'd
like to ease migration from the search(Query, Collector) method, we may end up
making users' lives even worse, in that it exposes them to failures whenever an
executor is set and more than one slice is created, which is hard to follow and
does not provide a good user experience.
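To make the failure mode concrete, such a manager looks roughly like this (simplified sketch, not the exact #13735 code):

```java
// A manager that can only ever create one collector: fine without an executor
// (single slice), but it throws as soon as a second slice asks for a collector.
CollectorManager<TotalHitCountCollector, Integer> single = new CollectorManager<>() {
  private boolean created;

  @Override
  public TotalHitCountCollector newCollector() {
    if (created) {
      throw new IllegalStateException("This manager supports a single collector");
    }
    created = true;
    return new TotalHitCountCollector();
  }

  @Override
  public Integer reduce(Collection<TotalHitCountCollector> collectors) {
    return collectors.iterator().next().getTotalHits();
  }
};
```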
My proposal is that we use a similar collector manager locally, where safe and
required, but don't expose it to users. In most places, we should instead
expose collector managers that do support search concurrency, rather than working
around the lack of them.
4 and 7 bit quantization still work.
It's a bit tricky because 9.11 indices may have 8 bit compressed
vectors which are buggy at search time (and users may not realize it,
or may not be using them at search time). But the index is still
intact since we keep the original full float precision vectors. So,
users can force rewrite all their 9.11 written segments (or reindex
those docs), and can change to 4 or 7 bit quantization for newly
indexed documents. The 9.11 index is still usable.
(I added a couple test cases confirming that one can indeed change
their mind, indexing a given vector field first with 4 bit
quantization, then later (new IndexWriter / Codec) with 7 bit or with
no quantization.)
I added an explanation to MIGRATE.md.
Separately, I also tightened up the `compress` boolean to throw an
exception unless bits=4. Previously it silently ignored `compress=true`
for 7 and 8 bit quantization. I also tried to improve its javadocs a bit.
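The tightened check amounts to something like this (illustrative, not the exact constructor code):

```java
// compress=true is now rejected unless 4-bit quantization is in use; it used
// to be silently ignored for 7 and 8 bit quantization.
if (compress && bits != 4) {
  throw new IllegalArgumentException(
      "compress=true is only supported with bits=4, got bits=" + bits);
}
```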
Closes #13519.
With the introduction of intra-segment concurrency, we have introduced a new
protected search(LeafReaderContextPartition[], Weight, Collector) method. The
previous variant that accepts a list of leaf reader contexts was left deprecated
as there is one leftover usage coming from search(Query, Collector). The hope was
that the latter was going to be removed soon as well, but there is actually no
need to tie the two removals. It is easier to fold this method into its only
caller, in order for it to still bypass the collector manager based methods.
This way we fold two deprecated methods into a single one.
We have been encoding docBase and the score in MaxScoreAccumulator#accumulate.
That makes the assumption that segments are going to be processed in doc order
and implements global max score accounting across segments searched concurrently.
With the introduction of intra-segment concurrency, the same segment may be seen
multiple times, once per segment partition. Partitions all share the same
docBase, hence you may end up with topN results with higher docIds than expected,
because the search terminates early before docs with the same score and lower
doc ids are seen.
This commit encodes the docId in the accumulator in place of the docBase to resolve
the described issue.
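Schematically (a simplified sketch, not the exact Lucene source):

```java
// Pack the score into the high 32 bits and a doc identifier into the low 32 bits.
// The low bits used to hold docBase, which all partitions of a segment share;
// they now hold the global doc id.
static long encode(int docId, float score) {
  return (((long) Float.floatToIntBits(score)) << 32) | docId;
}

// Combine two encoded values: the higher score wins; on a score tie the
// smaller doc id wins, which is exactly what broke when all partitions of a
// segment reported the same docBase.
static long max(long v1, long v2) {
  int cmp = Float.compare(Float.intBitsToFloat((int) (v1 >>> 32)),
                          Float.intBitsToFloat((int) (v2 >>> 32)));
  if (cmp == 0) {
    return (int) v1 < (int) v2 ? v1 : v2; // tie: keep the smaller doc id
  }
  return cmp > 0 ? v1 : v2;
}
```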
Trivial commit that makes HnswLock final and LockedRow a record. This is general cleanup and helps the JIT a little when reasoning about these types, which show up quite a bit in indexing and search profiles.
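For illustration, the record form is roughly (component names here are assumptions, not copied from the source):

```java
// A record is implicitly final with final components, which is easier for the
// JIT to reason about than a mutable holder class.
record LockedRow(NeighborArray row, Lock lock) {}
```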
There are two tests where we use 250_000 as the number of collected hits, but we only
ever index at most 2000 docs. That makes us create a priority queue of size
250_000 for each segment partition, which causes out-of-memory errors when the
number of partitions is higher than a few.
With this commit I propose that we lower the threshold to 2000 for those tests
that need a high number of collected hits. The assumption that a priority queue
is not built within the LargeNumHitsTopDocsCollector still holds, so this change
should not defeat the purpose of the tests.
ProfilerCollector did not have a corresponding collector manager until now.
This commit introduces one and switches TestProfilerCollector to use search
concurrency, moving away from the deprecated search(Query, Collector) method.
Note that the collector manager does not support children collectors. Figuring
out a generic API for that is rather complicated. Users can always create
their own collector manager depending on their collector hierarchy.
Relates to #12892
This has caused a few recent test failures with JDK 23 and JDK 24. It should
get fixed by using the length of the array, rather than the new length.
During segment merge we must verify that a given field exists and has vectors. The typical knn format checks assume the per-field format is used and thus only check for `null`.
But we should check for field existence in the field info and verify it has dense vectors.
Additionally, this commit unifies how the knn formats work: they will now throw if a non-existing field is queried, except for the PerField format, which will return null (like the other per-field formats).
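The check amounts to something like this (simplified sketch):

```java
// Check existence in the merge FieldInfos and that the field actually has
// vectors, instead of relying only on a null check from the per-field format.
FieldInfo fieldInfo = mergeState.mergeFieldInfos.fieldInfo(fieldName);
if (fieldInfo == null || fieldInfo.hasVectorValues() == false) {
  throw new IllegalArgumentException("field=\"" + fieldName + "\" does not exist or has no vectors");
}
```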
@iverase has uncovered a potential issue with intra-merge CMS parallelism.
This commit helps expose this problem by forcing tests to use intra-merge parallelism instead of always (well, usually) delegating to a SameThreadExecutorService.
When intra-merge parallelism is used, norms, doc values, stored fields, etc. are all merged in a separate thread from the thread that was used to construct their merge-optimized instances.
This trips numerous assertions in AssertingCodec.assertThread, where we assume that the thread that called getMergeInstance() is also the thread getting the values to merge.
In addition to the better testing, this corrects poor merge state handling in the event of parallelism.
Lists are invariant, so the current API doesn't allow passing something like `List<CollectorManagerImplementation>` where `class CollectorManagerImplementation implements CollectorManager<A, B>`. Invariance means that `List<CollectorManagerImplementation>` is not a subtype of `List<CollectorManager<A, B>>`. Using a bounded wildcard type allows us to overcome that.
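To illustrate (generic names, not the exact Lucene signatures):

```java
class CollectorManagerImplementation implements CollectorManager<TotalHitCountCollector, Integer> {
  @Override
  public TotalHitCountCollector newCollector() {
    return new TotalHitCountCollector();
  }

  @Override
  public Integer reduce(Collection<TotalHitCountCollector> collectors) {
    return collectors.stream().mapToInt(TotalHitCountCollector::getTotalHits).sum();
  }
}

// Invariant parameter: rejects List<CollectorManagerImplementation>.
void searchOld(List<CollectorManager<TotalHitCountCollector, Integer>> managers) {}

// Bounded wildcard: accepts a List of any subtype of the manager interface.
void searchNew(List<? extends CollectorManager<TotalHitCountCollector, Integer>> managers) {}

List<CollectorManagerImplementation> managers = List.of(new CollectorManagerImplementation());
// searchOld(managers); // does not compile: incompatible types
searchNew(managers);    // compiles thanks to the bounded wildcard
```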
This commit is a micro-optimisation of the HNSW lock implementation that avoids integer and varargs boxing when determining the hash of the node and level. Trivially, we provide our own arity-specialised hash rather than the more generic java.util.Objects.hash(Object...).
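For two ints, an arity-specialised hash with the same result as `Objects.hash(node, level)` but no varargs array or `Integer` boxing looks like:

```java
// Objects.hash(node, level) == Arrays.hashCode(new Object[] {node, level})
// == 31 * (31 * 1 + node) + level, so this is bit-for-bit equivalent while
// avoiding the Object[] allocation and int boxing.
static int hash(int node, int level) {
  return 31 * (31 + node) + level;
}
```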