This commit updates the MemorySegment scorer so that it ensures that the values instance is of the correct type.
The offset calculations for vectors in RandomAccessQuantizedByteVectorValues differ from those for non-quantized values. We can generalise the implementation for quantized vectors later, but for now, passing a quantized values instance indicates a bug in wrapping or delegation.
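A minimal sketch of the kind of guard this implies (the helper name is hypothetical, not the actual code):
```java
// Sketch of the guard: quantized values compute their on-disk offsets
// differently, so reaching this scorer with one means something upstream
// wrapped or delegated incorrectly.
static void checkNotQuantized(RandomAccessVectorValues values) {
  if (values instanceof RandomAccessQuantizedByteVectorValues) {
    throw new IllegalArgumentException("quantized values are not supported here: " + values);
  }
}
```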
* Remove hppc dependency
* Change fork version to 0.10.0
* Add @lucene.internal
* Move hppc classes to oal.internal.hppc but export it.
* Delete hppc license since it's no longer a dependency.
---------
Co-authored-by: Dawid Weiss <dawid.weiss@carrotsearch.com>
This relates to #13359: we want to take advantage of the `Weight#scorerSupplier` call to start scheduling some I/O in the background in parallel across clauses. For this to work properly with top-level disjunctions, we need to move `#bulkScorer()` from `Weight` to `ScorerSupplier` as well, so that the disjunctive `BooleanQuery` first performs a call to `Weight#scorerSupplier()` on all inner clauses, and then `ScorerSupplier#bulkScorer` on all inner clauses.
`ScorerSupplier#get` and `ScorerSupplier#bulkScorer` only support being called once. This forced me to fix some inefficiencies in `bulkScorer()` implementations where we would pull scorers and then throw them away upon realizing that the strategy we were planning on using was not optimal. This is why e.g. `ReqExclBulkScorer` now also supports prohibited clauses that produce a two-phase iterator.
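A rough sketch of the calling pattern this enables for a top-level disjunction (illustrative only, not the actual `BooleanQuery` code):
```java
// First obtain a ScorerSupplier for every clause (which may start scheduling
// I/O in the background), and only then ask each supplier for its bulk scorer.
List<ScorerSupplier> suppliers = new ArrayList<>();
for (Weight clause : clauseWeights) {
  ScorerSupplier supplier = clause.scorerSupplier(context);
  if (supplier != null) {
    suppliers.add(supplier);
  }
}
List<BulkScorer> bulkScorers = new ArrayList<>();
for (ScorerSupplier supplier : suppliers) {
  bulkScorers.add(supplier.bulkScorer()); // may only be called once per supplier
}
```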
* avoid WrapperDownloader if we already have the JAR
* don't specify --source
It is more specific than needed, and some JDKs/configurations may complain about an incompatibility with --release.
From https://github.com/apache/solr/pull/2419
While doing an unrelated refactoring, I got hit by this unchecked cast, which
is incorrect when the presearcher query produces some specialized `BulkScorer`.
Add a MemorySegment Vector scorer - for scoring without copying on-heap.
The vector scorer loads values directly from the backing memory segment when available. Otherwise, if the vector data spans across segments, the scorer copies the vector data on-heap.
A benchmark shows ~2x performance improvement of this scorer over the default copy-on-heap scorer.
The scorer currently only operates on vectors with an element size of byte. We can evaluate if and how to support floats separately.
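A hedged sketch of the two code paths (the slice helper is hypothetical):
```java
// Score directly off the backing MemorySegment when the vector lies within a
// single segment; otherwise copy the bytes on-heap first.
MemorySegment vectorSegment(long offset, int length, byte[] scratch) throws IOException {
  MemorySegment slice = input.segmentSliceOrNull(offset, length); // hypothetical helper
  if (slice != null) {
    return slice;                      // no on-heap copy
  }
  input.seek(offset);
  input.readBytes(scratch, 0, length); // vector spans segments: copy on-heap
  return MemorySegment.ofArray(scratch);
}
```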
Add LongObjectHashMap and replace Map<Long, Object>.
Add LongIntHashMap and replace Map<Long, Integer>.
Add HPPC dependency to join and spatial modules for primitive values float and double.
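For illustration, typical usage of one of the primitive maps, with no boxing of keys or values:
```java
// Usage sketch of the relocated class (org.apache.lucene.internal.hppc):
LongIntHashMap ordToCount = new LongIntHashMap();
ordToCount.put(42L, 7);            // primitive long key, primitive int value
int count = ordToCount.get(42L);   // no Long/Integer boxing on hot paths
```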
It sums up max scores in a float when it should sum them up in a double like we
do for `Scorer#score()`. Otherwise, max scores may be returned that are less
than actual scores.
This bug was introduced in #13343, so it is not released yet.
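A sketch of the intended accumulation, assuming an `optionalScorers` list, an `upTo` bound, and the existing `MathUtil#sumUpperBound` helper (not necessarily the exact diff):
```java
// Sum per-clause max scores in a double, as Scorer#score() does, then convert
// back to float rounding up, so the result is never below an actual score.
double maxScoreSum = 0;
for (Scorer scorer : optionalScorers) {
  maxScoreSum += scorer.getMaxScore(upTo);
}
float maxScore = (float) MathUtil.sumUpperBound(maxScoreSum, optionalScorers.size());
```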
Closes #13371, closes #13396
Parsers may sometimes want to create an IntervalsSource that returns no
intervals. This adds a new factory method to `Intervals` that will create one,
and changes `IntervalBuilder` to use it in place of its custom empty intervals
source.
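For example, assuming the new factory is exposed as `Intervals.noIntervals`, a parser can do:
```java
// Hypothetical usage: return a source that produces no intervals, with a
// reason string kept for debugging/toString purposes.
IntervalsSource empty = Intervals.noIntervals("no terms after analysis");
```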
As Robert pointed out and benchmarks confirmed, there is some (small) overhead
to calling `madvise` via the foreign function API, benchmarks suggest it is in
the order of 1-2us. This is not much for a single call, but may become
non-negligible across many calls. Until now, we only looked into using
prefetch() for terms, skip data and postings start pointers, which require a single
prefetch() operation per segment per term.
But we may want to start using it in cases that could result in more calls to
`madvise`, e.g. if we start using it for stored fields and a user requests 10k
documents. In #13337, Robert wondered if we could take advantage of `mincore()`
to reduce the overhead of `IndexInput#prefetch()`, which is what this PR is
doing via `MemorySegment#isLoaded()`.
`IndexInput#prefetch` tracks consecutive hits on the page cache and calls
`madvise` less and less frequently under the hood as the number of consecutive
cache hits increases.
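A loose sketch of the idea (field and helper names are hypothetical, and the real backoff is more gradual than a single threshold):
```java
// Hypothetical fields: the backing segment, a hit counter, and a skip threshold.
private final MemorySegment segment;
private int consecutiveCacheHits;
private static final int SKIP_THRESHOLD = 4; // illustrative value

void prefetch(long offset, long length) throws IOException {
  MemorySegment slice = segment.asSlice(offset, length);
  if (slice.isLoaded()) {              // mincore-style check: pages already resident
    if (++consecutiveCacheHits >= SKIP_THRESHOLD) {
      return;                          // likely cached, skip the madvise syscall
    }
  } else {
    consecutiveCacheHits = 0;          // miss: go back to advising on every call
  }
  madvise(slice);                      // hypothetical helper wrapping posix_madvise
}
```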
Considering that the graphs of two indices are organized differently, we need to explore a lot of candidates to ensure that both searchers find the same docs. Increasing beamWidth (the number of nearest-neighbor candidates to track while searching the graph for each newly inserted node) from 5 to 10 fixes the test.
The exception happens because the tail postings list block, which is encoded with GroupVInt, had a docID delta that was >= 1<<30 when the postings also store freqs.
This uses the `IndexInput#prefetch` API for postings. This relies on heuristics, as we don't know ahead of time what data we will need from a postings list:
- Postings lists are prefetched entirely when they are short (< 16kB).
- Impacts enums also prefetch the first page of skip data.
- Postings enums prefetch skip data on the first call to advance().
Positions, offsets and payloads are never prefetched.
Putting the `IndexInput#prefetch` call in `TermsEnum#postings` and `TermsEnum#impacts` works well because `BooleanQuery` will first create postings/impacts enums for all clauses before it starts unioning/intersecting them. This allows the prefetching logic to run in parallel across all clauses of the same query on the same segment.
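A simplified sketch of where the call sits (field and helper names here are made up):
```java
// Illustrative only: schedule I/O for the start of the postings list when the
// enum is created, before BooleanQuery begins intersecting or unioning clauses.
@Override
public PostingsEnum postings(PostingsEnum reuse, int flags) throws IOException {
  long start = termState.docStartFP;                          // hypothetical field
  long toPrefetch = Math.min(termState.docLength, 16 * 1024); // short lists: prefetch entirely
  docIn.prefetch(start, toPrefetch);
  return newPostingsEnum(termState, flags);                   // hypothetical factory
}
```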
This commit fixes a corner case in the ScalarQuantizer when just a single vector is present. I ran into this when updating a test that previously passed successfully with Lucene 9.10 but fails in 9.x.
The score error correction is calculated to be NaN, as there are no score docs or variance.
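A minimal sketch of the kind of guard this implies (not necessarily the exact fix, and the variable name is hypothetical):
```java
// With a single vector there are no score docs and no variance, so the
// computed correction comes out as NaN; fall back to zero correction instead.
float errorCorrection = Float.isNaN(rawErrorCorrection) ? 0f : rawErrorCorrection;
```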
This commit updates the writer to handle the case where there are no values.
Previously (before #13369), there was a check that there were some point values before trying to write; this is no longer the case. The code in writeFieldNDims assumes that the values instance is not empty - an empty values will result in calculating a negative number of splits, and a negative array size to hold the splits.
The fix is trivial: return null when values is empty - null is an allowable return value from this method. Note: writeField1Dim is able to handle an empty values.
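The shape of the fix, roughly:
```java
// Sketch of the early return inside writeFieldNDims: an empty values instance
// would otherwise yield a negative split count and a negative array size.
if (values.size() == 0) {
  return null; // null is an allowed return value for this method
}
```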
The issue outlines the problem. When we have point value dimensions, segment core readers assume that there will be point files.
However, when soft deletes are allowed and a document fails indexing before a point field could be written, this assumption breaks. Consequently, the NRT reader fails to open. I settled on always flushing a point file if the field info says there are point fields, even if there aren't any docs in the buffer.
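Conceptually (the actual change lives in the indexing chain and may look different):
```java
// Flush a points file whenever the field declares point dimensions, even if no
// buffered document actually contributed point values.
if (fieldInfo.getPointDimensionCount() > 0) {
  flushPointsForField(fieldInfo); // hypothetical helper for the indexing chain
}
```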
Closes #13353
This makes `IndexInput#prefetch` take an offset instead of being relative to
the current position. This avoids requiring callers to seek only to call
`prefetch()`.
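In practice this turns a seek-then-prefetch pair into a single absolute call, roughly:
```java
// Sketch: the call is now addressed absolutely, so callers no longer have to
// reposition the input just to schedule read-ahead.
in.prefetch(offset, length); // previously: seek(offset) followed by a relative prefetch
```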
Follow up to: #13181
I noticed the quantized interface had a slightly different name.
Additionally, testing showed we are inconsistent when there aren't any vectors to score. This makes the response consistent (e.g. null when there aren't any vectors).
Depending on how we quantize and then scale, we can edge down below 0 for dot product scores.
This is exceptionally rare, I have only seen it in extreme circumstances in tests (with random data and low dimensionality).
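A minimal sketch of the guard (the actual scorer may normalize differently):
```java
// Quantization plus scaling can, very rarely, nudge a dot-product-derived
// score slightly below zero; clamp so scores stay non-negative.
return Math.max(0f, score);
```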
This commit fixes an issue in the default flat vector scorer supplier whereby subsequent scorers created by the supplier can affect previously created scorers.
The issue is that we're sharing the backing array from the vector values, and overwriting it in subsequent scorers. We just need to use the ordinal to protect the scorer instance from mutation.
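Roughly, the fix amounts to the following (names simplified):
```java
// Copy the query vector for this ordinal instead of holding the values'
// shared scratch array, which the next scorer created by the supplier
// would overwrite.
byte[] queryCopy = Arrays.copyOf(values.vectorValue(ord), values.dimension());
```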