While working on the quantization codec and thinking about how merging will evolve, it became clear that attaching merging directly to the vector writer is awkward.
I extracted it out to its own class and removed the "initializedNodes" logic from the base class builder.
Also, there was one other refactoring around grabbing sorted nodes from the neighbor iterator: I moved that static method so it's not attached to the writer (all BWC writers need it, and all future HNSW writers will as well).
When we initially introduced support for dynamic pruning, we had an
implementation of WAND that would almost exclusively use `advance()`. Now that
we switched to MAXSCORE and rely much more on `nextDoc()`, it makes sense to
specialize `nextDoc()` as well.
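As a rough, self-contained sketch of why the specialization matters (toy names, not the real postings code): a fallback `nextDoc()` implemented on top of `advance()` pays the skip machinery's overhead on every single step, while a dedicated `nextDoc()` can just step to the next posting.

```java
// Toy cursor over a sorted doc-ID list (illustrative only; not Lucene's
// actual postings implementation).
class ToyPostingsCursor {
  static final int NO_MORE_DOCS = Integer.MAX_VALUE;
  private final int[] docs;
  private int idx = -1;

  ToyPostingsCursor(int[] docs) {
    this.docs = docs;
  }

  int docID() {
    return idx < 0 ? -1 : (idx < docs.length ? docs[idx] : NO_MORE_DOCS);
  }

  // Specialized nextDoc(): a single O(1) step forward, which is what
  // MAXSCORE's dense iteration over essential clauses mostly needs.
  int nextDoc() {
    idx++;
    return docID();
  }

  // advance(): supports jumping past many docs, which is what a
  // WAND-style scorer leans on, but is overkill for stepping one doc.
  int advance(int target) {
    while (idx + 1 < docs.length && docs[idx + 1] < target) {
      idx++;
    }
    return nextDoc();
  }
}
```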
This changes the following:
- fewer docs indexed in non-nightly runs,
- `QueryUtils#checkFirstSkipTo` uses the `ScorerSupplier` API to convey it
will only check one doc,
- `QueryUtils#checkFirstSkipTo` no longer relies on timing to run in a
reasonable amount of time.
This test sometimes fails because `SimpleText` has a non-deterministic size for
its segment info file, due to escape characters. The test now enforces the
default codec, and checks that segments have the expected size before moving
forward with forcemerge().
Closes #12648
The code was written as if frequencies should be decoded lazily, except that
when refilling buffers, freqs were eagerly decoded instead.
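A minimal sketch of the intended pattern (invented names, not the actual postings reader): refilling the buffer should only mark the frequencies as undecoded; the decode itself happens on the first `freq()` call after the refill, so it is skipped entirely when no one asks for freqs.

```java
// Toy model of lazy frequency decoding. decodeCount tracks how much
// decode work actually happened, to contrast eager vs. lazy behavior.
class LazyFreqBuffer {
  private final int[] encodedFreqs;
  private int[] decodedFreqs;  // populated on first freq() after a refill
  private boolean freqsDecoded;
  private int decodeCount;

  LazyFreqBuffer(int[] encodedFreqs) {
    this.encodedFreqs = encodedFreqs;
  }

  // Refill only resets state; the bug was doing the decode eagerly here.
  void refill() {
    freqsDecoded = false;
  }

  int freq(int index) {
    if (!freqsDecoded) {  // decode lazily, at most once per refill
      decodedFreqs = encodedFreqs.clone();  // stand-in for a real decode
      freqsDecoded = true;
      decodeCount++;
    }
    return decodedFreqs[index];
  }

  int decodeCount() {
    return decodeCount;
  }
}
```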
### Description
This PR addresses issue #12394. It adds an API **`similarityToQueryVector`** to `DoubleValuesSource` to compute vector similarity scores between the query vector and a document's `KnnByteVectorField`/`KnnFloatVectorField`, using the two new DVS implementations (`ByteVectorSimilarityValuesSource` for byte vectors and `FloatVectorSimilarityValuesSource` for float vectors). Below are the method signatures added to DVS in this PR:
- `DoubleValues similarityToQueryVector(LeafReaderContext ctx, float[] queryVector, String vectorField)` *(uses FloatVectorSimilarityValuesSource)*
- `DoubleValues similarityToQueryVector(LeafReaderContext ctx, byte[] queryVector, String vectorField)` *(uses ByteVectorSimilarityValuesSource)*
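Usage would look roughly like the sketch below; `leafCtx`, `docId`, and the `"vector"` field name are placeholders for a reader and field set up elsewhere, not part of this PR.

```java
// Hedged usage sketch of the new API: score one document of one leaf
// against a float query vector. "vector" is a KnnFloatVectorField
// indexed elsewhere; leafCtx is one LeafReaderContext of the reader.
float[] queryVector = new float[] {0.12f, 0.34f, 0.56f};
DoubleValues scores =
    DoubleValuesSource.similarityToQueryVector(leafCtx, queryVector, "vector");
if (scores.advanceExact(docId)) {        // position on the document
  double score = scores.doubleValue();   // similarity for that document
}
```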
Closes #12394
DocumentsWriter had some duplicate logic for iterating over
segments to be flushed. This change simplifies some of the loops
and moves common code into one place. This also adds tests to ensure
we actually freeze and apply deletes on segment flush.
Relates to #12572
While going through https://github.com/apache/lucene/pull/12582,
I noticed that our offheap vector readers haven't changed in a long time. We just keep copying them around for no reason.
To make adding a new vector codec simpler, this refactors the lucene95 codec to allow its offheap vector storage format (readers/writers) to be used.
Additionally, it will handle reading and writing the appropriate fields for sparse vectors to the provided index inputs/outputs.
This should reduce the churn in new codecs significantly.
Currently FSTCompiler and FST have circular dependencies on each
other. FSTCompiler creates an instance of FST, and on adding a node
(`add(IntsRef input, T output)`), it delegates to FST.addNode() and passes
itself as a parameter. This creates a circular dependency and mixes up
the FST construction and traversal code.
To make matters worse, this implies one can call FST.addNode with an
arbitrary FSTCompiler (as it's a parameter), but in reality it should be
the compiler which creates the FST.
This commit moves the addNode method to FSTCompiler instead.
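The shape of the change, in a hedged toy form (simplified names, nothing like the real FST internals): the mutating method moves onto the class that owns the structure, so it can no longer be invoked with an arbitrary compiler.

```java
// Toy illustration of the refactor: before, the structure exposed
// addNode(compiler, ...) that accepted any compiler; after, addNode
// lives on the compiler that created the structure.
class ToyFst {
  final StringBuilder nodes = new StringBuilder();  // stand-in for node storage
}

class ToyFstCompiler {
  private final ToyFst fst = new ToyFst();  // the compiler creates the FST it builds

  // addNode is now only reachable through the owning compiler.
  void addNode(String label) {
    fst.nodes.append(label).append(';');
  }

  ToyFst compile() {
    return fst;
  }
}
```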
Co-authored-by: Anh Dung Bui <buidun@amazon.com>
e.g. by the user responding with ^D
```
Press (n)ext page, (q)uit or enter number to jump to a page.
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "String.length()" because "line" is null
at org.apache.lucene.demo.SearchFiles.doPagingSearch(SearchFiles.java:244)
at org.apache.lucene.demo.SearchFiles.main(SearchFiles.java:152)
```
```
Press (p)revious page, (n)ext page, (q)uit or enter number to jump to a page.
n
Only results 1 - 50 of 104 total matching documents collected.
Collect more (y/n) ?
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "String.length()" because "line" is null
at org.apache.lucene.demo.SearchFiles.doPagingSearch(SearchFiles.java:198)
at org.apache.lucene.demo.SearchFiles.main(SearchFiles.java:152)
```
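The underlying cause is that `BufferedReader.readLine()` returns `null` at end of input (e.g. on ^D), and the demo used the result without checking. A minimal sketch of the guard (helper name invented, not the actual SearchFiles code):

```java
import java.io.BufferedReader;
import java.io.IOException;

// Sketch: treat EOF on the interactive prompt as a quit signal
// instead of dereferencing a null line.
class PagingPrompt {
  // Returns the trimmed command, or null to signal "quit" on end of input.
  static String readCommand(BufferedReader in) throws IOException {
    String line = in.readLine();
    if (line == null) {  // EOF (^D): no more input, behave like (q)uit
      return null;
    }
    return line.trim();
  }
}
```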
Co-authored-by: Piotrek Żygieło <pzygielo@users.noreply.github.com>