* tweak comments; change if to switch
* remove old SOPs, minor comment styling; fix silly performance bug on rehash that used the wrong bitsRequired (count vs node)
* first raw cut; some nocommits added; some tests fail
* tests pass!
* fix silly fallback hash bug
* remove SOPs; add some temporary debugging metrics
* add temporary tool to test FST performance across differing NodeHash sizes
* remove (now deleted) shouldShareNonSingletonNodes call from Lucene90BlockTreeTermsWriter
* add simple tool to render results table to GitHub MD
* add simple temporary tool to iterate all terms from a provided luceneutil wikipedia index and build an FST from them
* first cut at using packed ints for hash table again
* add some nocommits; tweak test_all_sizes.py to the new RAM usage approach; when half of the double barrel is full, allocate the new primary hash at full size to save the cost of continuously rehashing for a large FST (see the double-barrel sketch after this list)
* switch to limit suffix hash by RAM usage not count (more intuitive for users); clean up some stale nocommits
* switch to a more intuitive approximate RAM (MB) limit for the allowed size of NodeHash
* nuke a few nocommits; a few more remain
* remove DO_PRINT_HASH_RAM
* no more FST pruning
* remove final nocommit: randomly change allowed NodeHash suffix RAM size in TestFSTs.testRealTerms
* remove SOP
* tidy
* delete temp utility tools
* remove dead (FST pruning) code
* add CHANGES entry; fix one missed fst.addNode -> fstCompiler.addNode during merge conflict resolution
* remove a malformed nocommit
* fold PR feedback
* fold feedback
* add gradle help test details on how to specify heap size for the test JVM; fix bogus assert (uncovered by Test2BFST); add TODO to Test2BFST anticipating building massive FSTs in small bounded RAM
* suppress sysout checks for Test2BFST; add helpful comment showing how to run it directly
* tidy
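The double-barrel NodeHash mentioned above can be illustrated with a minimal sketch. Everything here is hypothetical and simplified: the real NodeHash stores FST node addresses in packed, paged structures and budgets approximate RAM rather than entry count, but the two-barrel swap is the same idea. Lookups check the primary then the fallback, fallback hits are promoted, and once the primary fills its budget the old fallback is discarded wholesale, evicting the least-recently-used suffixes.

```java
import java.util.HashSet;

// Simplified double-barrel LRU cache over long keys (illustrative only).
final class DoubleBarrelCache {
  private final int maxPrimarySize; // stand-in for the approximate RAM (MB) limit
  private HashSet<Long> primary = new HashSet<>();
  private HashSet<Long> fallback = new HashSet<>();

  DoubleBarrelCache(int maxPrimarySize) {
    this.maxPrimarySize = maxPrimarySize;
  }

  /** Returns true if the key was already cached (in either barrel). */
  boolean getOrAdd(long key) {
    if (primary.contains(key)) {
      return true;
    }
    if (fallback.contains(key)) {
      promote(key); // keep recently used entries alive across swaps
      return true;
    }
    promote(key);
    return false;
  }

  private void promote(long key) {
    primary.add(key);
    if (primary.size() >= maxPrimarySize) {
      // Swap barrels: entries not touched since the last swap are dropped.
      fallback = primary;
      primary = new HashSet<>();
    }
  }
}
```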
Moved all the hairy allocSlice logic into a static method in TermsHashPerField and introduced a BytesRefBlockPool to encapsulate the BytesRefHash write/read logic.
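A hedged sketch of the encapsulation idea (not the actual Lucene class, which works over ByteBlockPool blocks rather than one flat array): a single object owns both writing a length-prefixed byte sequence and reading it back by offset, so BytesRefHash-style callers never touch the raw storage layout.

```java
import java.util.Arrays;

final class LengthPrefixedPool {
  private byte[] bytes = new byte[256];
  private int upto; // next free position

  /** Writes a one-byte length prefix (assumes len < 128) plus the payload; returns the start offset. */
  int add(byte[] src, int off, int len) {
    if (upto + 1 + len > bytes.length) {
      bytes = Arrays.copyOf(bytes, Math.max(bytes.length * 2, upto + 1 + len));
    }
    int start = upto;
    bytes[upto++] = (byte) len;
    System.arraycopy(src, off, bytes, upto, len);
    upto += len;
    return start;
  }

  /** Reads back the sequence written at {@code start}. */
  byte[] get(int start) {
    int len = bytes[start] & 0x7F;
    return Arrays.copyOfRange(bytes, start + 1, start + 1 + len);
  }
}
```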
While working on the quantization codec & thinking about how merging will evolve, it became clearer that having merging attached directly to the vector writer is weird.
I extracted it out to its own class and removed the "initializedNodes" logic from the base class builder.
Also, there was one other refactoring around grabbing sorted nodes from the neighbor iterator: I just moved that static method so it's not attached to the writer (as all bwc writers need it and all future HNSW writers will as well).
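A hypothetical sketch of the shape of this extraction (class and method names invented for illustration, not the actual Lucene classes): merging becomes a collaborator the writer delegates to, so bwc writers and future HNSW writers can share it by composition instead of inheriting it from a base builder.

```java
// Illustrative names only.
interface GraphMerger {
  /** Builds the merged graph; this responsibility used to live on the writer. */
  void merge(int[] oldToNewDocIdMap);
}

final class VectorWriter {
  private final GraphMerger merger;

  VectorWriter(GraphMerger merger) {
    this.merger = merger; // composition instead of a base-class hook
  }

  void mergeSegments(int[] oldToNewDocIdMap) {
    merger.merge(oldToNewDocIdMap);
  }
}
```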
When we initially introduced support for dynamic pruning, we had an
implementation of WAND that would almost exclusively use `advance()`. Now that
we have switched to MAXSCORE and rely much more on `nextDoc()`, it makes sense to
specialize `nextDoc()` as well.
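A self-contained illustration of the trade-off, using a toy iterator over a sorted array rather than Lucene's actual scorers: when `nextDoc()` merely delegates to `advance(doc + 1)`, every sequential step pays the generic seeking path; a dedicated `nextDoc()` just steps to the next stored doc.

```java
// Toy doc-ID iterator; a stand-in for the real postings/scorer machinery.
final class IntArrayIterator {
  static final int NO_MORE_DOCS = Integer.MAX_VALUE;

  private final int[] docs; // sorted doc IDs
  private int idx = -1;

  IntArrayIterator(int[] docs) {
    this.docs = docs;
  }

  /** Generic path: seek to the first doc >= target (stands in for skip-based advancing). */
  int advance(int target) {
    while (++idx < docs.length) {
      if (docs[idx] >= target) {
        return docs[idx];
      }
    }
    return NO_MORE_DOCS;
  }

  /** Specialized path: step to the next stored doc with no target comparison. */
  int nextDoc() {
    return ++idx < docs.length ? docs[idx] : NO_MORE_DOCS;
  }
}
```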
This changes the following:
- fewer docs indexed in non-nightly runs,
- `QueryUtils#checkFirstSkipTo` uses the `ScorerSupplier` API to convey it
will only check one doc,
- `QueryUtils#checkFirstSkipTo` no longer relies on timing to run in a
reasonable amount of time.
This test sometimes fails because `SimpleText` has a non-deterministic size for
its segment info file, due to escape characters. The test now enforces the
default codec, and checks that segments have the expected size before moving
forward with forceMerge().
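A hedged sketch of the codec-pinning part of the fix (standalone and simplified; the real test also verifies per-segment sizes before merging):

```java
import org.apache.lucene.codecs.Codec;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class PinCodecExample {
  public static void main(String[] args) throws Exception {
    try (Directory dir = new ByteBuffersDirectory()) {
      IndexWriterConfig iwc = new IndexWriterConfig();
      // Pin the default codec so segment files have deterministic sizes;
      // a randomized codec like SimpleText escapes characters and makes
      // the segment info file size vary from run to run.
      iwc.setCodec(Codec.getDefault());
      try (IndexWriter w = new IndexWriter(dir, iwc)) {
        w.addDocument(new Document());
        w.forceMerge(1);
      }
    }
  }
}
```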
Closes #12648
The code was written as if frequencies should be lazily decoded, except that
when refilling buffers, freqs were eagerly decoded instead of lazily.
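A minimal sketch of the lazy-decoding pattern the fix restores (names invented; not the actual postings reader): refilling only marks the freq block as undecoded, and decoding runs on the first `freq()` call, so consumers that never ask for frequencies skip the work entirely.

```java
final class LazyFreqBlock {
  private final long[] encodedFreqs = new long[128]; // packed freq block
  private final int[] freqs = new int[128];          // decoded values
  private boolean freqsDecoded;

  void refill() {
    // The bug: decoding here (eagerly) defeats the laziness.
    // decodeFreqs();
    freqsDecoded = false; // the fix: defer until someone actually asks
  }

  int freq(int index) {
    if (!freqsDecoded) {
      decodeFreqs();
      freqsDecoded = true;
    }
    return freqs[index];
  }

  private void decodeFreqs() {
    for (int i = 0; i < freqs.length; i++) {
      freqs[i] = (int) encodedFreqs[i]; // stand-in for real block decoding
    }
  }
}
```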
### Description
This PR addresses issue #12394. It adds an API **`similarityToQueryVector`** to `DoubleValuesSource` to compute vector similarity scores between the query vector and the `KnnByteVectorField`/`KnnFloatVectorField` of each document, using the two new DVS implementations (`ByteVectorSimilarityValuesSource` for byte vectors and `FloatVectorSimilarityValuesSource` for float vectors). Below are the method signatures added to DVS in this PR (a usage sketch follows the list):
- `DoubleValues similarityToQueryVector(LeafReaderContext ctx, float[] queryVector, String vectorField)` *(uses FloatVectorSimilarityValuesSource)*
- `DoubleValues similarityToQueryVector(LeafReaderContext ctx, byte[] queryVector, String vectorField)` *(uses ByteVectorSimilarityValuesSource)*
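A usage sketch based on the signatures above (the field name `"vector"`, the query vector values, and the doc ID are illustrative):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.DoubleValues;
import org.apache.lucene.search.DoubleValuesSource;

public class SimilarityToQueryVectorExample {
  static void printSimilarity(IndexReader reader, int globalDocId) throws IOException {
    float[] queryVector = {0.12f, 0.34f, 0.56f};
    for (LeafReaderContext ctx : reader.leaves()) {
      int segmentDocId = globalDocId - ctx.docBase;
      if (segmentDocId < 0 || segmentDocId >= ctx.reader().maxDoc()) {
        continue; // doc lives in a different segment
      }
      DoubleValues similarity =
          DoubleValuesSource.similarityToQueryVector(ctx, queryVector, "vector");
      if (similarity.advanceExact(segmentDocId)) {
        System.out.println("similarity=" + similarity.doubleValue());
      }
    }
  }
}
```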
Closes #12394
DocumentsWriter had some duplicate logic for iterating over
segments to be flushed. This change simplifies some of the loops
and moves common code into one place. It also adds tests to ensure
we actually freeze and apply deletes on segment flush.
Relates to #12572
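A hypothetical sketch of the consolidation pattern (names invented; not the actual DocumentsWriter code): the iteration-and-drain logic that several call sites duplicated moves into one helper that takes the per-segment action.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

final class PendingFlushes {
  private final Deque<String> pendingSegments = new ArrayDeque<>();

  void enqueue(String segmentName) {
    pendingSegments.add(segmentName);
  }

  /** Single place that owns iterating and draining the pending segments. */
  void forEachPendingFlush(Consumer<String> action) {
    String segment;
    while ((segment = pendingSegments.poll()) != null) {
      action.accept(segment);
    }
  }
}
```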