* Clean up ByteBlockPool
1. Move slice functionality to a separate class, ByteSlicePool.
2. Add exception case if the requested slice size is larger than the
block size.
3. Make pool buffers private instead of relying on a comment asking callers not to modify them.
4. Consolidate setBytesRef methods with int offsets.
5. Simplify ramBytesUsed.
6. Update and expand comments and javadoc.
* Revert to long offsets in ByteBlockPool; Clean-up
* Remove slice functionality from TermsHashPerField
* First working test
* [Broken] Test interleaving slices
* Fixed randomized test
* Remove redundant tests
* Tidy
* Add CHANGES
* Move ByteSlicePool to oal.index
* Use value-based LRU cache in NodeHash (#12714)
* tidy code
* Add a nocommit about OffsetAndLength
* Fix the readBytes method
* Use List<byte[]> instead of ByteBlockPool
* Move nodesEqual to PagedGrowableHash
* Add generic type
* Fix the count variable
* Fix the RAM usage measurement
* Use PagedGrowableWriter instead of HashMap
* Remove unused generic type
* Update the ramBytesUsed formula
* Retain the FSTCompiler.addNode signature
* Switch back to ByteBlockPool
* Remove the unnecessary assertion
* Remove fstHashAddress
* Add some javadoc
* Fix the address offset when reading from fallback table
* tidy code
* Address comments
* Add assertions
This commit replaces the usage of the deprecated java.net.URL constructor with URI, later converting toURL where necessary to interoperate with the URLConnection API.
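The migration described above can be sketched as follows; the URL string is illustrative, but `URI.create(...).toURL()` is the standard replacement for the deprecated `URL(String)` constructor:

```java
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URI;

public class UriMigration {
  // Instead of the deprecated: new URL("https://example.com/path")
  static URL toUrl(String spec) throws MalformedURLException {
    // Build a URI first (which validates the syntax), then convert to a URL
    // only where URLConnection still requires one.
    return URI.create(spec).toURL();
  }

  public static void main(String[] args) throws MalformedURLException {
    URL url = toUrl("https://example.com/path");
    System.out.println(url.getHost()); // prints "example.com"
  }
}
```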
The default tests.multiplier passed from gradle was 1, but
LuceneTestCase tried to compute its default value from TESTS_NIGHTLY.
This could lead to subtle errors: nightly-mode failures would not report
tests.multiplier=1, and when started from the IDE, tests.multiplier
would be set to 2 (leading to different randomness).
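The fix can be sketched as resolving the default in a single place, so the reported value always matches the effective one. This is a hypothetical sketch; the method and parameter names are illustrative, not LuceneTestCase's actual fields:

```java
public class MultiplierDefault {
  // Resolve tests.multiplier consistently: an explicit value from gradle
  // always wins, and TESTS_NIGHTLY is only consulted when it is unset.
  static int effectiveMultiplier(String sysProp, boolean nightly) {
    if (sysProp != null) {
      return Integer.parseInt(sysProp); // gradle passed an explicit value
    }
    return nightly ? 2 : 1; // fall back to a nightly-derived default
  }

  public static void main(String[] args) {
    System.out.println(effectiveMultiplier(null, true)); // prints "2"
    System.out.println(effectiveMultiplier("1", true));  // prints "1": explicit wins
  }
}
```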
Since increasing the number of hits retrieved in nightly benchmarks from 10 to
100, the performance of sorting documents by title dropped back to the level it
had before introducing dynamic pruning. This is not too surprising given that
the `title` field is a unique field, so the optimization would only kick in
when the current 100th hit would have an ordinal that is less than 128 -
something that would only happen after collecting most hits.
This change increases the threshold to 1024, so that the optimization would
kick in when the current 100th hit has an ordinal that is less than 1024,
something that happens a bit sooner.
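The threshold change above amounts to widening a simple ordinal gate; a minimal sketch, with illustrative names rather than Lucene's actual API:

```java
public class OrdinalPruningSketch {
  // Was 128 before this change; raised to 1024 so the optimization can
  // kick in earlier on high-cardinality fields like `title`.
  static final long ORD_THRESHOLD = 1024;

  // Dynamic pruning only helps once the current bottom (100th) hit's
  // ordinal falls under the threshold.
  static boolean pruningKicksIn(long bottomHitOrd) {
    return bottomHitOrd < ORD_THRESHOLD;
  }

  public static void main(String[] args) {
    System.out.println(pruningKicksIn(500));  // prints "true" (was false at 128)
    System.out.println(pruningKicksIn(5000)); // prints "false"
  }
}
```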
Ported from https://github.com/apache/solr/pull/1020
Also pin Python versions in requirements.txt to avoid unexpected incompatibilities in the future
Co-authored-by: Jan Høydahl <janhoy@users.noreply.github.com>
The test expects that opening a writer on 5 segments doesn't cause merging, but
actually it does, since randomization created a merge policy with a merge factor of 5.
If `mergeQuantizedByteVectorValues` fails with an exception, the temp output
never gets closed. This was found by the test that throws random exceptions.
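The shape of the fix can be sketched with a close-on-failure guard: if the merge step throws, the temp output is still closed instead of leaking. The names here (`tempOutput`, `mergeStep`) are illustrative, not the actual Lucene signatures:

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseOnFailure {
  // Run the merge step; on failure, close the temp output that would
  // otherwise leak (previously skipped when an exception was thrown).
  static void mergeWithCleanup(Closeable tempOutput, Runnable mergeStep)
      throws IOException {
    boolean success = false;
    try {
      mergeStep.run();
      success = true;
    } finally {
      if (!success) {
        tempOutput.close();
      }
    }
  }

  public static void main(String[] args) throws IOException {
    boolean[] closed = {false};
    try {
      mergeWithCleanup(
          () -> closed[0] = true,
          () -> { throw new RuntimeException("random test exception"); });
    } catch (RuntimeException expected) {
      // the simulated merge failure
    }
    System.out.println(closed[0]); // prints "true": the output was closed
  }
}
```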
PR #12382 added a bulk scorer for top-k hits on conjunctions that yielded a
significant speedup (annotation
[FP](http://people.apache.org/~mikemccand/lucenebench/AndHighHigh.html)). This
change proposes a similar change for exhaustive collection of conjunctive
queries, e.g. for counting, faceting, etc.
We shouldn't ever return negative scores from vector similarity functions. With the Panama vector implementation and nearly antipodal float[] vectors, it is possible for cosine and (normalized) dot-product scores to become slightly negative due to compounding floating-point errors.
Since we don't want to make the Panama vector implementation incredibly slow, we stick to float32 operations for now and just snap the score to `0` if it is negative after our correction.
closes: https://github.com/apache/lucene/issues/12700
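The behavior described above can be sketched as a clamp on the final score. This is a hedged illustration, not Lucene's actual VectorSimilarityFunction code; the `(1 + cosine) / 2` scaling is one common way of mapping cosine into a `[0, 1]` score:

```java
public class NonNegativeScore {
  // Cosine computed in float32 can drift slightly negative for nearly
  // antipodal vectors, so the final score is snapped to 0.
  static float cosineScore(float[] a, float[] b) {
    float dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      normA += a[i] * a[i];
      normB += b[i] * b[i];
    }
    float cosine = (float) (dot / Math.sqrt((double) normA * (double) normB));
    // snap to 0 if floating-point error pushed the scaled score negative
    return Math.max(0f, (1f + cosine) / 2f);
  }

  public static void main(String[] args) {
    float[] v = {1f, 0f};
    float[] w = {-1f, 1e-7f}; // nearly antipodal
    System.out.println(cosineScore(v, w) >= 0f); // prints "true"
  }
}
```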
### Description
While going through the [VectorUtil](https://github.com/apache/lucene/blob/main/lucene/core/src/java/org/apache/lucene/util/VectorUtil.java) class, I observed that `VectorUtil#l2normalize` has no check for an already-unit vector, so passing one goes through the whole L2 normalization (which is totally not required; it should exit early). I confirmed this by trying a silly example, `VectorUtil.l2normalize(VectorUtil.l2normalize(nonUnitVector))`, which performed every calculation twice. One could argue that the user should not call this for a unit vector, but there are cases where a user simply wants to perform L2 normalization without checking the vector first, or where some values overflow.
TL;DR: We should exit early in `VectorUtil#l2normalize`, returning the input vector unchanged if it is already a unit vector.
This is easily avoidable by introducing a light check on whether the squared sum of the input vector equals 1.0, or rather `Math.abs(l1norm - 1.0d) <= 1e-5` (as in this PR), because the dot product of a unit vector with itself (`v x v`) is not exactly 1.0 but, for example, `0.9999999403953552`. With a `1e-5` delta we treat any vector `v` with `v x v >= 0.99999` as already L2-normalized, which seems fine: the delta is really small and the check is cheap.
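The proposed early exit can be sketched as below, assuming the same `1e-5` tolerance as the PR; this is an illustration of the idea, not the actual VectorUtil code:

```java
public class L2NormalizeSketch {
  // Normalize v in place to unit length, but exit early when the squared
  // sum is already ~1 within a small tolerance.
  static float[] l2normalize(float[] v) {
    double squareSum = 0;
    for (float x : v) {
      squareSum += (double) x * x;
    }
    if (Math.abs(squareSum - 1.0d) <= 1e-5) {
      return v; // already (approximately) a unit vector: skip the division pass
    }
    double length = Math.sqrt(squareSum);
    for (int i = 0; i < v.length; i++) {
      v[i] /= (float) length;
    }
    return v;
  }

  public static void main(String[] args) {
    float[] v = {3f, 4f};
    l2normalize(v);                      // first call does the real work
    float first = v[0];
    l2normalize(v);                      // second call early-exits
    System.out.println(v[0] == first);   // prints "true": values untouched
  }
}
```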
IndexSearcher exposes a public getSlices method, used to retrieve the
slices that the searcher executes queries against, as well as a slices
method, which is meant to be overridden to customize how slices are
created.
I believe that getSlices should be final: there is no reason to override
it. Also, it is too easy to confuse the two methods and end up
overriding the wrong one by mistake.
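The accessor/extension-point split can be sketched as follows. This is an illustrative model, not IndexSearcher's actual API: `LeafSlice` stands in for Lucene's type, and `slicesFor` stands in for the overridable slices method:

```java
public class SlicesSketch {
  static class LeafSlice {}

  static class Searcher {
    // Accessor: final, so subclasses can no longer override it by mistake.
    public final LeafSlice[] getSlices() {
      return slicesFor();
    }

    // The intended extension point for customizing slice creation.
    protected LeafSlice[] slicesFor() {
      return new LeafSlice[1];
    }
  }

  static class CustomSearcher extends Searcher {
    @Override
    protected LeafSlice[] slicesFor() {
      return new LeafSlice[3]; // customization still works through the hook
    }
  }

  public static void main(String[] args) {
    System.out.println(new CustomSearcher().getSlices().length); // prints "3"
  }
}
```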
* Remove direct dependency of NodeHash to FST
* Fix index out of bounds when writing FST to different metaOut (#12697)
* Tidy code
* Update CHANGES.txt
* Re-add assertion
* Remove direct dependency of NodeHash to FST
* Hold off the FSTTraversal changes
* Rename variable
* Add Javadoc
* Add @Override
* tidy
* tidy
* Change to FSTReader
* Update CHANGES.txt