If `mergeQuantizedByteVectorValues` fails with an exception, the temp output
never gets closed. This was found by the test that throws random exceptions.
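For illustration, a minimal sketch of the general fix pattern (the variable names and the write step are placeholders, not the actual merge code):

```java
// Ensure the temporary output gets closed (and its file deleted) even if
// writing the merged quantized vectors throws.
IndexOutput tempOut = dir.createTempOutput("quantized", "temp", IOContext.DEFAULT);
boolean success = false;
try {
  // ... write merged quantized byte vectors to tempOut ...
  success = true;
} finally {
  if (success) {
    IOUtils.close(tempOut);
  } else {
    IOUtils.closeWhileHandlingException(tempOut);
    IOUtils.deleteFilesIgnoringExceptions(dir, tempOut.getName());
  }
}
```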
PR #12382 added a bulk scorer for top-k hits on conjunctions that yielded a
significant speedup (annotation
[FP](http://people.apache.org/~mikemccand/lucenebench/AndHighHigh.html)). This
change proposes a similar optimization for exhaustive collection of conjunctive
queries, e.g. for counting, faceting, etc.
We shouldn't ever return negative scores from vector similarity functions. With the Panama vector API and nearly antipodal float[] vectors, it is possible for cosine and (normalized) dot-product to become slightly negative due to compounding floating-point errors.
Since we don't want to make the Panama vector path incredibly slow, we stick to float32 operations for now and simply snap the score to `0` if it is negative after our correction.
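As a minimal sketch of the snapping step, assuming the usual `(1 + cosine) / 2` scaling into `[0, 1]` (the method name is illustrative, not the actual API):

```java
// With nearly antipodal vectors, compounding float32 error can push the raw
// cosine slightly below -1, which would make the scaled score negative
// without the max(..., 0) snap.
static float scaledCosineScore(float cosine) {
  return Math.max((1f + cosine) / 2f, 0f);
}
```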
closes: https://github.com/apache/lucene/issues/12700
### Description
While going through the [VectorUtil](https://github.com/apache/lucene/blob/main/lucene/core/src/java/org/apache/lucene/util/VectorUtil.java) class, I noticed that we don't check for a unit vector in `VectorUtil#l2normalize`, so passing a unit vector goes through the whole L2 normalization even though it is not required and could early exit. I confirmed this by trying a silly example, `VectorUtil.l2normalize(VectorUtil.l2normalize(nonUnitVector))`, which performed every calculation twice. One could argue that users should not call this for a unit vector, but I believe there are cases where a user simply wants to perform L2 normalization without checking the vector first, or where some values might overflow.
TL;DR: We should early exit in `VectorUtil#l2normalize`, returning the input vector unchanged if it is already a unit vector.
This is easily avoidable if we introduce a light check to see whether the L1 norm or squared sum of the input vector is equal to 1.0, or maybe just check `Math.abs(l1norm - 1.0d) <= 1e-5` (as in this PR), because the self dot product of a unit vector (`v x v`) is not exactly 1.0 but something like `0.9999999403953552`. With the `1e-5` delta we would treat any vector `v` with `v x v >= 0.99999` as a unit vector, i.e. already L2 normalized, which seems fine: the delta is really small and the check is not a heavy one.
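A rough sketch of the proposed early exit, using the `1e-5` delta from this PR (not necessarily the final implementation):

```java
public static float[] l2normalize(float[] v) {
  double squareSum = 0.0d;
  for (float x : v) {
    squareSum += x * x;
  }
  // Early exit: the input is already (effectively) a unit vector, so skip the
  // division pass and return it unchanged.
  if (Math.abs(squareSum - 1.0d) <= 1e-5) {
    return v;
  }
  double length = Math.sqrt(squareSum);
  for (int i = 0; i < v.length; i++) {
    v[i] /= (float) length;
  }
  return v;
}
```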
IndexSearcher exposes a public getSlices method, which is used to
retrieve the slices that the searcher executes queries against, as well
as a protected slices method, which is meant to be overridden to
customize the creation of slices.
I believe that getSlices should be final: there is no reason to override
the method. Also, it is too easy to confuse the two and end up
overriding the wrong one by mistake.
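For illustration, a hedged sketch of how slice creation is meant to be customized, by overriding the protected slices method rather than getSlices (the thresholds and the static slices helper used below are example choices, not part of this change):

```java
IndexSearcher searcher = new IndexSearcher(reader, executor) {
  @Override
  protected LeafSlice[] slices(List<LeafReaderContext> leaves) {
    // example policy: delegate to the static helper with custom thresholds
    return IndexSearcher.slices(leaves, 250_000, 5);
  }
};
```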
* Remove direct dependency of NodeHash to FST
* Fix index out of bounds when writing FST to different metaOut (#12697)
* Tidy code
* Update CHANGES.txt
* Re-add assertion
* Remove direct dependency of NodeHash to FST
* Hold off the FSTTraversal changes
* Rename variable
* Add Javadoc
* Add @Override
* tidy
* tidy
* Change to FSTReader
* Update CHANGES.txt
Currently, merge-on-full-flush only checks whether merges need to run if changes
have been flushed to disk. This prevents having different merging logic for
refreshes and commits, since the merge policy would not be checked upon commit
if no new documents got indexed since the previous refresh.
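A hedged sketch of the kind of merge policy this enables, where commit and refresh take different paths (the small-segment helper below is hypothetical):

```java
@Override
public MergeSpecification findFullFlushMerges(MergeTrigger trigger,
    SegmentInfos segmentInfos, MergeContext mergeContext) throws IOException {
  if (trigger == MergeTrigger.COMMIT) {
    // e.g. merge tiny segments before a commit, even if no new documents were
    // indexed since the previous refresh
    return findSmallSegmentMerges(segmentInfos, mergeContext); // hypothetical helper
  }
  // refresh: keep the default behavior
  return null;
}
```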
Turns out that testCancelTasksOnException requires a single-threaded
executor. Given that most tests in the class make more sense with a
single thread, I went back to 1 thread for the shared executor and used
a multi-threaded executor in the only test that relies on multiple
threads.
Adds new int8 scalar quantization for the HNSW codec. This uses a new lucene9.9 format and automatically quantizes floating-point vectors into bytes on flush and merge.
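A hedged sketch of opting a field into the new format, assuming the Lucene99 codec class names that the 9.9 format implies (they are not spelled out in this description):

```java
IndexWriterConfig config = new IndexWriterConfig();
config.setCodec(new Lucene99Codec() {
  @Override
  public KnnVectorsFormat getKnnVectorsFormatForField(String field) {
    // this field's float vectors get quantized to int8 on flush and merge
    return new Lucene99HnswScalarQuantizedVectorsFormat();
  }
});
```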
Apache committers who opt in (via authentication) can have their local build scans submitted to ge.apache.org.
Co-authored-by: Clay Johnson <cjohnson@gradle.com>
The idea behind MAXSCORE is to run disjunctions as `+(essentialClause1 ...
essentialClauseM) nonEssentialClause1 ... nonEssentialClauseN`, moving more and
more clauses from the essential list to the non-essential list as the minimum
competitive score increases. For instance, a query such as `the book of life`,
which I found in the Tantivy benchmark, ends up running as `+book the of life`
after some time, i.e. with one required clause and the other clauses optional. This
is because matching `the`, `of` and `life` alone is not good enough for
yielding a match.
Here are some statistics in that case:
- min competitive score: 3.4781857
- max_window_score(book): 2.8796153
- max_window_score(life): 2.037863
- max_window_score(the): 0.103848875
- max_window_score(of): 0.19427927
Looking at these statistics, we could do better: a match may only be
competitive if it matches both `book` and `life`, so this query could actually
execute as `+book +life the of`, which may help evaluate fewer documents
compared to `+book the of life`, especially if you enable recursive graph
bisection.
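To make the arithmetic explicit with the numbers above: 2.8796153 (book) + 0.103848875 (the) + 0.19427927 (of) ≈ 3.178, which is below the minimum competitive score of 3.478, so a competitive hit must also match `life`.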
This is what this PR tries to achieve: when there is a single essential clause
and matching all clauses except the best non-essential clause cannot produce a
competitive match, the scorer will only evaluate documents that match the
intersection of the essential clause and the best non-essential clause.
It's worth noting that this optimization would kick in very frequently on
2-clauses disjunctions.
When operations are parallelized, like query rewrite, or search, or
createWeight, one of the tasks may throw an exception. In that case we
wait for all tasks to complete before re-throwing the exception that was
caught. Tasks that have not started when the exception is caught, though,
can safely be skipped. Ideally we would also cancel ongoing tasks, but I
left that for another time.
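As a rough illustration of the skipping idea (a simplified sketch, not the actual TaskExecutor code):

```java
static <T> List<T> invokeAll(ExecutorService executor, List<Callable<T>> tasks) throws Exception {
  AtomicBoolean failed = new AtomicBoolean();
  List<Future<T>> futures = new ArrayList<>();
  for (Callable<T> task : tasks) {
    futures.add(executor.submit(() -> {
      if (failed.get()) {
        return null; // this task had not started when another one failed: skip it
      }
      try {
        return task.call();
      } catch (Exception e) {
        failed.set(true);
        throw e;
      }
    }));
  }
  // wait for every task to complete before re-throwing the first caught exception
  Exception firstException = null;
  List<T> results = new ArrayList<>();
  for (Future<T> future : futures) {
    try {
      results.add(future.get());
    } catch (Exception e) {
      if (firstException == null) {
        firstException = e;
      }
    }
  }
  if (firstException != null) {
    throw firstException;
  }
  return results;
}
```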
If the add/updateDocuments(List<>) API is used, Lucene guarantees that
all documents are indexed in the same segment with consecutive document IDs.
This enables features like nested documents, etc. This change records the usage
of this API in SegmentsInfo and preserves this property across merges.
Relates to #12665
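For reference, a small example of the block API in question (field setup omitted; this is a sketch, not code from this change):

```java
// the child documents and the parent are indexed as one atomic block: they are
// guaranteed to end up in the same segment with consecutive doc IDs
List<Document> block = List.of(childDoc1, childDoc2, parentDoc);
indexWriter.addDocuments(block);
```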
* tweak comments; change if to switch
* remove old SOPs, minor comment styling, fixed silly performance bug on rehash using the wrong bitsRequired (count vs node)
* first raw cut; some nocommits added; some tests fail
* tests pass!
* fix silly fallback hash bug
* remove SOPs; add some temporary debugging metrics
* add temporary tool to test FST performance across differing NodeHash sizes
* remove (now deleted) shouldShareNonSingletonNodes call from Lucene90BlockTreeTermsWriter
* add simple tool to render results table to GitHub MD
* add simple temporary tool to iterate all terms from a provided luceneutil wikipedia index and build an FST from them
* first cut at using packed ints for hash table again
* add some nocommits; tweak test_all_sizes.py to new RAM usage approach; when half of the double barrel is full, allocate new primary hash at full size to save cost of continuously rehashing for a large FST
* switch to limit suffix hash by RAM usage not count (more intuitive for users); clean up some stale nocommits
* switch to more intuitive approximate RAM (mb) limit for allowed size of NodeHash
* nuke a few nocommits; a few more remain
* remove DO_PRINT_HASH_RAM
* no more FST pruning
* remove final nocommit: randomly change allowed NodeHash suffix RAM size in TestFSTs.testRealTerms
* remove SOP
* tidy
* delete temp utility tools
* remove dead (FST pruning) code
* add CHANGES entry; fix one missed fst.addNode -> fstCompiler.addNode during merge conflict resolution
* remove a mal-formed nocommit
* fold PR feedback
* fold feedback
* add gradle help test details on how to specify heap size for the test JVM; fix bogus assert (uncovered by Test2BFST); add TODO to Test2BFST anticipating building massive FSTs in small bounded RAM
* suppress sysout checks for Test2BFSTs; add helpful comment showing how to run it directly
* tidy