When re-using the HNSW graph during segment merges, it is possible that more than the configured M*2 connections could be made per vector.
In those instances, we should still allow the graph to be read from the codec and searched.
The default ForkJoinPool implementation uses a thread factory that removes all
permissions on threads, so we need to create our own to avoid tests failing
with FS-based directories.
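A minimal sketch of the kind of factory this calls for (class and method names are made up; the real test infrastructure differs): a plain `ForkJoinWorkerThread` subclass keeps the default permissions, unlike the permission-less threads produced by the default factory.

```
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinWorkerThread;

final class TestForkJoinPools {
  // Sketch: worker threads keep default permissions, so FS access in tests works.
  static ForkJoinPool newPool(int parallelism) {
    return new ForkJoinPool(
        parallelism,
        pool -> new ForkJoinWorkerThread(pool) {}, // plain subclass, no permission stripping
        null, // default uncaught-exception handling
        false); // asyncMode not needed here
  }
}
```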
Recursive graph bisection is an extremely effective algorithm to reorder doc
IDs in a way that improves both storage and query efficiency by clustering
similar documents together. It usually performs better than other techniques
that try to achieve a similar goal such as sorting the index in natural order
(e.g. by URL) or by a min-hash, though it comes at a higher index-time cost.
The [original paper](https://arxiv.org/pdf/1602.08820.pdf) is good, but I found that
this [reproducibility study](http://engineering.nyu.edu/~suel/papers/bp-ecir19.pdf)
describes the algorithm in a more practical way.
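For illustration only, here is a toy sketch of the core bisection loop under the log-gap proxy cost from those papers. The constants, the simplified pair-swap strategy, and all names are assumptions; this is not Lucene's implementation.

```
import java.util.Arrays;

/**
 * Toy sketch of recursive graph bisection (BP). docTerms[d] holds the distinct
 * term IDs of document d; reorder() returns a permutation of doc IDs that
 * clusters similar documents together.
 */
final class BpSketch {
  private static final int MAX_ITERS = 20; // arbitrary, for illustration
  private static final int LEAF_SIZE = 32; // stop splitting below this size
  private final int[][] docTerms;
  private final int numTerms;

  BpSketch(int[][] docTerms, int numTerms) {
    this.docTerms = docTerms;
    this.numTerms = numTerms;
  }

  int[] reorder() {
    int[] ids = new int[docTerms.length];
    for (int i = 0; i < ids.length; i++) ids[i] = i;
    bisect(ids, 0, ids.length);
    return ids;
  }

  private void bisect(int[] ids, int from, int to) {
    if (to - from <= LEAF_SIZE) return;
    final int mid = (from + to) >>> 1, nL = mid - from, nR = to - mid;
    int[] left = new int[numTerms], right = new int[numTerms];
    for (int i = from; i < mid; i++) for (int t : docTerms[ids[i]]) left[t]++;
    for (int i = mid; i < to; i++) for (int t : docTerms[ids[i]]) right[t]++;
    for (int iter = 0; iter < MAX_ITERS; iter++) {
      // Gain of moving each doc to the other half under the proxy cost.
      double[] gains = new double[to - from];
      for (int i = from; i < to; i++) {
        int dl = i < mid ? -1 : 1; // moving the doc shifts the term counts
        double g = 0;
        for (int t : docTerms[ids[i]]) {
          g += cost(left[t], nL, right[t], nR) - cost(left[t] + dl, nL, right[t] - dl, nR);
        }
        gains[i - from] = g;
      }
      // Pair up the highest-gain docs of each half and swap while profitable.
      Integer[] l = order(gains, from, mid, from), r = order(gains, mid, to, from);
      int swapped = 0;
      for (int j = 0; j < Math.min(l.length, r.length); j++) {
        if (gains[l[j] - from] + gains[r[j] - from] <= 0) break;
        move(ids[l[j]], left, right);
        move(ids[r[j]], right, left);
        int tmp = ids[l[j]]; ids[l[j]] = ids[r[j]]; ids[r[j]] = tmp;
        swapped++;
      }
      if (swapped == 0) break;
    }
    bisect(ids, from, mid);
    bisect(ids, mid, to);
  }

  private void move(int doc, int[] src, int[] dst) {
    for (int t : docTerms[doc]) { src[t]--; dst[t]++; }
  }

  private static Integer[] order(double[] gains, int from, int to, int base) {
    Integer[] idx = new Integer[to - from];
    for (int j = 0; j < idx.length; j++) idx[j] = from + j;
    Arrays.sort(idx, (x, y) -> Double.compare(gains[y - base], gains[x - base]));
    return idx;
  }

  // Proxy for the compressed size of a posting list split across two halves.
  private static double cost(int a, int nA, int b, int nB) {
    return a * log2((nA + 1.0) / (a + 1)) + b * log2((nB + 1.0) / (b + 1));
  }

  private static double log2(double x) { return Math.log(x) / Math.log(2); }
}
```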
`ImpactsDISI` is nice: you give it an `ImpactsEnum`, typically coming from the
`PostingsFormat`, and it will automatically skip hits whose score cannot be
greater than the minimum competitive score. This is the class that yields 10x
or more speedups on top-level `TermQuery`s compared to exhaustive evaluation.
However, when nested under a disjunction or a conjunction, `ImpactsDISI`
typically adds more overhead than it enables skipping. The reason is that on a
disjunction `a OR b`, the minimum competitive score of `a` is the minimum score
for the disjunction minus the maximum score of `b`. While this sort of
propagation of minimum competitive scores down the query tree sometimes helps,
it hurts more than it helps on average, because `ImpactsDISI` adds quite
some overhead and the per-clause minimum scores are usually so low that they
don't actually enable skipping hits. I looked into reducing this overhead, but
a big part of it is the additional virtual call, so the only way to get rid of
this overhead is to not wrap with an `ImpactsDISI` at all.
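To make the propagation concrete, here is a minimal sketch of the bound computation described above, assuming two scorers `a` and `b` and a known minimum competitive score `minCompetitive` for the disjunction:

```
// A hit can only be competitive for (a OR b) if score(a) + score(b) is at
// least minCompetitive, so the tightest bound we can push down to `a` alone
// subtracts b's best possible contribution.
float maxB = b.getMaxScore(DocIdSetIterator.NO_MORE_DOCS);
a.setMinCompetitiveScore(Math.max(0f, minCompetitive - maxB));
```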
This means that scorers need a way to know whether they are producing the
top-level score, or whether they are producing a partial score that then gets
combined into the top-level score. Term queries would then only wrap with
`ImpactsDISI` when they produce the top-level score. Note that this does not
only include top-level term queries, but also conjunctions that have a single
scoring clause (`a #b`) or combinations of a term query and one or more
prohibited clauses (`a -b`).
This PR refactors the HNSW builder and searcher, creating an abstraction for the random access and vector comparisons performed during graph traversal.
The newly added RandomVectorScorer provides a means to directly compare ordinals, eliminating the need to expose the raw vector primitive type.
This scorer takes charge of vector retrieval and comparison during the graph's construction and search processes.
The primary purpose of this abstraction is to enable the implementation of various strategies.
For example, it opens the door to constructing the graph using the original float vectors while performing searches using their quantized int8 vector counterparts.
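A simplified sketch of the abstraction's shape (the interface name here is made up; see the actual `RandomVectorScorer` for the real API):

```
import java.io.IOException;

// The query side is bound once, after which the graph scores candidates by
// ordinal alone, never seeing the raw vector type. A float-vector or
// int8-quantized implementation can hide behind the same interface.
interface OrdinalScorer {
  /** Similarity between the bound query and the vector at the given ordinal. */
  float score(int ord) throws IOException;
}
```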
* Document why we need `lastPosBlockOffset`
* Let ./gradlew tidy fix the formatting
* Fix '<' with `&lt;`
---------
Co-authored-by: Tony Xu <tonyx@amazon.com>
There are a few places where tests don't close index readers. This has
not caused problems so far, but it becomes an issue when the reader gets
an executor, because its shutdown happens as a closing listener of the
reader. This has become more evident since we now offload sequential
execution to the executor. If there's an executor, but it's never used,
no threads are created, and no threads are leaked. If we do use the
executor, and the reader is not closed, the test leaks threads.
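A minimal sketch of the pattern involved (the setup names are hypothetical): shutdown only runs if the reader is closed, and threads only exist if the executor was actually used.

```
ExecutorService executor = Executors.newFixedThreadPool(2);
DirectoryReader reader = DirectoryReader.open(directory);
// executor shutdown is registered as a closing listener of the reader
reader.getReaderCacheHelper().addClosedListener(key -> executor.shutdown());
IndexSearcher searcher = new IndexSearcher(reader, executor);
// ... searching may lazily create threads in the executor ...
reader.close(); // without this, any created threads leak
```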
When an executor is set on the IndexSearcher, we should try to offload
most of the computation to that executor. Ideally, the caller thread
would only do light coordination work, while the executor is responsible
for the heavier workload. If we don't offload sequential execution to
the executor, it becomes very difficult to make any distinction between
the types of workload performed on the two sides.
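A minimal sketch of the intended division of labour, where `slices` and `searchSlice` are hypothetical stand-ins for the real slice execution:

```
static TopDocs[] searchAllSlices(ExecutorService executor, LeafSlice[] slices)
    throws InterruptedException, ExecutionException {
  List<Future<TopDocs>> futures = new ArrayList<>();
  for (LeafSlice slice : slices) {
    futures.add(executor.submit(() -> searchSlice(slice))); // heavy work, offloaded
  }
  TopDocs[] perSlice = new TopDocs[futures.size()];
  for (int i = 0; i < perSlice.length; i++) {
    perSlice[i] = futures.get(i).get(); // caller thread only waits and collects
  }
  return perSlice;
}
```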
Closes #12498
When performing concurrent search, we may get an execution exception
from one or more slices. In that case, we'd like to rethrow the cause of
the execution exception, which we do by wrapping it into a new runtime
exception. Instead, we can rethrow runtime exceptions as-is, and the
same is true for io exceptions. Any other exception is still wrapped
into a new runtime exception. This unifies the exceptions that get
thrown between sequential codepath (when no executor is provided) and
concurrent codepath (when an executor is provided).
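A minimal sketch of the unwrapping policy described (not the exact Lucene code):

```
private static void rethrow(Future<?> future) throws IOException {
  try {
    future.get();
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new RuntimeException(e);
  } catch (ExecutionException e) {
    Throwable cause = e.getCause();
    if (cause instanceof RuntimeException) {
      throw (RuntimeException) cause; // as-is, like the sequential codepath
    } else if (cause instanceof IOException) {
      throw (IOException) cause; // as-is, like the sequential codepath
    } else {
      throw new RuntimeException(cause); // anything else still gets wrapped
    }
  }
}
```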
This commit removes the QueueSizeBasedExecutor (package private) in favour of simply offloading concurrent execution to the provided executor. If specific behaviour is needed, it can all be included in the executor itself.
This removes an instanceof check that determines which type of executor wrapper is used. Previously, some tasks could be executed on the caller thread depending on queue size, whenever a rejection happened, or always for the last slice. This behaviour is not configurable in any way and is too rigid. Rather than making this pluggable, I propose to make Lucene less opinionated about concurrent task execution and require that users include their own execution strategy directly in the executor that they provide to the index searcher.
Relates to #12498
The current query returns parent IDs based on the nearest child ID's score. However, it's difficult to invert that relationship (i.e., to determine what exactly the nearest child was during search).
So, I changed the new `ToParentBlockJoin[Byte|Float]KnnVectorQuery` to `DiversifyingChildren[Byte|Float]KnnVectorQuery`, and it now returns the nearest child ID instead of just that child's parent ID. The results are still diversified by parent ID.
Now it's easy to determine the nearest child vector, as that is what the query returns. To determine its parent, it's as simple as using the previously provided parent bit set.
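For illustration, a hedged usage sketch; the field names and parameter values are made up, so check the actual constructor signature:

```
// parent documents are marked by a filter; children carry the vectors
BitSetProducer parentsFilter =
    new QueryBitSetProducer(new TermQuery(new Term("docType", "parent")));
Query query =
    new DiversifyingChildrenFloatKnnVectorQuery(
        "passage_vector", queryVector, /* childFilter= */ null, /* k= */ 10, parentsFilter);
TopDocs hits = searcher.search(query, 10); // scored docs are the nearest children
// a child's parent is the next set bit at/after the child's doc ID in the
// parent bit set produced by parentsFilter
```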
Related to: https://github.com/apache/lucene/pull/12434
We should delete this comment, since the constructor parameter was already removed in LUCENE-2876; its description of a 'given Similarity' is a little confusing to the reader.
Scorers always provide non-negative scores.
This is a follow up to: https://github.com/apache/lucene/pull/12434
Adds a test for when parents are missing in the index and verifies we return no hits. Previously, this would have thrown an NPE.
This introduces `LeafCollector#collect(DocIdStream)` to enable collectors to
collect batches of doc IDs at once. `BooleanScorer` takes advantage of this by
creating a `DocIdStream` whose `count()` method counts the number of bits that
are set in the bit set of matches in the current window, instead of naively
iterating over all matches.
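A simplified sketch of such a stream (not the exact Lucene class): the window's matches live in a bit set, so counting is a `cardinality()` call rather than a per-doc loop.

```
import java.io.IOException;
import org.apache.lucene.search.CheckedIntConsumer;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.DocIdStream;
import org.apache.lucene.util.FixedBitSet;

// Sketch: matches of the current window, offset by the window's base doc ID.
final class BitSetDocIdStream extends DocIdStream {
  private final FixedBitSet matches;
  private final int base;

  BitSetDocIdStream(FixedBitSet matches, int base) {
    this.matches = matches;
    this.base = base;
  }

  @Override
  public void forEach(CheckedIntConsumer<IOException> consumer) throws IOException {
    int i = matches.nextSetBit(0);
    while (i != DocIdSetIterator.NO_MORE_DOCS) {
      consumer.accept(base + i);
      i = i + 1 < matches.length() ? matches.nextSetBit(i + 1) : DocIdSetIterator.NO_MORE_DOCS;
    }
  }

  @Override
  public int count() {
    return matches.cardinality(); // counts set bits word by word, no iteration
  }
}
```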
On wikimedium10m, this yields a ~20% speedup when counting hits for the `title
OR 12` query (2.9M hits).
Relates #12358
Periodically, the random indexer will force-merge on close, which means that what was originally indexed as the zeroth document may no longer be the zeroth document.
This commit adjusts the assertion to ensure the `toString` format is as expected for `DocAndScoreQuery`, regardless of the matching doc ID in the test.
This seed shows the issue:
```
./gradlew test --tests TestKnnByteVectorQuery.testToString -Dtests.seed=B78CDB966F4B8FC5
```
* hunspell: simplify TrigramAutomaton to speed up the suggestion enumeration
avoid the automaton access on definitely absent characters;
count the scores for all substring lengths together
A `join` within Lucene is built by adding child docs and parent docs in order. Since our vector field already supports sparse indexing, it should be able to support parent-join indexing.
However, when searching for the closest `k`, we still get the k nearest child vectors, with no way to join back to their parents.
This commit adds this ability through some significant changes:
- New leaf reader function that allows a collector for knn results
- The knn results can then utilize bit-sets to join back to the parent id
This type of support is critical for nearest passage retrieval over larger documents. Generally, you want the top-k documents and knowledge of the nearest passages over each top-k document. Lucene's join functionality is a nice fit for this.
This does not replace the need for multi-valued vectors, which is important for other ranking methods (e.g. colbert token embeddings). But, it could be used in the case when metadata about the passage embedding must be stored (e.g. the related passage).
BooleanScorer aligns windows to multiples of 2048, but it doesn't have to.
Actually, not aligning windows can help evaluate fewer windows overall and
speed up query evaluation. For instance, matches at doc IDs 1000 and 2100 fall
into two aligned windows, [0, 2048) and [2048, 4096), but a single unaligned
window starting at doc 1000 covers both.
The way `DefaultBulkScorer` uses `ConjunctionDISI` may make it advance the
competitive iterator beyond the end of the window. This may cause bugs with
bulk scorers such as `BooleanScorer` that sometimes delegate to the single
clause that has matches in a given window of doc IDs. We should then make sure
to not advance the competitive iterator beyond the end of the window based on
this clause, as other clauses may have matches as well.
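One possible way to enforce that, as a hedged sketch with hypothetical names (not necessarily the actual fix):

```
// when delegating a window to a single clause, clamp how far that clause is
// allowed to drag the competitive iterator: other clauses may still have
// matches before windowMax
int target = Math.min(singleClauseIterator.docID(), windowMax);
if (competitiveIterator.docID() < target) {
  competitiveIterator.advance(target);
}
```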