WANDScorer implements block-max WAND and needs to recompute score upper bounds
whenever it moves to a different block. Thus it's important for these blocks to
be large enough to avoid re-computing score upper bounds over and over again.
With this commit, WANDScorer no longer uses clauses whose cost is higher than
the cost of the filter to compute block boundaries. This effectively makes
blocks larger when the filter is more selective.
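A rough sketch of the idea, not WANDScorer's actual code (the `ClauseWrapper` record and `computeBlockBoundary` method are illustrative): the next block boundary is taken as the minimum shallow-advance boundary over the clauses, skipping clauses that are costlier than the filter.

```java
import java.io.IOException;
import java.util.List;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Scorer;

class BlockBoundarySketch {
  /** Stand-in for Lucene's internal per-clause wrapper. */
  record ClauseWrapper(Scorer scorer, long cost) {}

  /** Clauses costlier than the filter no longer shrink the block. */
  static int computeBlockBoundary(List<ClauseWrapper> clauses, long filterCost, int target)
      throws IOException {
    int boundary = DocIdSetIterator.NO_MORE_DOCS;
    for (ClauseWrapper w : clauses) {
      if (w.cost() > filterCost) {
        continue; // too costly relative to the filter
      }
      boundary = Math.min(boundary, w.scorer().advanceShallow(target));
    }
    return boundary;
  }
}
```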
* Adding filter to toString() of KnnFloatVectorQuery when it's present (addresses https://github.com/apache/lucene/issues/13983)
* Addressing review comments
* Adding the same for KnnByteVectorQuery
* Unit test improvements
* Tidy
* Adding CHANGES entry for the bug fix
A while back we added an optimized bulk scorer that implements block-max AND;
this yielded a good speedup on nightly benchmarks, see annotation `FP` at
https://benchmarks.mikemccandless.com/AndHighHigh.html. With this PR, filtered
conjunctions now also run through this optimized bulk scorer by doing two
things:
- It flattens inner conjunctions, so a query initially written as something
  like `+(+term1 +term2) #filter` is rewritten to `+term1 +term2 #filter`
  (see the sketch after this list).
- It evaluates queries that have a mix of MUST and FILTER clauses through
  `BlockMaxConjunctionBulkScorer` by treating FILTER clauses as scoring clauses
  that produce a score of 0.
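To illustrate the flattening with Lucene's public query API (field and term names are made up for the example):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

// A query written as `+(+term1 +term2) #filter`...
BooleanQuery nested =
    new BooleanQuery.Builder()
        .add(
            new BooleanQuery.Builder()
                .add(new TermQuery(new Term("body", "term1")), Occur.MUST)
                .add(new TermQuery(new Term("body", "term2")), Occur.MUST)
                .build(),
            Occur.MUST)
        .add(new TermQuery(new Term("body", "filter")), Occur.FILTER)
        .build();

// ...now rewrites to the equivalent flat form `+term1 +term2 #filter`, which is
// eligible for BlockMaxConjunctionBulkScorer with the FILTER clause scoring 0.
BooleanQuery flattened =
    new BooleanQuery.Builder()
        .add(new TermQuery(new Term("body", "term1")), Occur.MUST)
        .add(new TermQuery(new Term("body", "term2")), Occur.MUST)
        .add(new TermQuery(new Term("body", "filter")), Occur.FILTER)
        .build();
```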
This commit allows easier verification of the Panama Vectorization provider with newer Java versions.
The upper bound Java version of the Vectorization provider is hardcoded to the version that has been tested and is known to work. This is a bit inflexible when experimenting with and verifying newer JDK versions. This change proposes to add a new system property that allows setting the upper bound of the range of supported Java versions.
With this change, and the accompanying small gradle change, one can verify newer JDKs as follows:
CI=true; RUNTIME_JAVA_HOME=/Users/chegar/binaries/jdk-24.jdk-ea-b23/Contents/Home
./gradlew :lucene:core:test -Dorg.apache.lucene.vectorization.upperJavaFeatureVersion=24
This change helps with both testing and verifying Early Access JDK builds, as well as allowing the upper bound to be overridden when the JDK is known to work fine.
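A minimal sketch of how such an override property could be consumed on the Java side (constant and method names are illustrative, not the actual VectorizationProvider code):

```java
// Hedged sketch: raise the tested upper bound only when the property is set to
// a higher, parseable value; otherwise stick with the tested default.
static int upperJavaFeatureVersion(int testedDefault) {
  String override =
      System.getProperty("org.apache.lucene.vectorization.upperJavaFeatureVersion");
  if (override == null) {
    return testedDefault;
  }
  try {
    return Math.max(testedDefault, Integer.parseInt(override));
  } catch (NumberFormatException e) {
    return testedDefault; // ignore malformed values
  }
}
```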
Many vector-related tests set up a codec manually by extending the current
codec. This makes bumping the current codec a bit painful as all these files
need to be touched. This commit migrates to `TestUtil#alwaysKnnVectorsFormat`,
similarly to what we do for postings and doc values.
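For example, a test that previously subclassed the current codec can now be written as follows (the concrete format and its parameters are only an example):

```java
import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.lucene99.Lucene99HnswVectorsFormat;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.tests.util.TestUtil;

// Wrap the desired vectors format instead of extending the current codec by hand.
Codec codec = TestUtil.alwaysKnnVectorsFormat(new Lucene99HnswVectorsFormat(16, 100));
IndexWriterConfig iwc = new IndexWriterConfig().setCodec(codec);
```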
This commit tries to improve the algorithm as follows:
1. If there is only one vertex, no further checks are needed and that's the one used.
2. If there is a common vertex, it first computes the signed area of each join; if they have different signs,
it chooses the negative one as that's the convex union. If they have the same sign, it computes the angle
of the join and chooses the smallest angle (the signed-area test is sketched below).
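The signed-area test in point 2 boils down to the standard 2D cross product; a minimal sketch, not the actual polygon-union code:

```java
// Twice the signed area of the triangle (a, b, c): negative means the turn
// a -> b -> c is clockwise, which is the "convex" join preferred above.
static double signedArea(double ax, double ay, double bx, double by, double cx, double cy) {
  return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
}
```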
No need to go through the indirection of two wrapped functions; just put the logic in plain
methods. Also, we can just outright set the field if there's no executor.
Our collector managers have a `supportsConcurrency` flag to optimize the case
when they are used in a single thread. This PR proposes to remove this flag now
that the optimization doesn't do much as a result of #13943.
This reverts commit 1ee4f8a111.
We have observed performance regressions that can be linked to #13221. We will need to revise the logic that this change introduced in main and branch_10x. While we do so, I propose that we back it out of branch_10_0 and release Lucene 10 without it.
Closes #13856
1. Rearrange/rename the parameters to be more idiomatic (e.g., follow conventions of Arrays#... methods)
2. Add assert to ensure expected sortedness we may rely on in the future (so we're not trappy)
3. Migrate PostingsReader to call VectorUtil instead of VectorUtilSupport (so it benefits from the common assert)
In Lucene 8.4, we updated postings to work on long[] arrays internally. This
allowed us to work around the lack of explicit vectorization support in the JVM
(auto-vectorization doesn't detect all the scenarios that we would like to
handle) by, for instance, summing up two integers in one operation.
With explicit vectorization now available, it looks like we can get more
benefit from the ability to compare multiple integers in one operation than
from summing up two integers in one operation. Moving back to ints lets us
compare 2x more integers at once than with longs.
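The 2x factor simply comes from lane counts: for the same hardware vector width, the int species has twice as many lanes as the long species. A small Panama Vector API check (requires `--add-modules jdk.incubator.vector`):

```java
import jdk.incubator.vector.IntVector;
import jdk.incubator.vector.LongVector;

public class LaneCount {
  public static void main(String[] args) {
    // e.g. prints 8 and 4 on a machine with 256-bit vectors
    System.out.println("int lanes:  " + IntVector.SPECIES_PREFERRED.length());
    System.out.println("long lanes: " + LongVector.SPECIES_PREFERRED.length());
  }
}
```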
When initializing a joint graph from one of the segments' graphs,
we always assume that a segment's graph is present. But later we want
to explore an option where some segments will not have graphs (#13447).
This change makes it possible to account for missing graphs.
PR #13692 tried to speed up advancing by using branchless binary search, but while this yielded a speedup on my machine, it caused a slowdown on nightly benchmarks.
This PR tries a different approach using vectorization. Experimentation suggests that it speeds up queries that advance to the next few doc IDs, such as `AndHighHigh`.
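A hedged sketch of the vectorized-advance idea, not the actual Lucene code: compare a whole vector of buffered doc IDs against the target at once and return the index of the first one that is greater than or equal to it. It assumes the buffer length is a multiple of the species length, as block-encoded buffers typically are.

```java
import jdk.incubator.vector.IntVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

class VectorizedAdvanceSketch {
  private static final VectorSpecies<Integer> SPECIES = IntVector.SPECIES_PREFERRED;

  static int firstGreaterOrEqual(int[] docBuffer, int from, int target) {
    // align down to a full vector; doc IDs are sorted, so earlier lanes are < target anyway
    for (int i = from - (from % SPECIES.length()); i < docBuffer.length; i += SPECIES.length()) {
      IntVector docs = IntVector.fromArray(SPECIES, docBuffer, i);
      int lane = docs.compare(VectorOperators.GE, target).firstTrue();
      if (lane < SPECIES.length()) {
        return i + lane; // first position whose doc ID is >= target
      }
    }
    return docBuffer.length; // no buffered doc ID is >= target
  }
}
```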
127 times out of 128, nextDoc() returns the next doc ID in the buffer.
Currently, we check if the current doc is equal to the last doc ID in the block
to know if we need to refill. We can do better by comparing the current index
in the block with the block size, which is a bit more efficient since the
latter is a constant.
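A sketch of the cheaper refill check (field and method names are illustrative, not the actual postings enum):

```java
// Compare the position within the block against the constant block size,
// instead of comparing the current doc against the last doc ID of the block.
private static final int BLOCK_SIZE = 128;
private final int[] docBuffer = new int[BLOCK_SIZE];
private int posInBlock = BLOCK_SIZE - 1; // forces a refill on the first call

int nextDoc() throws IOException {
  if (++posInBlock == BLOCK_SIZE) { // constant comparison; false 127 times out of 128
    refillDocBuffer(); // decode the next block of doc IDs into docBuffer (not shown)
    posInBlock = 0;
  }
  return docBuffer[posInBlock];
}
```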
`TopScoreDocCollectorManager` has a dependency on `HitsThresholdChecker`, which
is essentially a shared counter that is incremented until it reaches the total
hits threshold, when the scorer can start dynamically pruning hits.
A consequence of removing this shared checker is that dynamic pruning may start
a bit later; it now starts as soon as:
- either the current slice collected `totalHitsThreshold` hits,
- or another slice collected `totalHitsThreshold` hits and the current slice
collected enough hits (up to 1,024) to check the shared
`MaxScoreAccumulator`.
So in short, it exchanges a bit more work globally in favor of a bit less
contention. A longer-term goal of mine is to stop specializing our
`CollectorManager`s based on whether they are going to be used concurrently or
not.
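A hedged sketch of the resulting pruning-start condition, not the actual collector code (names and the placement of the 1,024-hit interval are illustrative):

```java
static boolean canStartPruning(
    int sliceHits, int totalHitsThreshold, boolean anotherSliceReachedThreshold) {
  if (sliceHits >= totalHitsThreshold) {
    return true; // this slice collected enough hits on its own
  }
  // the shared MaxScoreAccumulator is only consulted periodically (at most
  // every 1,024 collected hits) to limit contention across slices
  return anotherSliceReachedThreshold && sliceHits % 1024 == 0;
}
```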
`LeafSimScorer` is a specialization of a `SimScorer` for a given segment. It
doesn't add much value, but benchmarks suggest that it adds measurable overhead
to queries sorted by score.
When `totalHitsThreshold` is `Integer.MAX_VALUE`, dynamic pruning is never used
and all hits get evaluated. Thus, the minimum competitive score always stays at
zero, and there is nothing to exchange across slices.
Currently, we traverse the BKD tree or perform a binary search using DocValues first, and only then check whether the count can be obtained in the count() method of IndexSortSortedNumericDocValuesRangeQuery.
We should consider providing a mechanism to perform this check beforehand, to avoid unnecessary processing when dealing with a sparse range.