Recent speedups by making call sites bimorphic made me want to play with combining all postings enums and impacts enums of the default codec into a single class, in order to reduce polymorphism. Unfortunately, it does not yield a speedup since the major polymorphic call sites we have that hurt performance (DefaultBulkScorer, ConjunctionDISI) are still 3-polymorphic or more.
Still, reducing polymorphism at little performance cost is a good trade-off: it would help make call sites bimorphic for users who don't have as much query diversity as the nightly benchmarks, or in the future once we remove other causes of polymorphism.
The specialization of `SimpleCollector` vs. `PagingCollector` only helps save a
null check, so it's probably not worth the complexity. Benchmarks cannot see a
difference with this change.
This addresses an existing TODO about giving terms a Zipfian distribution, and
disables query caching to make sure that two-phase iterators are properly
tested.
This update introduces an option to store term vectors for fields indexed with FeatureField.
With this option, term vectors can be used to access all features of each document.
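A hedged sketch of reading features back through term vectors, assuming a field indexed with `FeatureField` and the new store-term-vectors option; `FeatureField#decodeFeatureValue` is assumed to be accessible here (the term frequency is what encodes the feature value):

```java
import java.io.IOException;

import org.apache.lucene.document.FeatureField;
import org.apache.lucene.index.Fields;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

public class FeatureTermVectorsExample {
  static void printFeatures(IndexReader reader, int docID, String field) throws IOException {
    Fields vectors = reader.termVectors().get(docID);
    Terms terms = vectors == null ? null : vectors.terms(field);
    if (terms == null) {
      return; // no term vectors stored for this document/field
    }
    TermsEnum termsEnum = terms.iterator();
    for (BytesRef feature = termsEnum.next(); feature != null; feature = termsEnum.next()) {
      PostingsEnum postings = termsEnum.postings(null, PostingsEnum.FREQS);
      postings.nextDoc();
      // FeatureField encodes the feature value in the term frequency.
      float value = FeatureField.decodeFeatureValue(postings.freq());
      System.out.println(feature.utf8ToString() + " = " + value);
    }
  }
}
```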
This commit reduces the Panama float vector distance implementations to below the maximum bytecode size for a hot method to be inlined (325 bytes).
E.g. Previously: org.apache.lucene.internal.vectorization.PanamaVectorUtilSupport::dotProductBody (355 bytes) failed to inline: callee is too large.
After: org.apache.lucene.internal.vectorization.PanamaVectorUtilSupport::dotProductBody (3xx bytes) inline (hot)
This helps things a little.
Co-authored-by: Robert Muir <rmuir@apache.org>
This PR changes the following:
- As much work as possible is moved from `nextDoc()`/`advance()` to
`nextPosition()`. This way, the overhead of reading positions is only paid when
all query terms agree on a candidate.
- Frequencies are read lazily. Again, this helps when a document is needed in a
block but the clauses never agree on a common candidate match, so frequencies
never get decoded (see the sketch after this list).
- A few other minor optimizations.
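To illustrate the lazy-frequencies point, here is a hypothetical sketch of the pattern (not the actual postings implementation): keep the encoded block around and only decode it the first time a frequency is requested.

```java
// Hypothetical sketch of the "decode frequencies lazily" pattern; names and structure
// are illustrative, not the actual Lucene postings code.
final class LazyFreqBlock {
  private final int[] encodedFreqs; // stand-in for the compressed freq block
  private int[] decodedFreqs;       // decoded on first use only

  LazyFreqBlock(int[] encodedFreqs) {
    this.encodedFreqs = encodedFreqs;
  }

  int freq(int indexInBlock) {
    if (decodedFreqs == null) {
      // Only pay the decoding cost if a caller actually asks for frequencies,
      // e.g. when all conjunction clauses agree on a candidate document.
      decodedFreqs = decode(encodedFreqs);
    }
    return decodedFreqs[indexInBlock];
  }

  private static int[] decode(int[] encoded) {
    return encoded.clone(); // real code would un-pack a compressed block here
  }
}
```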
The corresponding readLatestCommit method is public and can be used to
read segment infos from indices that are older than N - 1.
The same should be possible for readCommit, but that requires making the
overload that takes the minimum supported version as an argument public.
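A minimal sketch, assuming the public `SegmentInfos#readLatestCommit` overload that takes a minimum supported major version, as described above (the version passed here is just an example):

```java
import java.nio.file.Path;

import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ReadOldCommitExample {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Path.of(args[0]))) {
      // Passing a lower minimum supported major version allows reading segment infos
      // from indices that are older than N - 1 (the value 7 here is just an example).
      SegmentInfos infos = SegmentInfos.readLatestCommit(dir, 7);
      System.out.println("segments in the latest commit: " + infos.size());
    }
  }
}
```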
This implements a small contained hack to make sure that our compound scorers
like `MaxScoreBulkScorer`, `ConjunctionBulkScorer`,
`BlockMaxConjunctionBulkScorer`, `WANDScorer` and `ConjunctionDISI` only have
two concrete implementations of `DocIdSetIterator` and `Scorable` to deal with.
This helps because it makes calls to `DocIdSetIterator#nextDoc()`,
`DocIdSetIterator#advance(int)` and `Scorable#score()` bimorphic at most, and
bimorphic calls are candidates for inlining.
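As a rough sketch of the idea (hypothetical wrapper, not the actual class introduced by this change):

```java
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;

// All clause iterators get wrapped in one concrete type, so call sites in
// conjunction/disjunction scorers see at most two implementations.
final class WrappedDISI extends DocIdSetIterator {
  private final DocIdSetIterator in;

  WrappedDISI(DocIdSetIterator in) {
    this.in = in;
  }

  @Override
  public int docID() {
    return in.docID();
  }

  @Override
  public int nextDoc() throws IOException {
    // The call on `in` may still be megamorphic, but callers of WrappedDISI#nextDoc()
    // only ever see this one concrete type (plus at most one other), keeping their own
    // call sites bimorphic and inlinable.
    return in.nextDoc();
  }

  @Override
  public int advance(int target) throws IOException {
    return in.advance(target);
  }

  @Override
  public long cost() {
    return in.cost();
  }
}
```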
This should help speed up boolean queries of term queries at the expense of
boolean queries of other query types. This feels fair to me as it gives more
speedups than slowdowns in benchmarks, and boolean queries of term queries
are extremely common. Boolean queries that mix term queries and other types of
queries may get a slowdown or a speedup depending on whether they get more from
the speedup on their term clauses than they lose on their other clauses.
This commit adds IndexInput::isLoaded to help determine whether the contents of an input are resident in physical memory.
The intent of this new method is to help build inspection and diagnostic infrastructure on top.
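A hedged usage sketch, assuming `isLoaded` returns an `Optional<Boolean>` that is empty when the directory implementation cannot tell:

```java
import java.util.Optional;

import org.apache.lucene.store.IndexInput;

final class ResidencyCheck {
  // Hypothetical helper: summarize whether an input's contents are resident in memory.
  static String describe(IndexInput in) {
    Optional<Boolean> loaded = in.isLoaded(); // assumed return type
    return loaded
        .map(l -> l ? "resident in physical memory" : "not (fully) resident")
        .orElse("residency unknown for this directory implementation");
  }
}
```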
Running filtered disjunctions with a specialized bulk scorer seems to yield a
good speedup. For what it's worth, I also tried to implement a MAXSCORE-based
scorer to see if it had to do with the `BulkScorer` specialization or the
algorithm, but it didn't help.
For this to work properly, I had to add a rewrite rule that inlines disjunctions
that occur in a MUST clause (see the sketch below).
As a next step, it would be interesting to see if we can further optimize this
by loading the filter into a bitset and applying it like live docs.
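As a hedged illustration of the rewrite (not the exact rule added by the PR), a filtered disjunction `+(a b) #filter` can be expressed with the disjunction inlined as SHOULD clauses, which lets the specialized bulk scorer apply:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class InlineDisjunctionExample {
  // Equivalent of `+(a b) #filter` with the disjunction inlined as SHOULD clauses.
  public static Query inlined(Query filter) {
    return new BooleanQuery.Builder()
        .add(new TermQuery(new Term("body", "a")), Occur.SHOULD)
        .add(new TermQuery(new Term("body", "b")), Occur.SHOULD)
        .add(filter, Occur.FILTER)
        // At least one SHOULD clause must match, preserving the semantics of the
        // original MUST-wrapped disjunction.
        .setMinimumNumberShouldMatch(1)
        .build();
  }
}
```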
Currently, `WANDScorer` considers that a hit is a match if the sum of maximum
scores across clauses is greater than or equal to the minimum competitive score.
We can do better by computing scores of leading clauses on the fly. This helps
because scores are often lower than the score upper bound, so using actual
scores instead of score upper bounds can help skip advancing more clauses.
For reference, we are already doing the same trick in our conjunction (bulk)
scorers and in `MaxScoreBulkScorer` (bulk scorer for top-level disjunctions).
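To illustrate the check, a simplified, hypothetical sketch (names do not match the actual `WANDScorer` internals): actual scores of the clauses that already matched the candidate are summed, and only the remaining clauses contribute their score upper bounds.

```java
import java.io.IOException;
import java.util.List;

import org.apache.lucene.search.Scorer;

final class WandCheckSketch {
  static boolean maybeCompetitive(
      List<Scorer> leads, double tailMaxScoreSum, double minCompetitiveScore) throws IOException {
    double sum = tailMaxScoreSum;
    for (Scorer lead : leads) {
      // Actual scores are usually lower than the per-block upper bounds, so this sum is
      // tighter and lets us skip advancing tail clauses more often.
      sum += lead.score();
    }
    return sum >= minCompetitiveScore;
  }
}
```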
We currently use `SlowImpactsEnum` for terms whose `docFreq` is less than 128
because it's convenient as these terms don't have impacts anyway. But a recent
slowdown on nightly benchmarks suggests that this contributes to making some
hot calls more polymorphic than we'd like, so this PR moves such terms back to
the regular impacts enums.
`CombinedFieldQuery` currently returns an infinite maximum score. We can do
better by returning the maximum score that the sim scorer can return, which in
the case of BM25 is bounded by the IDF. This makes `CombinedFieldQuery` eligible
for WAND/MAXSCORE (not their block-max variants though, since we return the
same score upper bound for the whole index).
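For reference, a rough sketch of why the bound holds, assuming Lucene's BM25 formulation (which omits the classical `(k1 + 1)` factor from the numerator): the term-frequency fraction is at most 1, so the score is bounded by the IDF.

```latex
\text{score} \;=\; \text{idf} \cdot \frac{tf}{tf + k_1 \, (1 - b + b \cdot dl/avgdl)} \;\le\; \text{idf}
```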
WANDScorer implements block-max WAND and needs to recompute score upper bounds
whenever it moves to a different block. Thus it's important for these blocks to
be large enough to avoid re-computing score upper bounds over and over again.
With this commit, WANDScorer no longer uses clauses whose cost is higher than
the cost of the filter to compute block boundaries. This effectively makes
blocks larger when the filter is more selective.
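A simplified, hypothetical sketch of the boundary computation described above (the real code works on impacts and scorers, not plain arrays):

```java
final class WindowBoundarySketch {
  // Compute the end of the current scoring window, ignoring clauses whose cost exceeds
  // the filter's cost so that the window is not shortened by clauses the filter will
  // mostly skip over anyway.
  static int windowEnd(int[] clauseBlockEnds, long[] clauseCosts, long filterCost, int maxDoc) {
    int end = maxDoc - 1;
    for (int i = 0; i < clauseBlockEnds.length; i++) {
      if (clauseCosts[i] <= filterCost) {
        end = Math.min(end, clauseBlockEnds[i]);
      }
    }
    return end;
  }
}
```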
This PR removes the allocation of a new `LockedRow` for each locking operation in `HnswLock`. Even though the object was released very quickly and the JIT supports on-stack allocation via escape analysis, it did not happen in my experiments on OpenJDK 21, so it's easier to avoid the allocation than to rely on the JIT.
This commit switches Lucene99FlatVectorsReader to ReadAdvice.SEQUENTIAL before a merge starts and reverts it to ReadAdvice.RANDOM at the end of the merge.
* Adding filter to toString() of KnnFloatVectorQuery when it's present (addresses https://github.com/apache/lucene/issues/13983)
* addressing review comments
* adding the same for KnnByteVectorQuery
* unit test improvements
* tidy
* adding changes entry for the bug fix
A while back we added an optimized bulk scorer that implements block-max AND;
this yielded a good speedup on nightly benchmarks, see annotation `FP` at
https://benchmarks.mikemccandless.com/AndHighHigh.html. With this PR, filtered
conjunctions now also run through this optimized bulk scorer by doing two
things:
- It flattens inner conjunctions, so that a query initially written as
something like `+(+term1 +term2) #filter` gets rewritten to
`+term1 +term2 #filter`.
- It evaluates queries that mix MUST and FILTER clauses through
`BlockMaxConjunctionBulkScorer`, by treating FILTER clauses as scoring clauses
that produce a score of 0 (see the sketch below).
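A hypothetical sketch of the "FILTER clause as zero-score scoring clause" trick (assumes Lucene 10's `Scorable`, where `score()` is the only abstract method):

```java
import java.io.IOException;

import org.apache.lucene.search.Scorable;

// Wrap a filter so it participates in the conjunction like any other clause,
// but contributes 0 to the document's score.
final class ZeroScoreScorable extends Scorable {
  @Override
  public float score() throws IOException {
    return 0f; // a filter never contributes to the score
  }
}
```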
This commit allows easier verification of the Panama Vectorization provider with newer Java versions.
The upper bound Java version of the Vectorization provider is hardcoded to the version that has been tested and is known to work. This is a bit inflexible when experimenting with and verifying newer JDK versions. This change proposes to add a new system property that allows setting the upper bound of the range of supported Java versions.
With this change, and the accompanying small gradle change, one can verify newer JDKs as follows:
CI=true; RUNTIME_JAVA_HOME=/Users/chegar/binaries/jdk-24.jdk-ea-b23/Contents/Home
./gradlew :lucene:core:test -Dorg.apache.lucene.vectorization.upperJavaFeatureVersion=24
This change helps both testing and verifying with Early Access JDK builds, as well as allowing the upper bound to be overridden when the JDK is known to work fine.
Many vector-related tests set up a codec manually by extending the current
codec. This makes bumping the current codec a bit painful as all these files
need to be touched. This commit migrates to `TestUtil#alwaysKnnVectorsFormat`,
similarly to what we do for postings and doc values.
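A minimal sketch of the new test setup (the exact vectors format shown is illustrative):

```java
import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.lucene99.Lucene99HnswVectorsFormat;
import org.apache.lucene.tests.util.TestUtil;

public class KnnFormatTestSetup {
  // A codec that uses the given KNN vectors format for every field, replacing the old
  // pattern of subclassing the current codec in each vector-related test.
  static Codec knnTestCodec() {
    return TestUtil.alwaysKnnVectorsFormat(new Lucene99HnswVectorsFormat());
  }
}
```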
This commit tries to improve the algorithm as follows:
1. If there is only one vertex, then there are no further checks and that's the one used.
2. If there is a common vertex, it first computes the signed area of each join; if they have
different signs, it chooses the negative one as that's the convex union. If they have the same
sign, it computes the angle of each join and chooses the smallest angle (a rough sketch of the
signed-area test follows below).
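A rough sketch of the signed-area test from step 2 (hypothetical helper, not the actual Lucene code):

```java
final class JoinGeometrySketch {
  // Twice the signed area of the triangle a -> b -> c: positive for a counter-clockwise
  // turn at b, negative for a clockwise turn. Per the description above, when the two
  // candidate joins have areas of different signs, the negative one is chosen as the
  // convex union; when the signs match, the join angles are compared instead.
  static double signedArea(double ax, double ay, double bx, double by, double cx, double cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
  }
}
```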
No need to go through the indirection of two wrapped functions; just put the logic in plain
methods. Also, we can just outright set the field if there's no executor.
Our collector managers have a `supportsConcurrency` flag to optimize the case
when they are used in a single thread. This PR proposes to remove this flag now
that the optimization doesn't do much as a result of #13943.
This reverts commit 1ee4f8a111.
We have observed performance regressions that can be linked to #13221. We will need to revise the logic that this change introduced in main and branch_10x. While we do so, I propose that we back it out of branch_10_0 and release Lucene 10 without it.
Closes #13856
1. Rearrange/rename the parameters to be more idiomatic (e.g., follow conventions of Arrays#... methods)
2. Add an assert to ensure the expected sortedness that we may rely on in the future (so we're not trappy)
3. Migrate PostingsReader to call VectorUtil instead of VectorUtilSupport (so it benefits from the common assert)
In Lucene 8.4, we updated postings to work on long[] arrays internally. This
allowed us to work around the lack of explicit vectorization support in the JVM
(auto-vectorization doesn't detect all the scenarios that we would like to
handle), for instance by summing up two integers in one operation.
With explicit vectorization now available, it looks like we can get more
benefit from the ability to compare multiple integers in one operation than
from summing up two integers in one operation. Moving back to ints lets us
compare 2x more integers at once than with longs.
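A hedged illustration of the lane-count argument using the Panama Vector API (names and the comparison shown are illustrative, not the actual postings code):

```java
import jdk.incubator.vector.IntVector;
import jdk.incubator.vector.VectorMask;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class IntLanesExample {
  private static final VectorSpecies<Integer> INT_SPECIES = IntVector.SPECIES_PREFERRED;

  // With 256-bit registers, an IntVector holds 8 lanes while a LongVector holds only 4,
  // so int-based postings let us compare twice as many values per instruction.
  static int countGreaterThan(int[] docs, int target) {
    int count = 0;
    int i = 0;
    for (; i <= docs.length - INT_SPECIES.length(); i += INT_SPECIES.length()) {
      IntVector v = IntVector.fromArray(INT_SPECIES, docs, i);
      VectorMask<Integer> m = v.compare(VectorOperators.GT, target);
      count += m.trueCount();
    }
    for (; i < docs.length; i++) { // scalar tail
      if (docs[i] > target) {
        count++;
      }
    }
    return count;
  }
}
```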