This helps make call sites of `Bits#get` at most bimorphic when checking live
docs. This matters because calls to `FixedBitSet#get` can then be inlined when
live docs are stored in a `FixedBitSet`. Another reason this is important is
that these calls may be subject to auto-vectorization when applied to an array
of doc IDs, which cannot happen if the `FixedBitSet#get` call is not inlined.
While live docs are stored in a `FixedBitSet` by default, some features like
document-level security sometimes create wrappers.
I did not know it when I checked in the code, but this is almost exactly the v1
intersection algorithm from the "SIMD compression and the intersection of
sorted integers" paper.
Now that loading doc IDs into a bit set is much more efficient thanks to
auto-vectorization, it has become tempting to evaluate dense conjunctions by
and-ing bit sets.
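As a rough, hedged sketch (not the actual change; the clause iterators and `maxDoc` below are assumed to be given), a dense conjunction of two clauses could be evaluated by materializing each clause into a `FixedBitSet` and intersecting the sets:

```java
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BitSetIterator;
import org.apache.lucene.util.FixedBitSet;

class BitSetConjunctionSketch {
  // Materialize both clauses into bit sets, then intersect them.
  static void intersect(DocIdSetIterator clause1, DocIdSetIterator clause2, int maxDoc)
      throws IOException {
    FixedBitSet left = new FixedBitSet(maxDoc);
    left.or(clause1); // load the doc IDs that match the first clause
    FixedBitSet right = new FixedBitSet(maxDoc);
    right.or(clause2);
    left.and(right); // doc IDs that match both clauses

    BitSetIterator it = new BitSetIterator(left, left.cardinality());
    for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
      // collect doc
    }
  }
}
```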
In Brazilian Portuguese, the contraction of "em" (preposition) with the articles "a", "o", "as", "os" takes the forms "na", "nas", "no", "nos", which are common stop words.
For some reason, "nas" appears twice in the list while "no" is nowhere to be found; this was probably a mistake.
This pull request adds the missing word to the list and removes the duplicate.
* remove hardcoded lucene version, gets benchie working again
* add graviton4 instance type (c8g)
* use https clone to not require agent forwarding
* document prerequisites needed for this to work
* convert README to markdown
* apparently .md files need a license but .txt files do not
Similar to support for tolerating PFX/SFX count mismatches, add the
ability to tolerate REP count mismatches.
The issue arises in recent updates to the LibreOffice Mongolian dictionary
and is currently failing all PRs that change the analyzers:
https://bugs.documentfoundation.org/show_bug.cgi?id=164366
This is an iteration on #14064. The benefits of this approach are that the API
is a bit nicer and that it enables optimizations beyond the case where doc IDs
are stored in an `int[]`. The downside is that it only helps non-scoring
disjunctions for now, but we can look into scoring disjunctions later on.
Currently, the disjunction iterator puts all clauses in a heap in order to be
able to merge doc IDs in a streaming fashion. This is a good approach for
exhaustive evaluation, when only one clause moves to a different doc ID on
average and the per-iteration cost is on the order of O(log(N)), where N is the
number of clauses.
However, if a selective filter is applied, this could cause many clauses to
move to a different doc ID. In the worst-case scenario, all clauses could move
to a different doc ID, and the cost of maintaining heap invariants could grow to
O(N * log(N)) (every clause introduces a O(log(N)) cost). With many clauses,
this is much higher than the cost of checking all clauses sequentially: O(N).
To protect against this reordering overhead, DisjunctionDISIApproximation now
only puts the cheapest clauses in a heap, in a way that aims for at most about
1.5 clauses moving to a different doc ID on average. More expensive clauses are
checked linearly.
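A minimal, illustrative sketch of that hybrid strategy (generic names, not Lucene's actual DisjunctionDISIApproximation code): cheap clauses are merged through a priority queue, while expensive clauses are checked one by one.

```java
import java.io.IOException;
import java.util.List;
import java.util.PriorityQueue;

class HybridDisjunctionSketch {
  interface Clause {
    int docID(); // current doc ID, Integer.MAX_VALUE once exhausted
    int advance(int target) throws IOException; // move to the first doc ID >= target
  }

  private final PriorityQueue<Clause> cheap =
      new PriorityQueue<>((a, b) -> Integer.compare(a.docID(), b.docID()));
  private final List<Clause> expensive;

  HybridDisjunctionSketch(List<Clause> cheapClauses, List<Clause> expensiveClauses) {
    cheap.addAll(cheapClauses);
    this.expensive = expensiveClauses;
  }

  int advance(int target) throws IOException {
    // Only the cheap clauses pay heap-maintenance costs, and few of them are
    // expected to move per call, so this stays close to O(log(N)) per iteration.
    while (cheap.isEmpty() == false && cheap.peek().docID() < target) {
      Clause c = cheap.poll();
      c.advance(target);
      cheap.add(c);
    }
    int doc = cheap.isEmpty() ? Integer.MAX_VALUE : cheap.peek().docID();
    // Expensive clauses move rarely, so a plain linear check is cheaper than
    // paying O(log(N)) heap updates for each of them.
    for (Clause c : expensive) {
      int d = c.docID() < target ? c.advance(target) : c.docID();
      doc = Math.min(doc, d);
    }
    return doc; // first doc ID >= target that matches at least one clause
  }
}
```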
These classes specialize every number of bits per value up to 24. However,
performance for high numbers of bits per value matters little, because such
blocks are only used by short postings lists, which are fast to iterate anyway.
So this PR only specializes up to 16 bits per value.
For instance, if a postings list uses blocks of 17 bits per value, then at
least one delta between consecutive matching doc IDs needs 17 bits, i.e. one
can find gaps of 65,536 consecutive doc IDs that do not contain the term. Such
rare terms do not drive query performance.
This introduces a bulk scorer for `DisjunctionMaxQuery` that delegates to the
bulk scorers of the query clauses. This improves the performance of top-level
`DisjunctionMaxQuery`, especially when its clauses have optimized bulk scorers
themselves (e.g. disjunctions).
`docCountUpto` tracks the number of documents decoded so far, but it's only
used to compute the number of docs left to decode. So let's track the number of
docs left to decode instead.
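Roughly, the change amounts to something like the following (field and method names are illustrative, not the actual ones):

```java
// Before: track how many docs have been decoded so far and derive the remainder.
//   int docCountUpto = 0;
//   int docsLeft = docFreq - docCountUpto;
// After: track the remaining count directly and decrement it as blocks are decoded.
class RemainingDocsSketch {
  private int docsLeft;

  RemainingDocsSketch(int docFreq) {
    this.docsLeft = docFreq; // docs still to decode
  }

  void onBlockDecoded(int blockSize) {
    docsLeft -= blockSize; // nothing to subtract when the remainder is needed
  }

  int docsLeft() {
    return docsLeft;
  }
}
```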
Recent speedups from making call sites bimorphic made me want to play with combining all postings enums and impacts enums of the default codec into a single class, in order to reduce polymorphism. Unfortunately, it does not yield a speedup, since the major polymorphic call sites that hurt performance (DefaultBulkScorer, ConjunctionDISI) still see 3 or more concrete implementations.
Still, reduced polymorphism at little performance cost is a good trade-off: it would help make call sites bimorphic for users who have less query diversity than the nightly benchmarks, or in the future once we remove other causes of polymorphism.
The specialization of `SimpleCollector` vs. `PagingCollector` only helps save a
null check, so it's probably not worth the complexity. Benchmarks cannot see a
difference with this change.
This addresses an existing TODO about giving terms a Zipfian distribution, and
disables query caching to make sure that two-phase iterators are properly
tested.
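As a rough sketch of what a Zipfian term distribution means here (illustrative code, not the actual test change), term ranks can be drawn from a Zipf-like law where the weight of the i-th most frequent term is proportional to 1/i^s:

```java
import java.util.Arrays;
import java.util.Random;

class ZipfTermSampler {
  private final double[] cdf;
  private final Random random = new Random(42);

  ZipfTermSampler(int numTerms, double exponent) {
    cdf = new double[numTerms];
    double sum = 0;
    for (int rank = 1; rank <= numTerms; rank++) {
      sum += 1.0 / Math.pow(rank, exponent); // weight of the rank-th most frequent term
      cdf[rank - 1] = sum;
    }
    for (int i = 0; i < numTerms; i++) {
      cdf[i] /= sum; // normalize into a cumulative distribution
    }
  }

  /** Returns a 0-based term rank, with low ranks much more likely than high ones. */
  int nextRank() {
    int i = Arrays.binarySearch(cdf, random.nextDouble());
    return i >= 0 ? i : -i - 1; // first bucket whose cumulative weight exceeds the draw
  }
}
```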
This update introduces an option to store term vectors generated by the FeatureField.
With this option, term vectors can be used to access all features for each document.
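Sketch of what this could look like in practice. The FeatureField constructor overload that turns on term vectors is an assumption here (the actual API may differ); reading them back goes through the standard term vectors APIs, and FeatureField encodes the feature value into the term frequency, so the raw frequency is what comes back:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.FeatureField;
import org.apache.lucene.index.*;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class FeatureTermVectorsSketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = new ByteBuffersDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig())) {
      Document doc = new Document();
      // Assumption: a constructor overload whose last argument enables term vectors.
      doc.add(new FeatureField("features", "pagerank", 3.5f, true));
      doc.add(new FeatureField("features", "recency", 0.8f, true));
      writer.addDocument(doc);
      writer.commit();

      try (DirectoryReader reader = DirectoryReader.open(writer)) {
        Terms terms = reader.termVectors().get(0).terms("features");
        TermsEnum termsEnum = terms.iterator();
        while (termsEnum.next() != null) {
          PostingsEnum postings = termsEnum.postings(null, PostingsEnum.FREQS);
          postings.nextDoc();
          // The term frequency carries the (encoded) feature value.
          System.out.println(termsEnum.term().utf8ToString() + " -> raw freq " + postings.freq());
        }
      }
    }
  }
}
```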
This commit reduces the bytecode size of the Panama vector float distance implementations to below the maximum bytecode size at which a hot method can still be inlined (325 bytes).
E.g. Previously: org.apache.lucene.internal.vectorization.PanamaVectorUtilSupport::dotProductBody (355 bytes) failed to inline: callee is too large.
After: org.apache.lucene.internal.vectorization.PanamaVectorUtilSupport::dotProductBody (3xx bytes) inline (hot)
This helps things a little.
Co-authored-by: Robert Muir <rmuir@apache.org>
This PR changes the following:
- As much work as possible is moved from `nextDoc()`/`advance()` to
`nextPosition()`, so that the overhead of reading positions is only paid when
all query terms agree on a candidate.
- Frequencies are read lazily (sketched below). Again, this helps when a block
of documents has to be decoded but the clauses never agree on a common
candidate match, so frequencies never need to be decoded.
- A few other minor optimizations.
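For the lazy frequencies, the pattern is roughly the following (illustrative class, not the actual postings reader): decode the doc ID block eagerly, but defer decoding the frequency block until `freq()` is actually called on it.

```java
class LazyFreqBlockSketch {
  private final int[] docs;        // decoded doc IDs of the current block
  private final int[] packedFreqs; // still-encoded frequencies of the block
  private final int[] freqs;
  private boolean freqsDecoded;
  private int index;

  LazyFreqBlockSketch(int[] docs, int[] packedFreqs) {
    this.docs = docs;
    this.packedFreqs = packedFreqs;
    this.freqs = new int[docs.length];
  }

  int nextDoc() {
    return docs[index++]; // no frequency work on the hot advance path
  }

  int freq() {
    if (freqsDecoded == false) {
      decodeFreqs(); // only paid if the clauses agreed on a doc in this block
      freqsDecoded = true;
    }
    return freqs[index - 1];
  }

  private void decodeFreqs() {
    // stand-in for the real bit-unpacking of the frequency block
    System.arraycopy(packedFreqs, 0, freqs, 0, freqs.length);
  }
}
```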
The corresponding readLatestCommit method is public and can be used to
read segment infos from indices that are older than N - 1.
The same should be possible for readCommit, but that requires the method
that takes the minimum supported version as an argument to be public.
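For instance, assuming the public readLatestCommit overload described above that takes a minimum supported major version (the exact value passed here is just illustrative), reading segment infos from an older index could look like this:

```java
import java.nio.file.Paths;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ReadOldCommitSketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      // The second argument lowers the minimum supported major version so that
      // commits from indices older than N - 1 can still be read.
      SegmentInfos infos = SegmentInfos.readLatestCommit(dir, 7 /* illustrative */);
      System.out.println(infos.size() + " segments, written by Lucene " + infos.getCommitLuceneVersion());
    }
  }
}
```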