Currently, we traverse the BKD tree or perform a binary search using DocValues first, and only then check whether the count can be obtained in the count() method of IndexSortSortedNumericDocValuesRangeQuery.
We should consider providing a mechanism to perform this check beforehand, to avoid unnecessary processing when dealing with a sparse range.
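A rough sketch of what such a pre-check could look like; `estimateMatchingDocs`, `fallbackCount`, `countViaIndexSort` and `SPARSE_THRESHOLD` are hypothetical names used for illustration, not existing Lucene APIs:

```java
// Hypothetical sketch only: decide whether the range is sparse before paying
// for the BKD tree traversal / DocValues binary search.
long count(LeafReaderContext context) throws IOException {
  // Cheap upper bound on how many docs the range can match (hypothetical helper).
  long estimated = estimateMatchingDocs(context, lowerValue, upperValue);
  if (estimated < SPARSE_THRESHOLD) {
    // Sparse range: the traversal would cost more than it saves, delegate instead.
    return fallbackCount(context);
  }
  // Dense enough: traverse the BKD tree / binary-search the doc values as today.
  return countViaIndexSort(context);
}
```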
It is sometimes possible for `MaxScoreBulkScorer` to compute windows that don't
contain many candidate matches, resulting in more time spent evaluating maximum
scores per window than evaluating candidate matches in that window.
This PR introduces a heuristic that tries to require at least 32 candidate
matches per clause per window to amortize the per-window overhead. This results
in a speedup for the `OrMany` task.
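A simplified sketch of such a heuristic, assuming the expected number of candidates can be derived from a clause's cost; the constant and helper below are illustrative, not the exact implementation:

```java
// Illustrative sketch: grow the window until each clause can be expected to
// contribute roughly MIN_MATCHES_PER_CLAUSE candidates, so that the per-window
// overhead (max-score computation, partitioning) is amortized.
static final int MIN_MATCHES_PER_CLAUSE = 32;

static int expandWindowMax(int windowMin, int windowMax, long clauseCost, int maxDoc) {
  // Expected doc-ID gap between consecutive matches, assuming a uniform spread.
  double avgGap = (double) maxDoc / Math.max(1, clauseCost);
  long minWindowSize = (long) Math.ceil(MIN_MATCHES_PER_CLAUSE * avgGap);
  return (int) Math.min(maxDoc, Math.max(windowMax, windowMin + minWindowSize));
}
```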
This comes from observations on https://tantivy-search.github.io/bench/ for
exhaustive evaluation like `TOP_100_COUNT`. `collect()` is often inlined, but
other methods that we'd like to see inlined like `PostingsEnum#nextDoc()` are
not always inlined. This PR decreases the compiled size of `collect()` to make
more room for other methods to be inlined.
It does so by moving an assertion to `AssertingScorable` and extracting an
uncommon code path to a method.
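As a generic illustration of the pattern (not the actual Lucene code): keeping the hot method tiny and moving the rarely-taken branch into its own method leaves more of the JIT's inlining budget for the callees we actually care about.

```java
// Illustrative only: a small collect() body is cheaper to inline, and its
// callees (e.g. the scorer's methods) are more likely to be inlined into it.
private int[] buffer = new int[64];
private int upto;

void collect(int doc) {
  if (upto == buffer.length) {
    growBuffer(); // uncommon path extracted so it doesn't inflate collect()'s bytecode
  }
  buffer[upto++] = doc;
}

private void growBuffer() {
  buffer = java.util.Arrays.copyOf(buffer, buffer.length << 1);
}
```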
I was looking at some queries where Lucene performs significantly worse than
Tantivy at https://tantivy-search.github.io/bench/, and found out that we get
quite some overhead from implementing `BooleanScorer` on top of `BulkScorer`
(effectively implemented by `DefaultBulkScorer` since it only runs term queries
as boolean clauses) rather than `Scorer` directly.
The `CountOrHighHigh` and `CountOrHighMed` tasks are a bit noisy on my machine,
so I did 3 runs on wikibigall, and all of them had speedups for these two
tasks, often with a very low p-value.
In theory, this change could make things slower when the inner query has a
specialized bulk scorer, such as `MatchAllDocsQuery` or a conjunction. It does
feel right to optimize for term queries though.
Follow-up to #13871, getting another speedup from relatively trivial changes:
* avoid a redundant `end()` call by directly storing the end value for the sub-iterator that we don't use for anything else
* also save most `get(...)` calls for this sub-iterator
* avoid a redundant `start()` call by grabbing `start()` directly from `nextInterval`
* replace `getFirst()` with `get(0)`: `getFirst()` looks nice but has needless overhead in my testing (not sure why; profiling clearly shows it to be slower, maybe just a result of `get()`'s code being hot in the cache with a higher likelihood, or something equally esoteric)
* avoid using an iterator-based for loop over a random-access list; this is probably the biggest win in this PR
Generate cleaner code for PForUtil that has no dead parameters.
Also:
PForUtil instances always create their own `ForUtil`, so we can inline
that into the field declaration. We can also save cycles
for accessing the input on `PostingsDecodingUtil`.
Surprisingly, the combination of these cleanups yields a small but
statistically clearly visible speedup that the compiler apparently isn't
able to achieve on its own.
`BlockMaxConjunctionBulkScorer` only checks if it can early exit based on
impacts once per window, and windows are computed using impact blocks of the
leading clause. So this logic is defeated if the leading clause produces a
single block (e.g. `ConstantScoreQuery`). This commit addresses this problem by
artificially lowering the window size to contain ~128 docs of the leading
clause.
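A rough sketch of the window-capping idea, assuming the leading clause's cost is a reasonable proxy for its density; the constant and method below are illustrative, not the exact code:

```java
// Illustrative sketch: cap the window so it spans roughly DOCS_PER_WINDOW matches
// of the leading clause, even when that clause exposes a single huge impact block.
static final int DOCS_PER_WINDOW = 128;

static int windowMax(int windowMin, int blockMax, long leadCost, int maxDoc) {
  // Average doc-ID gap between consecutive matches of the leading clause.
  double avgGap = (double) maxDoc / Math.max(1, leadCost);
  long cappedMax = windowMin + (long) Math.ceil(DOCS_PER_WINDOW * avgGap);
  return (int) Math.min(blockMax, cappedMax);
}
```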
MaxScoreBulkScorer partitions scorers into a set of essential scorers and a set
of non-essential scorers, depending on the maximum scores produced by scorers
and on the current minimum competitive score. An increase of the minimum
competitive score has the potential to yield a more favorable partitioning, but
repartitioning can also be expensive.
In order to repartition when necessary while avoiding repartitioning too often,
this PR tracks the minimum value of the minimum competitive score that would
produce a more favorable partitioning, and repartitions scorers whenever the
minimum competitive score exceeds this threshold.
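Conceptually, this amounts to tracking one extra score threshold; the field and method names below are illustrative rather than the exact ones in `MaxScoreBulkScorer`:

```java
// Illustrative sketch: repartition only once the minimum competitive score
// crosses the smallest value that would actually change the partitioning.
private float minCompetitiveScore;
private float repartitionThreshold; // computed while partitioning the scorers

void setMinCompetitiveScore(float minScore) throws IOException {
  minCompetitiveScore = minScore;
  if (minScore >= repartitionThreshold) {
    // At least one scorer can now move between the essential and non-essential
    // sets, so the cost of repartitioning is expected to pay off.
    partitionScorers(); // also recomputes repartitionThreshold
  }
}
```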
This commit tries to save calls to `madvise` which are not necessary, either
because they map to the OS' default, or because the advice would be overridden
later on anyway. I have not noticed specific problems with this, but it seems
desirable to keep calls to `madvise` to a minimum.
As a consequence:
- Files that are opened with `ReadAdvice.NORMAL` do not call `madvise`, since
this is the OS' default.
- Compound files are always opened with `ReadAdvice.NORMAL`, and the actual advice is
only set when opening the sub-files of these compound files.
To make the latter less trappy, the `IOContext` parameter has been removed from
`CompoundFormat#getCompoundReader`.
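The core of the idea, as a hedged sketch (the native hook and conversion helper below are hypothetical, not the actual `MMapDirectory` code):

```java
// Illustrative sketch: skip the syscall when the advice matches the OS default.
void advise(long address, long length, ReadAdvice advice) throws IOException {
  if (advice == ReadAdvice.NORMAL) {
    return; // MADV_NORMAL is what the kernel assumes anyway; madvise would be a no-op
  }
  nativeMadvise(address, length, toNativeAdvice(advice)); // hypothetical native hook
}
```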
We updated TestGenerateBwcIndices to create int7 HNSW indices instead of int8 with #13874.
The corresponding Python code in the release wizard needs to be updated accordingly.
* Fix 9.12.0 backcompat break (Lucene 9.12.0 cannot read 9.11.x indices written with quantized HNSW, `Lucene99HnswScalarQuantizedVectorsFormat`) (#13874)
* carefully regenerate the int8_hnsw bwc indices so that they do in fact use Lucene99ScalarQuantizedVectorsFormat ... when running TestInt8HnswBackwardsCompatibility it now fails (as expected) on 9.11.0 and 9.11.1 bwc indices, but not on 9.10.0
* rename int8 -> int7 bwc tests since we are actually testing 7 bit quantization
* actually fix the bwc bug: only allow compress=true when bits is 7 or 8 in HNSW scalar quantization
* tidy
* Revert "rename int8 -> int7 bwc tests since we are actually testing 7 bit quantization"
This reverts commit eeb3f8a668.
* Reapply "rename int8 -> int7 bwc tests since we are actually testing 7 bit quantization"
This reverts commit 3487c4210b.
* #13880: add test to verify the int7 quantized indices are in fact using quantized vectors not float32
* bump 9.12.x version to 9.12.1 and add bwc indices for 9.12.0
* remove duplicate 9.12.0 Version constant
* revert changes to index.9.12.0-cfs.zip, index.9.12.0-nocfs.zip, sorted.9.12.0.zip
* remove unused bwc index
Closes #13867
Closes #13880
* int7 9.x goes unsupported
---------
Co-authored-by: Michael McCandless <mikemccand@apache.org>
It's in the title, extracting shared parts across both classes.
Almost exclusively mechanical changes with the exception of the introduction
of an array summing util.
This is a continuation of #13588, where we avoided allocating liveDocs
for segments that have the __soft_deletes field but no values in it.
However, that PR only addressed the reading side. This change fixes the
writing scenario with IndexWriter.
Relates #13588
These two share a lot of code, in particular the impacts implementation is 100% identical.
We can save a lot of code and potentially some cycles for method invocations by
drying things up. The changes are just mechanical field movements with the following exceptions:
1. One of the two implementations was using a `BytesRefBuilder` and the other a `BytesRef` for holding the
serialized impacts. The `BytesRef` variant is faster, so I used that for both when extracting.
2. Some simple arithmetic simplifications around the levels that should be obvious.
3. Removed the logic for an index without positions in `BlockImpactsPostingsEnum`; that was dead code,
since we only set this enum up if there are positions.
Lazy initialize these fields. They consume/cause a lot of memory/GC because they are
allocated frequently (~7% of all allocations in luceneutil's wikimedia medium run for me).
This does not cause any measurable slowdown as far as runtime is concerned, and since these fields are not
even needed for all instances (in fact they are rarely used in the queries the benchmark explores),
lazily initializing them saves quite a few CPU cycles for allocating and collecting them.
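The pattern in generic form (field and type names are illustrative, not the actual Lucene fields):

```java
// Illustrative sketch of lazy initialization for a frequently allocated but
// rarely used helper: only the instances that actually need it pay for it.
class ScorerState {
  private static final int BUFFER_SIZE = 128;

  private int[] scratch; // previously allocated eagerly in the constructor

  private int[] scratch() {
    if (scratch == null) {
      scratch = new int[BUFFER_SIZE]; // allocated on first use only
    }
    return scratch;
  }
}
```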
* Make generated archive files reproducible
This should ensure deterministic archive files and fix issues with checksums changing even
though the codebase has not changed.
The two seeds at #13818 had different root causes:
- The test allows the number of segments to go above the limit only if none
of the merges are legal. But there are multiple reasons why a merge may be
illegal: because it exceeds the max doc count or because it is too imbalanced.
However, these two things were checked independently, so you could run into
cases where the test thinks there are legal merges from the doc-count
perspective and from the balance perspective, while every merge that is legal
from the doc-count perspective is illegal from the balance perspective and
vice versa. The test now checks that there are merges that are good wrt both
criteria at once.
- `TieredMergePolicy` allows at least `targetSearchConcurrency` segments in an
index. There was a bug in `TieredMergePolicy` where this condition was
applied after "too big" segments had been removed, so it effectively allowed
more segments than necessary in the index.
Closes #13818
`SerialIODirectory` doesn't count reads to files that are opened with
`ReadAdvice#RANDOM_PRELOAD`, as these files are expected to be loaded in memory.
Unfortunately, we cannot detect such files on compound segments, so this test
now disables compound segments.
Closes #13854
There's no need to allocate a byte array when serializing to heap
buffers and the string fits within the remaining capacity; in that case we can write directly without further bounds checks.
If it doesn't fit we could technically do better than the current
`writeLongString` and avoid one round of copying by chunking the string
but that might not be worth the complexity.
In either case we can calculate the utf8 length up-front.
While this costs extra cycles (in the small case) for iterating the string twice, it saves
creating an oftentimes 3x-oversized byte array and a `BytesRef`, field
reads from the `BytesRef`, copying from it to the buffer, and the associated GC for cleaning it all up.
Theory and some quick benchmarking suggests this version is likely faster for any string
length than the existing code.
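A standalone sketch of the approach (not the actual Lucene code): compute the UTF-8 length in a first pass, do a single capacity check, then encode the chars straight into the destination.

```java
import java.nio.ByteBuffer;

// Simplified sketch: write a string's UTF-8 bytes directly into a heap buffer,
// without the intermediate byte[] / BytesRef that the old path allocated.
final class Utf8Write {

  static void writeString(String s, ByteBuffer dest) {
    int utf8Len = utf8Length(s);
    if (utf8Len > dest.remaining()) {
      throw new IllegalStateException("string does not fit; use a chunked slow path");
    }
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      if (c < 0x80) {
        dest.put((byte) c);
      } else if (c < 0x800) {
        dest.put((byte) (0xC0 | (c >> 6)));
        dest.put((byte) (0x80 | (c & 0x3F)));
      } else if (Character.isHighSurrogate(c) && i + 1 < s.length()
          && Character.isLowSurrogate(s.charAt(i + 1))) {
        int cp = Character.toCodePoint(c, s.charAt(++i));
        dest.put((byte) (0xF0 | (cp >> 18)));
        dest.put((byte) (0x80 | ((cp >> 12) & 0x3F)));
        dest.put((byte) (0x80 | ((cp >> 6) & 0x3F)));
        dest.put((byte) (0x80 | (cp & 0x3F)));
      } else {
        dest.put((byte) (0xE0 | (c >> 12)));
        dest.put((byte) (0x80 | ((c >> 6) & 0x3F)));
        dest.put((byte) (0x80 | (c & 0x3F)));
      }
    }
  }

  // First pass over the chars: count UTF-8 bytes without allocating anything.
  static int utf8Length(String s) {
    int len = 0;
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      if (c < 0x80) {
        len += 1;
      } else if (c < 0x800) {
        len += 2;
      } else if (Character.isHighSurrogate(c) && i + 1 < s.length()
          && Character.isLowSurrogate(s.charAt(i + 1))) {
        len += 4;
        i++; // consume the low surrogate as well
      } else {
        len += 3; // unpaired surrogates kept at 3 bytes here for simplicity
      }
    }
    return len;
  }
}
```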
Removing some obvious dead code, turning fields that don't need to be fields into locals, making things static, and deduplicating a duplicated "scratch" field.
An object return inside hot code like this is needlessly wasteful.
Escape analysis doesn't catch this one and we end up allocating many GB
of throwaway objects during benchmark runs. We might as well use two
utility methods and accumulate the raw value.
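In generic terms, the pattern looks like this (the names are illustrative; the actual change applies it to Lucene-internal hot code):

```java
// Before (illustrative): every call allocates a small result object that escape
// analysis fails to eliminate on this code path.
record MinMax(long min, long max) {}

static MinMax minMaxBoxed(long[] values) {
  long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
  for (long v : values) {
    min = Math.min(min, v);
    max = Math.max(max, v);
  }
  return new MinMax(min, max); // throwaway allocation in a hot loop
}

// After (illustrative): two primitive-returning utility methods, no allocation.
static long min(long[] values) {
  long min = Long.MAX_VALUE;
  for (long v : values) {
    min = Math.min(min, v);
  }
  return min;
}

static long max(long[] values) {
  long max = Long.MIN_VALUE;
  for (long v : values) {
    max = Math.max(max, v);
  }
  return max;
}
```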
This shows up as allocating tens of GB for iterators in the nightly
benchmarks. We should go the zero-allocation route for `RandomAccess`
lists, which I'd expect 100% of the lists here to be, for a bit of a speedup.
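A generic illustration of the zero-allocation route: when the list implements `RandomAccess`, iterate by index instead of creating an `Iterator`.

```java
import java.util.List;
import java.util.RandomAccess;

// Illustrative only: index-based iteration avoids allocating an Iterator for
// RandomAccess lists (e.g. ArrayList), which matters in allocation-heavy hot paths.
static long sum(List<Long> values) {
  if (values instanceof RandomAccess) {
    long sum = 0;
    for (int i = 0, size = values.size(); i < size; i++) {
      sum += values.get(i); // no Iterator allocated
    }
    return sum;
  }
  long sum = 0;
  for (long v : values) { // enhanced for loop allocates an Iterator
    sum += v;
  }
  return sum;
}
```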