* carefully regenerate the int8_hnsw bwc indices so that they do in fact use Lucene99ScalarQuantizedVectorsFormat ... when running TestInt8HnswBackwardsCompatibility it now fails (as expected) on 9.11.0 and 9.11.1 bwc indices, but not on 9.10.0
* rename int8 -> int7 bwc tests since we are actually testing 7 bit quantization
* actually fix the bwc bug: only allow compress=true when bits is 7 or 8 in HNSW scalar quantization (see the sketch after this list)
* tidy
* Revert "rename int8 -> int7 bwc tests since we are actually testing 7 bit quantization"
This reverts commit eeb3f8a668.
* Reapply "rename int8 -> int7 bwc tests since we are actually testing 7 bit quantization"
This reverts commit 3487c4210b.
* #13880: add test to verify the int7 quantized indices are in fact using quantized vectors, not float32
* bump 9.12.x version to 9.12.1 and add bwc indices for 9.12.0
* remove duplicate 9.12.0 Version constant
* revert changes to index.9.12.0-cfs.zip, index.9.12.0-nocfs.zip, sorted.9.12.0.zip
* remove unused bwc index
Closes #13867. Closes #13880.
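A minimal sketch of the shape of that compress/bits guard, with illustrative names (not the actual Lucene source; the real check lives in the HNSW scalar quantization format):

```java
// Hedged sketch: reject the unsupported combination up front so
// compress=true can only be written for bit widths the format knows how to
// decode (7 or 8, per the bullet above).
static void validateQuantization(int bits, boolean compress) {
  if (compress && bits != 7 && bits != 8) {
    throw new IllegalArgumentException(
        "compress=true is only supported when bits is 7 or 8; got bits=" + bits);
  }
}
```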
The two seeds at #13818 had different root causes:
- The test allows the number of segments to exceed the limit, but only if none
of the merges are legal. However, there are multiple reasons why a merge may be
illegal: it exceeds the max doc count, or it is too imbalanced. These two
conditions were checked independently, so you could run into cases where the
test believed there were legal merges from the doc-count perspective and legal
merges from the balance perspective, yet every merge that was legal from the
doc-count perspective was illegal from the balance perspective and vice versa.
The test now checks that there are merges that satisfy both criteria at once
(see the sketch after this list).
- `TieredMergePolicy` allows at least `targetSearchConcurrency` segments in an
index. There was a bug in `TieredMergePolicy` where this condition was
applied after "too big" segments had been removed, so it effectively allowed
more segments than necessary in the index.
Closes #13818.
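A minimal sketch of the corrected test predicate, using illustrative stand-in types rather than the real `TieredMergePolicy` test classes:

```java
import java.util.List;

// Illustrative stand-in for a candidate merge in the test.
record MergeCandidate(long docCount, boolean balanced) {

  // A merge is legal only if it passes BOTH criteria at once. The buggy
  // version effectively computed anyMatch(docCountOk) && anyMatch(balanced),
  // which can be true even when no single candidate passes both checks.
  static boolean anyLegalMerge(List<MergeCandidate> candidates, long maxDocCount) {
    return candidates.stream()
        .anyMatch(c -> c.docCount() <= maxDocCount && c.balanced());
  }
}
```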
`SerialIODirectory` doesn't count reads on files that are opened with
`ReadAdvice#RANDOM_PRELOAD`, as these files are expected to be loaded in memory.
Unfortunately, we cannot detect such files inside compound segments, so this test
now disables compound segments.
Closes #13854.
There's no need to allocate a byte array when serializing to heap
buffers if the string fits the remaining capacity; no further bounds checks are
needed in that case either.
If it doesn't fit, we could technically do better than the current
`writeLongString` and avoid one round of copying by chunking the string,
but that might not be worth the complexity.
In either case we can calculate the UTF-8 length up front.
While this costs extra cycles (in the small case) for iterating the string twice, it saves
creating an oftentimes 3x-oversized byte array, a `BytesRef`, field
reads from the `BytesRef`, copying from it to the buffer, and the associated GC work to clean it all up.
Theory and some quick benchmarking suggest this version is likely faster than the
existing code for strings of any length.
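A simplified sketch of the idea, assuming a heap `ByteBuffer` destination; the class, method, and helper names here are illustrative, not the actual Lucene patch:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;

final class Utf8Write {

  // First pass: count UTF-8 bytes without materializing them. This is the
  // "extra cycles in the small case" mentioned above.
  static int utf8Length(String s) {
    int bytes = 0;
    for (int i = 0; i < s.length(); ) {
      int cp = s.codePointAt(i);
      i += Character.charCount(cp);
      bytes += cp < 0x80 ? 1 : (cp < 0x800 ? 2 : (cp < 0x10000 ? 3 : 4));
    }
    return bytes;
  }

  // Second pass: encode straight into the destination buffer, skipping the
  // intermediate byte[], the BytesRef, and the copy out of it.
  static void writeString(ByteBuffer heapBuffer, String s) {
    if (heapBuffer.remaining() >= utf8Length(s)) {
      StandardCharsets.UTF_8.newEncoder()
          .encode(CharBuffer.wrap(s), heapBuffer, true);
    } else {
      // The pre-existing chunked writeLongString-style path would handle
      // this case; omitted in this sketch.
      throw new UnsupportedOperationException("string exceeds remaining capacity");
    }
  }
}
```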
Removing some obvious dead code, turning fields that don't need to be fields into locals, making things static, and deduplicating a duplicated "scratch" field.
An object return inside hot code like this is needlessly wasteful.
Escape analysis doesn't catch this one, and we end up allocating many GB
of throwaway objects during benchmark runs. We might as well use two
utility methods and accumulate the raw value.
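An illustrative sketch of the pattern (the names are hypothetical, not the code in question):

```java
// A hot method that returns a small holder object allocates once per call
// whenever escape analysis fails to scalarize it.
final class HotLoop {

  record DocAndValue(int doc, long value) {} // per-call garbage in the old shape

  static DocAndValue nextAllocating(int[] docs, long[] values, int i) {
    return new DocAndValue(docs[i], values[i]);
  }

  // Allocation-free alternative: two primitive-returning utilities, and the
  // caller accumulates the raw value directly.
  static int doc(int[] docs, int i) {
    return docs[i];
  }

  static long rawValue(long[] values, int i) {
    return values[i];
  }

  static long sum(int[] docs, long[] values) {
    long total = 0;
    for (int i = 0; i < docs.length; i++) {
      total += rawValue(values, i); // no intermediate object
    }
    return total;
  }
}
```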
This shows up as tens of GB allocated for iterators in the nightly
benchmarks. We should take the zero-allocation route for `RandomAccess`
lists, which I'd expect close to 100% of the lists here to be, for a bit of a speedup.
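A minimal sketch of the zero-allocation route (illustrative code, not the benchmarked Lucene method):

```java
import java.util.List;
import java.util.RandomAccess;

final class ZeroAllocIteration {

  static long sum(List<Long> values) {
    long total = 0;
    if (values instanceof RandomAccess) {
      // Indexed access: no Iterator object is allocated per traversal.
      for (int i = 0, n = values.size(); i < n; i++) {
        total += values.get(i);
      }
    } else {
      // Linked lists etc. keep the iterator-based path.
      for (long v : values) {
        total += v;
      }
    }
    return total;
  }
}
```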
This commit removes the flattening of ordered and unordered interval sources, as it alters the gap visibility for parent intervals. For example, ordered("a", ordered("b", "c")) should result in a different gap compared to ordered("a", "b", "c").
Phrase/Block operators will continue to flatten their sub-sources since this does not affect the inner gap (which is always 0 in the case of blocks).
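To illustrate with the `Intervals` factory methods from the queries module, here are the two shapes that are no longer treated as equivalent:

```java
import org.apache.lucene.queries.intervals.Intervals;
import org.apache.lucene.queries.intervals.IntervalsSource;

class OrderedGapExample {
  // The nested source measures the parent's gap against the inner ordered
  // interval as a single unit ...
  static final IntervalsSource NESTED =
      Intervals.ordered(
          Intervals.term("a"),
          Intervals.ordered(Intervals.term("b"), Intervals.term("c")));

  // ... while the flattened source counts gaps between all three terms, so
  // the two are no longer rewritten into one another.
  static final IntervalsSource FLAT =
      Intervals.ordered(
          Intervals.term("a"), Intervals.term("b"), Intervals.term("c"));
}
```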
The command to remove uploaded artifacts from svn is missing a dash, so it
fails because it does not match the names of the artifacts uploaded in the previous steps.
There is currently no way to configure two parameters of the multi-leaf collector. For expert extensibility, this commit adds another constructor for advanced usage.
Closes #13699.
This is a test-only change that verifies the behaviour when float vector values are passed to our FlatVectorsScorer implementations. It would have caught the bug causing #13844, subsequently fixed by #13850.
This was introduced in the major refactor #13779.
Off-heap scoring is only present for byte[] vectors, and it isn't enough to verify that the vector provider satisfies the HasIndexSlice interface: the vectors also need to be byte vectors; otherwise the slice iterations and scoring are completely nonsensical, leading HNSW graph building to run until the heat-death of the universe.
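A sketch of the tightened guard, using simplified stand-in interfaces rather than the real Lucene types:

```java
// Stand-ins for the Lucene interfaces named above; signatures simplified.
interface HasIndexSlice { Object getSlice(); }
interface ByteVectorValues {}

final class OffHeapGuard {
  // Checking HasIndexSlice alone is insufficient: a float-vector provider can
  // implement it too, and off-heap scoring only exists for byte[] vectors.
  static boolean canScoreOffHeap(Object vectorValues) {
    return vectorValues instanceof ByteVectorValues
        && vectorValues instanceof HasIndexSlice slice
        && slice.getSlice() != null;
  }
}
```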
While preparing Lucene 10 RC1, I had an issue running the release script from branch_10_0. It reproduces on branch_10x as well. The `./gradlew clean check` command fails with the following Gradle error and some huge task-dependency output:
Unable to make progress running work. There are items queued for execution but none of them can be started
I worked around this by splitting the clean and check into two separate invocations, in which case everything works fine. I am making this change at least until we have figured out what causes the issue and we have a fix.
This commit overrides the iterator method in the empty off-heap vector values. The implementation is just the dense iterator, which handles empty values just fine; we use it elsewhere for similar cases too.
Bump the codec version to 10.0.
Lucene100Codec is the exact same file format as Lucene912Codec. This codec
dance just makes things slightly easier to reason about, since our backward
compatibility guarantees are aligned with major versions: once we drop support
for 9.x indices, we can remove all `Lucene9XXCodec`s.
Even though this field is not `volatile`, writing it isn't free and
causes needless cache thrashing at some frequency. We can speed things
up by only writing the `true` value and never the `false` value.
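An illustrative sketch of the pattern (field and method names are hypothetical):

```java
final class WriteOnlyTrue {
  private boolean seen; // not volatile, but stores still dirty the cache line

  // Before: an unconditional store on every call, including redundant
  // stores of false.
  void updateBefore(boolean hit) {
    seen = seen | hit;
  }

  // After: only the false -> true transition is ever written.
  void updateAfter(boolean hit) {
    if (hit) {
      seen = true;
    }
  }
}
```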
This improves testing of mismatched field numbers by
- improving `AssertingDocValuesProducer` to detect mismatched field numbers,
- introducing a `MismatchedCodecReader` to actually test mismatched field
numbers on `DocValuesProducer` (a `MismatchedLeafReader` wrapping a
`SlowCodecReaderWrapper` doesn't work since `SlowCodecReaderWrapper` implicitly
resolves the correct `FieldInfo` object),
- introducing an explicit test for mismatched field numbers for doc values, points,
postings and knn vectors.
These new tests uncovered a bug when merging sorted doc values, which would
call the underlying doc values producer with the merged field info (see the sketch below).
Closes #13805.
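A sketch of the shape of the fix, assuming the standard `CodecReader`/`DocValuesProducer` APIs (the surrounding merge plumbing is omitted):

```java
import java.io.IOException;
import org.apache.lucene.codecs.DocValuesProducer;
import org.apache.lucene.index.CodecReader;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.SortedDocValues;

final class SortedDvMerge {
  // Field NUMBERS can differ across segments even when the field name is the
  // same, so each sub-producer must be queried with its own FieldInfo.
  static SortedDocValues subValues(CodecReader reader, FieldInfo mergedInfo)
      throws IOException {
    FieldInfo perReaderInfo = reader.getFieldInfos().fieldInfo(mergedInfo.name);
    DocValuesProducer producer = reader.getDocValuesReader();
    return producer.getSorted(perReaderInfo); // the bug passed mergedInfo here
  }
}
```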