This commit removes the flattening of ordered and unordered interval sources, as it alters the gap visibility for parent intervals. For example, ordered("a", ordered("b", "c")) should result in a different gap compared to ordered("a", "b", "c").
Phrase/Block operators will continue to flatten their sub-sources since this does not affect the inner gap (which is always 0 in the case of blocks).
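A minimal sketch of the two shapes, using the factory methods from the queries intervals module:

```java
import org.apache.lucene.queries.intervals.Intervals;
import org.apache.lucene.queries.intervals.IntervalsSource;

// No longer flattened: the nested ordered source keeps its own gap semantics.
IntervalsSource nested =
    Intervals.ordered(
        Intervals.term("a"),
        Intervals.ordered(Intervals.term("b"), Intervals.term("c")));
// A flat ordered source over the same terms; parent operators such as
// Intervals.maxgaps(...) may now see different gaps for the two sources.
IntervalsSource flat =
    Intervals.ordered(Intervals.term("a"), Intervals.term("b"), Intervals.term("c"));
```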
The command to remove uploaded artifacts from svn is missing a dash, hence it
fails: it does not match the names of the artifacts uploaded in the previous steps.
There is currently no way to configure the two parameters of the multi-leaf collector. For expert extensibility, this commit adds another constructor for advanced usage:
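A hedged sketch of the new expert constructor (parameter names and order are assumptions, not confirmed by this message):

```java
// Hypothetical shape: exposes the two tunables of the multi-leaf collector
// (the greediness of the shared global queue and how often per-leaf results
// are synced into it) instead of relying on hard-coded defaults.
MultiLeafKnnCollector collector =
    new MultiLeafKnnCollector(k, greediness, interval, globalSimilarityQueue, subCollector);
```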
closes: #13699
This is a test-only change that verifies the behaviour when float vector values are passed to our FlatVectorsScorer implementations. It would have caught the bug causing #13844, which was introduced in the major refactor #13779 and subsequently fixed by #13850.
Off-heap scoring is only present for byte[] vectors, and it isn't enough to verify that the vector provider satisfies the HasIndexSlice interface. The vectors also need to be byte vectors; otherwise, the slice iteration and scoring are completely nonsensical, leading HNSW graph building to run until the heat-death of the universe.
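A minimal sketch of the stricter check the tests now pin down (names illustrative, not the actual scorer code):

```java
// The off-heap path needs both an index slice to read from AND vectors that
// are actually byte-encoded; HasIndexSlice alone is not sufficient.
if (vectorValues instanceof HasIndexSlice && vectorValues instanceof ByteVectorValues) {
  // safe to take the off-heap byte scoring path
} else {
  // fall back to on-heap scoring
}
```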
While preparing Lucene 10 RC1, I had an issue running the release script from branch_10_0. It reproduces on branch_10x as well. The `./gradlew clean check` command fails with the following gradle error and some huge task-dependency output:
Unable to make progress running work. There are items queued for execution but none of them can be started
I worked around this by splitting clean and check into two separate calls (`./gradlew clean` followed by `./gradlew check`), in which case everything works fine. I am making this change at least until we have figured out what causes the issue and we have a fix.
This commit overrides the iterator method in the empty off-heap vector values. The implementation is just the dense iterator, which handles empty values just fine. We use it elsewhere for similar cases too.
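A minimal sketch of the override, assuming the empty implementation can reuse the dense iterator from the base class (it iterates over zero documents correctly):

```java
// Inside the empty off-heap vector values implementation: delegate to the
// dense iterator rather than leaving iterator() unsupported.
@Override
public DocIndexIterator iterator() {
  return createDenseIterator();
}
```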
Bump the codec version to 10.0.
Lucene100Codec is the exact same file format as Lucene912Codec. This codec
dance just makes things slightly easier to reason about since our backward
compatibility guarantees are aligned with major versions: once we drop support
for 9.x indices, we can remove all `Lucene9XXCodec`s.
Even though this field is not `volatile`, writing it isn't free and
causes needless cache thrashing at some frequency. We can speed things
up by only writing the `true` value and never the `false` value.
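A sketch of the pattern with hypothetical names (the actual field and condition aren't shown here):

```java
// Before: writes the field on every call, even when the value is unchanged,
// invalidating the cache line for other threads that read it.
// this.matched = condition;

// After: only the false -> true transition is ever written; false is the
// field's initial value and is never written explicitly.
if (condition) {
  this.matched = true;
}
```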
This improves testing of mismatched field numbers by
- improving `AssertingDocValuesProducer` to detect mismatched field numbers,
- introducing a `MismatchedCodecReader` to actually test mismatched field
numbers on `DocValuesProducer` (a `MismatchedLeafReader` wrapping a
`SlowCodecReaderWrapper` doesn't work since `SlowCodecReaderWrapper` implicitly
resolves the correct `FieldInfo` object),
- introducing an explicit test for mismatched field numbers for doc values, points,
postings and knn vectors.
These new tests uncovered a bug when merging sorted doc values, which would
call the underlying doc values producer with the merged field info.
Closes #13805
TermInSetQuery used to have an accessor to its terms that was removed in #12173
to avoid leaking internal encoding details. This introduces an accessor to the
term data in the query that doesn't expose internals but merely allows iterating
over the decoded BytesRefs, making inspection of the query's content possible again.
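A hedged usage sketch (the accessor name is illustrative, not confirmed by this message; the point is that it yields decoded BytesRef instances rather than the internal encoding):

```java
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.BytesRefIterator;

// Inspect the decoded terms of a TermInSetQuery.
BytesRefIterator terms = query.getBytesRefIterator(); // hypothetical accessor name
for (BytesRef term = terms.next(); term != null; term = terms.next()) {
  System.out.println(term.utf8ToString()); // next() may throw IOException
}
```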
Closes #13804
After adjusting tests that truly exercise intra-merge parallelism, more issues have arisen. See: https://github.com/apache/lucene/issues/13798
To be risk-averse, and given the soon-to-be-released/frozen Lucene 10 and 9.12, I am reverting all intra-merge parallelism, except for the parallelism when merging HNSW graphs.
Merging other structures was never really enabled in a release (we disabled it in a bugfix for Lucene 9.11). While this is frustrating, as it seems like we are leaving lots of perf on the floor, I am erring on the side of safety here.
In Lucene 10, we can work on incrementally re-enabling intra-merge parallelism.
closes: https://github.com/apache/lucene/issues/13798
At the moment, our skip indexes record min/max ordinal/value per range
of doc IDs. It would be natural to extend them to other pre-aggregated
data such as a sum and value count, which facets could take advantage
of. This change switches `docValuesSkipIndex` from a boolean to an enum
so that we could release such changes in the future in an additive
fashion, by adding constants to this enum and new methods to
`DocValuesSkipper`.
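A hedged sketch of the enum-based API (setter and constant names as I understand them from this change; treat them as assumptions):

```java
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.DocValuesSkipIndexType;
import org.apache.lucene.index.DocValuesType;

FieldType ft = new FieldType();
ft.setDocValuesType(DocValuesType.SORTED_NUMERIC);
// Previously a boolean (skip index on/off); now an enum, so new kinds of
// pre-aggregated skip data can be added later without another API break.
ft.setDocValuesSkipIndexType(DocValuesSkipIndexType.RANGE);
```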
Noticed some visible allocations in CompetitiveImpactAccumulator
during benchmarking and fixed the needless allocation for the comparator
in that class, as well as a couple of other similar spots where needless
classes and/or objects could easily be replaced by more lightweight
solutions.
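An illustrative before/after of the pattern (not the actual code from the class):

```java
import java.util.Comparator;
import org.apache.lucene.index.Impact;

// Before (illustrative): a fresh comparator is allocated on every call.
// impacts.sort(Comparator.comparingInt(i -> i.freq));

// After: one shared, stateless instance.
private static final Comparator<Impact> BY_FREQ = Comparator.comparingInt(i -> i.freq);
```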
We have recently (see #13735) introduced this utility method that creates a
collector manager which only works when a searcher does not have an executor
set; otherwise it throws an exception once we attempt to create a new collector
for more than one slice.
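For context, a minimal sketch of what such a single-use collector manager looks like (hypothetical code, not the actual utility):

```java
import java.io.IOException;
import java.util.Collection;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.CollectorManager;

// Wraps a single Collector, so it can serve exactly one slice; a second
// newCollector() call (i.e. concurrent search over multiple slices) fails.
static <C extends Collector> CollectorManager<C, Void> forSingleSlice(C collector) {
  return new CollectorManager<C, Void>() {
    private boolean used;

    @Override
    public C newCollector() throws IOException {
      if (used) {
        throw new IllegalStateException("this manager supports a single collector");
      }
      used = true;
      return collector;
    }

    @Override
    public Void reduce(Collection<C> collectors) {
      return null;
    }
  };
}
```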
While we discussed that it should be safe to use in some specific scenarios like the
monitor module, we should be careful exposing this utility publicly: while we'd
like it to ease migration from the search(Query, Collector) method, we may end up
making users' lives even worse, in that it exposes them to failures whenever an
executor is set and more than one slice is created, which is hard to follow and
does not provide a good user experience.
My proposal is that we use a similar collector manager locally, where safe and
required, but we don't expose it to users. In most places, we should expose
collector managers that do support search concurrency, rather than working
around the lack of them.
4 and 7 bit quantization still work.
It's a bit tricky because 9.11 indices may have 8 bit compressed
vectors which are buggy at search time (and users may not realize it,
or may not be using them at search time). But the index is still
intact since we keep the original full float precision vectors. So,
users can force rewrite all their 9.11 written segments (or reindex
those docs), and can change to 4 or 7 bit quantization for newly
indexed documents. The 9.11 index is still usable.
(I added a couple test cases confirming that one can indeed change
their mind, indexing a given vector field first with 4 bit
quantization, then later (new IndexWriter / Codec) with 7 bit or with
no quantization.)
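A hedged sketch of that scenario (the per-field codec hook is real; treat the exact Lucene99HnswScalarQuantizedVectorsFormat constructor arguments as an assumption):

```java
import org.apache.lucene.codecs.KnnVectorsFormat;
import org.apache.lucene.codecs.lucene100.Lucene100Codec;
import org.apache.lucene.codecs.lucene99.Lucene99HnswScalarQuantizedVectorsFormat;
import org.apache.lucene.index.IndexWriterConfig;

// First IndexWriter: index the field with 4-bit quantization.
IndexWriterConfig iwc = new IndexWriterConfig()
    .setCodec(new Lucene100Codec() {
      @Override
      public KnnVectorsFormat getKnnVectorsFormatForField(String field) {
        return new Lucene99HnswScalarQuantizedVectorsFormat(
            16, 100, 1, /* bits= */ 4, /* compress= */ true, null, null);
      }
    });
// A later IndexWriter on the same index can switch the same field to 7-bit
// quantization, or to no quantization at all.
```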
I added an explanation to MIGRATE.md.
Separately, I also tightened up the `compress` boolean to throw an
exception unless bits=4. Previously it silently ignored `compress=true`
for 7 and 8 bit quantization. I also tried to improve its javadocs a bit.
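The new validation behaves roughly like this (illustrative, not the exact code):

```java
// compress only applies to 4-bit quantization; reject it anywhere else
// instead of silently ignoring it.
if (compress && bits != 4) {
  throw new IllegalArgumentException(
      "compress=true is only supported for bits=4, got bits=" + bits);
}
```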
Closes #13519.