4 and 7 bit quantization still work.
It's a bit tricky because 9.11 indices may contain 8 bit compressed
vectors that are buggy at search time (and users may not realize it,
or may not be searching them). But the index is still intact since we
keep the original full-precision float vectors. So users can
force-rewrite all of their 9.11-written segments (or reindex those
docs), and can switch to 4 or 7 bit quantization for newly indexed
documents. The 9.11 index is still usable.
(I added a couple test cases confirming that one can indeed change
their mind, indexing a given vector field first with 4 bit
quantization, then later (new IndexWriter / Codec) with 7 bit or with
no quantization.)
I added an explanation to MIGRATE.md.
Separately, I also tightened up the `compress` boolean to throw an
exception unless bits=4. Previously it silently ignored
`compress=true` for 7 and 8 bit quantization. I also tried to improve
its javadocs a bit.
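A minimal sketch of the tightened check (the exact exception type and message in the format class are my assumption):

```java
// Illustrative only: compress is only supported for 4 bit quantization, so
// any other combination now fails fast instead of being silently ignored.
static void validateCompress(int bits, boolean compress) {
  if (compress && bits != 4) {
    throw new IllegalArgumentException(
        "compress=true is only supported with bits=4, got bits=" + bits);
  }
}
```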
Closes #13519.
With the introduction of intra-segment concurrency, we added a new
protected search(LeafReaderContextPartition[], Weight, Collector) method. The
previous variant that accepts a list of leaf reader contexts was left deprecated,
as there is one leftover usage coming from search(Query, Collector). The hope was
that the latter would be removed soon as well, but there is actually no
need to tie the two removals together. It is easier to fold this method into its only
caller, so that it still bypasses the collector-manager-based methods.
This way we fold two deprecated methods into a single one.
We have been encoding docBase and the score in MaxScoreAccumulator#accumulate.
That assumes segments are processed in doc id order, and implements global
max score accounting across segments searched concurrently.
With the introduction of intra-segment concurrency, the same segment may be seen
multiple times, once per segment partition. Partitions all share the same
docBase, hence you may end up with top-N results holding higher doc ids than
expected, because the search terminates early before docs with the same score
and lower doc ids are seen.
This commit encodes the docId in the accumulator in place of the docBase to
resolve the described issue.
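One way to picture the fix (a hedged sketch, not the exact Lucene code): pack the score into the high 32 bits so that comparing packed longs compares scores first, and invert the doc id in the low 32 bits so that, on equal scores, the lower doc id compares as more competitive.

```java
// Scores are non-negative, so Float.floatToIntBits preserves their order in
// the high bits; inverting docId makes lower ids win ties on equal scores.
static long encode(float score, int docId) {
  return (((long) Float.floatToIntBits(score)) << 32) | (Integer.MAX_VALUE - docId);
}
```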
Trivial commit that makes HnswLock final and LockedRow a record. This is general clean-up and helps the JIT a little when reasoning about these types, which show up quite a bit in indexing and search profiles.
There are two tests where we use 250_000 as the number of collected hits, but we only
ever index at most 2000 docs. That makes us create a priority queue of size
250_000 for each segment partition, which causes out-of-memory errors when the
number of partitions is higher than a few.
With this commit I propose that we lower the threshold to 2000 for those tests
that need a high number of collected hits. The assumption that a priority queue
is not built within the LargeNumHitsTopDocsCollector still holds, so this change
should not defeat the purpose of the tests.
ProfilerCollector did not have a corresponding collector manager until now.
This commit introduces one and switches TestProfilerCollector to use search
concurrency and to move away from the deprecated search(Query, Collector) method.
Note that the collector manager does not support child collectors. Figuring
out a generic API for that is rather complicated. Users can always create
their own collector manager depending on their collector hierarchy.
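As an illustration of rolling your own, here is a minimal manager over the stock `TotalHitCountCollector` (a sketch of the pattern, not the new `ProfilerCollectorManager` itself):

```java
import java.io.IOException;
import java.util.Collection;
import org.apache.lucene.search.CollectorManager;
import org.apache.lucene.search.TotalHitCountCollector;

// Each search thread gets its own collector from newCollector(); reduce()
// merges the per-thread results. A profiling manager would wrap its own
// collector hierarchy the same way.
class TotalHitsManager implements CollectorManager<TotalHitCountCollector, Integer> {
  @Override
  public TotalHitCountCollector newCollector() {
    return new TotalHitCountCollector();
  }

  @Override
  public Integer reduce(Collection<TotalHitCountCollector> collectors) throws IOException {
    int total = 0;
    for (TotalHitCountCollector c : collectors) {
      total += c.getTotalHits();
    }
    return total;
  }
}
```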
Relates to #12892
This has caused a few recent test failures with jdk23 and jdk24. It should
get fixed by replacing the new length with the length of the array.
During segment merge we must verify that a given field exists and has vectors. The typical knn format checks assume the per-field format is used and thus only check for `null`.
But we should check for field existence in the field info and verify that it has dense vectors.
Additionally, this commit unifies how the knn formats work: they will now throw if a non-existing field is queried, except for the PerField format, which will return null (like the other per-field formats).
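A hedged sketch of the kind of check meant here (the `MergeState`/`FieldInfo` accessors exist, but the helper itself is illustrative):

```java
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.MergeState;

class MergeFieldChecks {
  // Verify the field exists in the merged field infos and actually carries
  // dense vectors before attempting to merge its vector data.
  static boolean hasDenseVectors(MergeState mergeState, String fieldName) {
    FieldInfo fieldInfo = mergeState.mergeFieldInfos.fieldInfo(fieldName);
    return fieldInfo != null && fieldInfo.hasVectorValues();
  }
}
```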
@iverase has uncovered a potential issue with intra-merge CMS parallelism.
This commit helps expose this problem by forcing tests to use intra-merge parallelism instead of always (well, usually) delegating to a SameThreadExecutorService.
When intra-merge parallelism is used, norms, doc_values, stored_values, etc. are all merged in a separate thread from the thread that was used to construct their merge-optimized instances.
This trips numerous assertions in AssertingCodec.assertThread, where we assume that the thread that called getMergeInstance() is also the thread getting the values to merge.
In addition to the better testing, this corrects poor merge state handling in the event of parallelism.
Lists are invariant, so the current API doesn't allow passing something like `List<CollectorManagerImplementation>` where `class CollectorManagerImplementation implements CollectorManager<A, B>`. Invariance means that `List<CollectorManagerImplementation>` is not a subtype of `List<CollectorManager<A, B>>`. Using a bounded wildcard type overcomes that.
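A self-contained sketch of the variance issue (with a simplified stand-in interface, not Lucene's actual generics):

```java
import java.util.List;

interface CollectorManager<A, B> {}

class CollectorManagerImplementation implements CollectorManager<String, Integer> {}

class VarianceDemo {
  // Invariant parameter: accepts exactly List<CollectorManager<String, Integer>>.
  static void searchInvariant(List<CollectorManager<String, Integer>> managers) {}

  // Bounded wildcard: accepts a list of any subtype of the manager type.
  static void searchWildcard(List<? extends CollectorManager<String, Integer>> managers) {}

  static void caller(List<CollectorManagerImplementation> impls) {
    // searchInvariant(impls); // does not compile: lists are invariant
    searchWildcard(impls); // compiles thanks to the bounded wildcard
  }
}
```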
This commit is a micro-optimisation to the HNSW Lock implementation, which avoids integer and vararg boxing when determining the hash of the node and level. Trivially, we provide our own arity-specialised hash rather than the more generic java.util.Objects.hash(Object...).
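The idea in a nutshell (a sketch; the shipped hash may differ): `Objects.hash(node, level)` allocates a varargs `Object[]` and boxes both ints, while a two-int specialization computes the same style of hash with no allocation.

```java
// Hand-unrolled equivalent of Objects.hash(node, level), without the
// varargs array or Integer boxing.
static int hash(int node, int level) {
  return 31 * (31 + node) + level;
}
```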
This commit introduces support for optionally creating slices that target leaf reader context partitions, which allows them to be searched concurrently. This is good for maximizing resource usage when searching force-merged indices, or indices with rather big segments, by parallelizing search execution across subsets of the segments being searched.
Note: this commit does not affect the default generation of slices. Segments can be partitioned by overriding the `IndexSearcher#slices(List<LeafReaderContext>)` method to plug in ad-hoc slice creation. Moreover, the existing `IndexSearcher#slices` static method now creates segment partitions when the additional `allowSegmentsPartitions` argument is set to `true`.
The overall design of this change is based on the existing search concurrency support built around `LeafSlice` and `CollectorManager`. A new `LeafReaderContextPartition` abstraction is introduced, which holds a reference to a `LeafReaderContext` and the range of doc ids it targets. A `LeafSlice` now targets segment partitions, each identified by a `LeafReaderContext` instance and a range of doc ids. It is possible for a partition to target a whole segment, and for partitions of different segments to be combined freely into the same leaf slice, hence searched by the same thread. It is not possible for multiple partitions of the same segment to be added to the same leaf slice.
Segment partitions are searched concurrently by leveraging the existing `BulkScorer#score(LeafCollector collector, Bits acceptDocs, int min, int max)` method, which allows scoring a specific subset of documents for a provided
`LeafCollector`, in place of `BulkScorer#score(LeafCollector collector, Bits acceptDocs)`, which would instead score all documents.
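Opting in could look roughly like this (a sketch; the thresholds are illustrative and the static method signature is as described above):

```java
import java.util.List;
import java.util.concurrent.Executor;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.IndexSearcher;

class PartitioningSearchers {
  // Enable intra-segment partitions by delegating to the static slices
  // helper with the new trailing boolean set to true.
  static IndexSearcher newSearcher(IndexReader reader, Executor executor) {
    return new IndexSearcher(reader, executor) {
      @Override
      protected LeafSlice[] slices(List<LeafReaderContext> leaves) {
        // 250_000 max docs per slice and 5 max segments per slice are
        // illustrative values, not Lucene defaults.
        return slices(leaves, 250_000, 5, true);
      }
    };
  }
}
```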
## Changes that require migration
The migration guide gains the following new items clarifying the contract and the breaking changes required to support intra-segment concurrency:
- `Collector#getLeafCollector` may be called multiple times for the same leaf across distinct `Collector` instances created by a `CollectorManager`. Logic that relies on `getLeafCollector` being called once per leaf per search needs updating.
- a `Scorer`, `ScorerSupplier` or `BulkScorer` may be requested multiple times for the same leaf.
- `IndexSearcher#searchLeaf` changed its signature to accept the range of doc ids to score.
- `BulkScorer#score(LeafCollector, BitSet)` is removed in favour of `BulkScorer#score(LeafCollector, BitSet, int, int)`.
- the static `IndexSearcher#slices` method now takes a trailing boolean argument that optionally enables the creation of segment partitions.
- `TotalHitCountCollectorManager` now requires that an array of `LeafSlice`s, retrieved via `IndexSearcher#getSlices`, is provided to its constructor.
Note: `DrillSideways` is the only component that does not support intra-segment concurrency and needs considerable work to do so, due to its requirement that the entire set of docs in a segment gets scored in one go.
The default searcher slicing is not affected by this PR, but `LuceneTestCase` now randomly leverages intra-segment concurrency. An additional `newSearcher` method is added that takes a `Concurrency` enum as the last argument in place of the `useThreads` boolean flag. This makes it possible to disable intra-segment concurrency for `DrillSideways`-related tests that support inter-segment concurrency but not intra-segment concurrency.
## Next step
While this change introduces support for intra-segment concurrency, it only lays the foundations for it. There is still a performance penalty for queries that require segment-level computation ahead of time, such as points/range queries. This is an implementation limitation that we expect to improve in future releases; see #13745.
Additionally, we will need to decide what to do about the lack of support for intra-segment concurrency in `DrillSideways` before we can enable intra-segment slicing by default. See #13753.
Closes #9721
We recently introduced support for keepScores to FacetsCollectorManager.
However, the reduced FacetsCollector instance it returns does not reflect that:
its inner keepScores flag is always false. This commit fixes that.
Previously all regexp parsing required determinization and minimization up-front, which can be costly (exponential time).
Lucene 10 removes the determinization and minimization from RegExp and allows the user to choose:
* determinize() the result and get the DFA query execution of previous releases.
* don't determinize() and possibly get a new NFA query that determinizes-as-it-goes.
Complement of arbitrary automata is incompatible with this choice, as it requires determinization for correctness. It was previously a non-default operator that could be enabled with a special flag: RegExp.COMPLEMENT, or would be included with RegExp.ALL, which turns on all special syntax flags. Lucene 10 removed the operator, as it can't be supported while still giving the user the NFA/DFA choice, and requires exponential time during parsing.
To ease transition: add RegExp.DEPRECATED_COMPLEMENT syntax flag and Kind.DEPRECATED_COMPLEMENT node:
* syntax flag can be enabled with RegExp(s, RegExp.DEPRECATED_COMPLEMENT);
* syntax flag is **NOT** included by RegExp.ALL: e.g. you must do RegExp(s, RegExp.ALL | RegExp.DEPRECATED_COMPLEMENT) to get ALL flags and also the deprecated complement (~) operator. This enforces a java deprecation reference in the calling code to enable the flag.
* deprecated complement (~) runs with an internal limit: Operations.DEFAULT_DETERMINIZE_WORK_LIMIT. It is not configurable. If it is exceeded, you get a TooComplexToDeterminizeException.
* there is intentionally only a single dead-simple test so that this hack doesn't cause us pain with CI/builds. We don't want random automata testing to only rarely encounter an exponential algorithm!
After Lucene 10 is branched, this deprecated support can be removed by reverting this commit.
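For example, a minimal sketch of opting back in:

```java
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.RegExp;

class DeprecatedComplementDemo {
  static Automaton complementOfFoo() {
    // The deprecated flag must be passed explicitly; RegExp.ALL no longer
    // includes it, which forces a deprecation reference at the call site.
    RegExp re = new RegExp("~(foo)", RegExp.ALL | RegExp.DEPRECATED_COMPLEMENT);
    // May throw TooComplexToDeterminizeException if the internal
    // DEFAULT_DETERMINIZE_WORK_LIMIT is exceeded.
    return re.toAutomaton();
  }
}
```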
We have a few public static utility search methods in FacetsCollector that accept
a Collector as the last argument. In practice, these are expected to be called
with a `FacetsCollector`. Also, we'd like to remove all the search methods that
take a `Collector` in favour of those that take a `CollectorManager` (see #12892).
This commit adds the corresponding functionality to `FacetsCollectorManager`.
The new methods take a `FacetsCollectorManager` as the last argument. The return
type has to be adapted to also include the facet results that were previously
made available through the collector argument.
In order for all tests to work I had to add support for `keepScores` to
`FacetsCollectorManager`, which was missing.
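A sketch of how the new entry points are meant to be used (the result accessor names are my assumption of the new API):

```java
import org.apache.lucene.facet.FacetsCollector;
import org.apache.lucene.facet.FacetsCollectorManager;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopDocs;

class FacetsSearchDemo {
  static void search(IndexSearcher searcher) throws Exception {
    FacetsCollectorManager fcm = new FacetsCollectorManager();
    // The manager replaces the FacetsCollector argument; the top docs and the
    // reduced facets collector both come back in the result object.
    FacetsCollectorManager.FacetsResult result =
        FacetsCollectorManager.search(searcher, new MatchAllDocsQuery(), 10, fcm);
    TopDocs topDocs = result.topDocs();
    FacetsCollector fc = result.facetsCollector();
  }
}
```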
Closes #13725