This commit introduces support for optionally creating slices that target leaf reader context partitions, allowing them to be searched concurrently. This helps maximize resource usage when searching force-merged indices, or indices with rather large segments, by parallelizing search execution across subsets of the segments being searched.
Note: this commit does not affect the default generation of slices. Segments can be partitioned by overriding the `IndexSearcher#slices(List<LeafReaderContext>)` method to plug in ad-hoc slice creation. Moreover, the existing `IndexSearcher#slices` static method now creates segment partitions when the additional `allowSegmentPartitions` argument is set to `true`.
The overall design of this change builds on the existing search concurrency support based on `LeafSlice` and `CollectorManager`. A new `LeafReaderContextPartition` abstraction is introduced that holds a reference to a `LeafReaderContext` and the range of doc ids it targets. A `LeafSlice` now targets segment partitions, each identified by a `LeafReaderContext` instance and a range of doc ids. A partition may target a whole segment, and partitions of different segments may be freely combined into the same leaf slice, hence searched by the same thread. Multiple partitions of the same segment may not be added to the same leaf slice.
Segment partitions are searched concurrently by leveraging the existing `BulkScorer#score(LeafCollector collector, Bits acceptDocs, int min, int max)` method, which allows scoring a specific subset of documents for a provided `LeafCollector`, in place of `BulkScorer#score(LeafCollector collector, Bits acceptDocs)`, which scores all documents.
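A hedged sketch of the opt-in path described above (the slice size thresholds are illustrative, and the helper names follow the description above, so details may differ slightly from the released API):

```java
import java.util.List;
import java.util.concurrent.Executor;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.IndexSearcher;

// Opt in to segment partitions by overriding the protected slices method and
// delegating to the static helper with partitioning enabled.
static IndexSearcher newPartitioningSearcher(IndexReader reader, Executor executor) {
  return new IndexSearcher(reader, executor) {
    @Override
    protected LeafSlice[] slices(List<LeafReaderContext> leaves) {
      // Same grouping logic as the default slicing, but large segments may be
      // split into partitions that different threads can search concurrently.
      return IndexSearcher.slices(leaves, 250_000, 5, true);
    }
  };
}
```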
## Changes that require migration
The migration guide has the following new items clarifying the contract and the breaking changes required to support intra-segment concurrency:
- `Collector#getLeafCollector` may be called multiple times for the same leaf across distinct `Collector` instances created by a `CollectorManager`. Logic that relies on `getLeafCollector` being called once per leaf per search needs updating.
- a `Scorer`, `ScorerSupplier` or `BulkScorer` may be requested multiple times for the same leaf
- the signature of `IndexSearcher#searchLeaf` changed to accept the range of doc ids to score
- `BulkScorer#score(LeafCollector, Bits)` is removed in favour of `BulkScorer#score(LeafCollector, Bits, int, int)`
- the static `IndexSearcher#slices` method takes an additional boolean argument that optionally enables the creation of segment partitions
- `TotalHitCountCollectorManager` now requires that an array of `LeafSlice`s, retrieved via `IndexSearcher#getSlices`, is provided to its constructor, as shown in the sketch below this list
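A hedged sketch of the new `TotalHitCountCollectorManager` wiring (the wrapper method and query are illustrative):

```java
import java.io.IOException;

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TotalHitCountCollectorManager;

// The manager is handed the searcher's slices so it can account for leaves
// that are split into multiple partitions.
static int countAllDocs(IndexSearcher searcher) throws IOException {
  TotalHitCountCollectorManager manager =
      new TotalHitCountCollectorManager(searcher.getSlices());
  return searcher.search(new MatchAllDocsQuery(), manager);
}
```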
Note: `DrillSideways` is the only component that does not support intra-segment concurrency and needs considerable work to do so, due to its requirement that the entire set of docs in a segment gets scored in one go.
The default searcher slicing is not affected by this PR, but `LuceneTestCase` now randomly leverages intra-segment concurrency. An additional `newSearcher` method is added that takes a `Concurrency` enum as its last argument in place of the `useThreads` boolean flag. This makes it possible to disable intra-segment concurrency for `DrillSideways`-related tests, which support inter-segment concurrency but not intra-segment concurrency.
## Next step
While this change introduces support for intra-segment concurrency, it only lays the foundation for it. There is still a performance penalty for queries that require segment-level computation ahead of time, such as points/range queries. This is an implementation limitation that we expect to improve in future releases, see #13745.
Additionally, we will need to decide what to do about the lack of support for intra-segment concurrency in `DrillSideways` before we can enable intra-segment slicing by default. See #13753.
Closes #9721
We recently introduced support for `keepScores` to `FacetsCollectorManager`.
However, the reduced `FacetsCollector` instance that gets returned does not reflect that:
its inner `keepScores` flag is always `false`. This commit fixes that.
Previously all regexp parsing required determinization and minimization up-front, which can be costly (exponential time).
Lucene 10 removes the determinization and minimization from RegExp and allows the user to choose:
* determinize() the result and get the DFA query execution of previous releases.
* don't determinize() and possibly get a new NFA query that determinizes-as-it-goes.
Complement of arbitrary automata is incompatible with this choice, as it requires determinization for correctness. It was previously a non-default operator that could be enabled with a special flag: RegExp.COMPLEMENT, or would be included with RegExp.ALL, which turns on all special syntax flags. Lucene 10 removed the operator, as it can't be supported while still giving the user the NFA/DFA choice, and requires exponential time during parsing.
To ease transition: add RegExp.DEPRECATED_COMPLEMENT syntax flag and Kind.DEPRECATED_COMPLEMENT node:
* syntax flag can be enabled with RegExp(s, RegExp.DEPRECATED_COMPLEMENT);
* syntax flag is **NOT** included by RegExp.ALL: e.g. you must do RegExp(s, RegExp.ALL | RegExp.DEPRECATED_COMPLEMENT) to get ALL flags and also the deprecated complement (~) operator, as shown in the sketch after this list. This forces the calling code to reference a deprecated constant in order to enable the flag.
* deprecated complement (~) runs with an internal limit: Operations.DEFAULT_DETERMINIZE_WORK_LIMIT. It is not configurable. If it is exceeded, a TooComplexToDeterminizeException is thrown.
* there is intentionally only a single dead-simple test so that this hack doesn't cause us pain with CI/builds. We don't want random automata testing to only rarely encounter an exponential algorithm!
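A hedged sketch of opting back into the deprecated operator (the regular expression itself is just an example):

```java
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.RegExp;

static Automaton deprecatedComplementExample() {
  // RegExp.ALL alone no longer enables ~; the deprecated flag must be OR'ed in.
  RegExp re = new RegExp("~(foo.*)", RegExp.ALL | RegExp.DEPRECATED_COMPLEMENT);
  // May throw TooComplexToDeterminizeException if the internal
  // determinization work limit is exceeded.
  return re.toAutomaton();
}
```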
After Lucene 10 is branched, this deprecated support can be removed by reverting this commit.
We have a few public static utility search methods in FacetsCollector that accept
a Collector as their last argument. In practice, these are expected to be called
with a `FacetsCollector` as that argument. Also, we'd like to remove all
the search methods that take a `Collector` in favour of those that take a
`CollectorManager` (see #12892).
This commit adds the corresponding functionality to `FacetsCollectorManager`.
The new methods take a `FacetsCollectorManager` as their last argument. The return type
is adapted to also include the facet results that were previously made
available through the collector argument.
In order for all tests to work, I had to add the missing support for `keepScores` to
`FacetsCollectorManager`.
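A hedged sketch of the new entry point (the accessor names on the result object are assumptions):

```java
import java.io.IOException;

import org.apache.lucene.facet.FacetsCollector;
import org.apache.lucene.facet.FacetsCollectorManager;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopDocs;

// The returned result bundles both the top hits and the reduced
// FacetsCollector that used to be passed in as an argument.
static void searchWithFacets(IndexSearcher searcher) throws IOException {
  FacetsCollectorManager fcm = new FacetsCollectorManager();
  FacetsCollectorManager.FacetsResult result =
      FacetsCollectorManager.search(searcher, new MatchAllDocsQuery(), 10, fcm);
  TopDocs topDocs = result.topDocs();
  FacetsCollector facetsCollector = result.facetsCollector();
  // ... build Facets implementations from facetsCollector as before
}
```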
Closes #13725
Some of the test methods were commented out when this test class was added. They were later removed,
but the removal left unused methods behind. I also adjusted the visibility of the internal methods
that were public and should have been private, which led to further cleanup: `MatchingHitCollector`
was not needed and could be removed.
`Operations#repeat` currently creates an automaton that has one more final
state, and 2x more transitions for each of the final states. With this change,
the returned automaton has a single final state and only 2x more transitions
for state 0.
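For context, a minimal usage sketch of the method in question (assuming the standard automaton helpers):

```java
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.Operations;

// repeat builds the Kleene star of its input: the result below accepts
// "", "ab", "abab", "ababab", ...
Automaton ab = Automata.makeString("ab");
Automaton abStar = Operations.repeat(ab);
```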
The queue is created with size 1000 * mulFactor, even though the test succeeds without the 1000 factor.
This causes out-of-memory errors, since mulFactor can be as high as 4096 in some cases, and
each slice then gets a collector with a queue of size 1000 * mulFactor. The mere allocation of
such queues is enough to make the test go OOM.
Closes #11754
Neither this method nor either of its two overrides can throw an IOException.
This change removes the throws clause from the method so that
callers no longer have to handle it.
This commit modifies ReadTask to no longer call the deprecated search(Query, Collector).
Instead, it creates a collector manager and calls search(Query, CollectorManager).
The existing protected createCollector method is removed in favour of createCollectorManager, which
returns a CollectorManager in place of a Collector.
SearchWithCollectorTask works the same way if "topScoreDoc" is provided as config. Loading of a custom collector
will no longer work, and needs to be replaced with loading a collector manager by class name instead.
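A hedged sketch of the pattern ReadTask now follows for the "topScoreDoc" case (the constructor arguments shown are illustrative):

```java
import java.io.IOException;

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopScoreDocCollectorManager;

// Build a collector manager up front and let the searcher create one
// collector per slice, then reduce the results into a single TopDocs.
static TopDocs searchTopDocs(IndexSearcher searcher, Query query, int numHits)
    throws IOException {
  TopScoreDocCollectorManager manager =
      new TopScoreDocCollectorManager(numHits, Integer.MAX_VALUE);
  return searcher.search(query, manager);
}
```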
Revert the changes from #12150 in jar-checks.gradle, because tasks in this file share internal state between tasks without using files. Because of this, all tasks here must always execute together, so they cannot define task outputs.
Advancing within a block consists of finding the first index within an array of
128 values whose value is greater than or equal to a target. Given the small size,
it's not obvious whether it's better to perform a linear search, a binary
search or something else... It is surprisingly hard to beat the linear search
that we are using today.
Experiments suggested that the following approach works in practice:
- First check if the next item in the array is greater than or equal to the
target.
- Then find the first 4-value interval that contains our target.
- Then perform a branchless binary search within this interval of 4 values.
This approach still biases heavily towards the case when the target is very
close to the current index, only a bit less so than a linear search.
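A hedged sketch of the idea (illustrative only, not the actual Lucene code), assuming the 128-value buffer is sorted, its length is a multiple of 4, and its last value is >= the target so the scan cannot run past the end:

```java
// Returns the first index i >= from such that buffer[i] >= target.
static int advance(long[] buffer, int from, long target) {
  // 1. Fast path: the very next value is often already large enough.
  if (buffer[from] >= target) {
    return from;
  }
  // 2. Linear scan over 4-value windows (aligned to multiples of 4) until one
  //    whose last entry reaches the target. Safe because buffer.length is a
  //    multiple of 4 and the last value is assumed to be >= target.
  int i = from & ~0x03;
  while (buffer[i + 3] < target) {
    i += 4;
  }
  // 3. Branchless binary search within buffer[i..i+3].
  i += buffer[i + 1] >= target ? 0 : 2;
  i += buffer[i] >= target ? 0 : 1;
  return i;
}
```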
This commit updates the Vectorization Provider to support JDK 23. The API has not changed, so the change minimally bumps the major JDK check and enables the incubating API during testing.
We can save some memory in failure scenarios here (and a tiny bit in
every case) by moving our started flag to the `FutureTask` and using the
callable outright. First of all, we save the wrapper callable, but that
also allows us to just `set(null)` on cancellation instead of waiting
for the task to run to set the `null`.
When longer-running tasks are executing and it would take a while
to get to the cancelled tasks, this saves some memory and allows us to
return from the method earlier.
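A hedged sketch of the pattern (class and method names are illustrative, not the actual Lucene code):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;
import java.util.concurrent.atomic.AtomicBoolean;

// The started flag lives on the task itself, so no wrapper callable is needed.
class CancellableTask<T> extends FutureTask<T> {
  private final AtomicBoolean startedOrCancelled = new AtomicBoolean(false);

  CancellableTask(Callable<T> callable) {
    super(callable);
  }

  @Override
  public void run() {
    // Only run if we won the race against cancellation.
    if (startedOrCancelled.compareAndSet(false, true)) {
      super.run();
    }
  }

  void cancelIfNotStarted() {
    if (startedOrCancelled.compareAndSet(false, true)) {
      // Complete the future immediately instead of waiting for the executor
      // to eventually run the cancelled task and set null there.
      set(null);
    }
  }
}
```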