Neither this method nor either of its two overrides can throw an IOException.
This change removes the throws clauses from this method so that callers no
longer have to handle them.
This commit modifies ReadTask to no longer call the deprecated search(Query, Collector).
Instead, it creates a collector manager and calls search(Query, CollectorManager).
The existing protected createCollector method is removed in favour of createCollectorManager,
which returns a CollectorManager instead of a Collector.
SearchWithCollectorTask works the same way if "topScoreDoc" is provided as config. Loading a custom collector
will no longer work and needs to be replaced with loading a collector manager by class name instead.
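For illustration, a minimal sketch of the new-style call, assuming Lucene's `TopScoreDocCollectorManager` and an `IndexSearcher searcher` / `Query query` in scope (the actual manager created by `createCollectorManager` may differ):

```java
// Sketch: the deprecated search(Query, Collector) is replaced with a
// CollectorManager-based search. TopScoreDocCollectorManager is assumed here.
CollectorManager<TopScoreDocCollector, TopDocs> manager =
    new TopScoreDocCollectorManager(10, Integer.MAX_VALUE);
TopDocs hits = searcher.search(query, manager);
```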
Revert the changes from #12150 in jar-checks.gradle, because tasks in this file share internal state without using files. Because of this, all tasks here must always execute together, so they cannot define task outputs.
Advancing within a block consists of finding the first index within an array of
128 values whose value is greater than or equal to a target. Given the small size,
it's not obvious whether it's better to perform a linear search, a binary
search or something else... It is surprisingly hard to beat the linear search
that we are using today.
Experiments suggested that the following approach works in practice:
- First check if the next item in the array is greater than or equal to the
target.
- Then find the first 4-values interval that contains our target.
- Then perform a branchless binary search within this interval of 4 values.
This approach still biases heavily towards the case when the target is very
close to the current index, only a bit less so than a linear search.
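As a rough sketch of this strategy (hypothetical code, not the actual implementation; assumes an index with a value >= target exists within the 128-value block):

```java
// Hypothetical sketch: find the first index in values[from..] whose value
// is >= target, assuming such an index exists within the block.
static int advance(long[] values, int from, long target) {
  // 1. Fast path: the next item is often already >= target.
  if (values[from] >= target) {
    return from;
  }
  // 2. Linear scan over 4-value intervals to find the one containing target.
  int i = from + 1;
  while (values[i + 3] < target) {
    i += 4;
  }
  // 3. Branchless binary search within the 4-value interval: conditional
  //    adds instead of unpredictable branches.
  i += values[i + 1] < target ? 2 : 0;
  i += values[i] < target ? 1 : 0;
  return i;
}
```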
This commit updates the Vectorization Provider to support JDK 23. The API has not changed, so the changes are minimal: bump the major JDK version check and enable the incubating API during testing.
We can save some memory in failure scenarios here (and a tiny bit in
every case) by moving our started flag to the `FutureTask` and using the
callable directly. Not only do we save the wrapper callable, this also
allows us to just `set(null)` on cancellation instead of waiting for the
task to run to set the `null`.
When longer-running tasks are executing and it would take a while to get
to the cancelled tasks, this saves some memory and allows us to return
from the method earlier.
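A minimal sketch of the idea (hypothetical code, assuming a `FutureTask` subclass; the actual class involved may differ):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: the started flag lives on the FutureTask itself, so a
// cancelled-but-unstarted task can be completed immediately with set(null).
final class StartOnceTask<T> extends FutureTask<T> {
  private final AtomicBoolean started = new AtomicBoolean();

  StartOnceTask(Callable<T> callable) {
    super(callable); // use the callable directly, no wrapper needed
  }

  @Override
  public void run() {
    if (started.compareAndSet(false, true)) {
      super.run();
    }
  }

  void cancelIfUnstarted() {
    if (started.compareAndSet(false, true)) {
      set(null); // complete now instead of waiting for run() to be scheduled
    }
  }
}
```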
This updates file formats to compute prefix sums by summing up 8 deltas per
long at the same time if the number of bits per value is 4 or less, and 4
deltas per long at the same time if the number of bits per value is between 5
and 11 inclusive. Otherwise, we keep summing up 2 deltas per long as we do
today.
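For illustration, the classic shift-and-add way to sum several packed deltas per long at once looks roughly like this (a sketch, assuming running sums never overflow into the next lane; carrying sums across longs is omitted):

```java
// Illustrative sketch of summing packed deltas within a single long; each
// lane ends up holding the sum of itself and all lower lanes.
static long innerPrefixSum8(long l) { // 8 lanes of 8 bits each
  l += l << 8;
  l += l << 16;
  l += l << 32;
  return l;
}

static long innerPrefixSum4(long l) { // 4 lanes of 16 bits each
  l += l << 16;
  l += l << 32;
  return l;
}
```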
The `PostingDecodingUtil` was slightly modified because more bit-per-value
counts now need different shifts applied to the input data.
E.g. now that we store integers that require 5 bits per value as 16-bit
integers under the hood rather than 8-bit integers, we extract the first
values by shifting by 16-5=11, 16-2*5=6 and 16-3*5=1, and then decode tail
values from the remaining bit per 16-bit integer.
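Sketched out (hypothetical code, not the actual `PostingDecodingUtil`; assumes `out.length == 3 * in.length` plus separately decoded tails), extracting the three full 5-bit values per 16-bit integer looks like:

```java
// Hypothetical sketch: unpack three 5-bit values per 16-bit integer; the
// remaining low bit of each integer feeds the tail decoding (omitted).
static void decode5(short[] in, int[] out) {
  final int mask = (1 << 5) - 1;
  for (int i = 0; i < in.length; i++) {
    int packed = in[i] & 0xFFFF;
    out[3 * i]     = (packed >>> 11) & mask; // 16 - 5 = 11
    out[3 * i + 1] = (packed >>> 6) & mask;  // 16 - 2*5 = 6
    out[3 * i + 2] = (packed >>> 1) & mask;  // 16 - 3*5 = 1
    // (in[i] & 1) contributes to tail values, combined across integers
  }
}
```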
* Only run the ide configuration block for eclipse when explicitly invoked. Fix property access ordering here and there.
* Correct dependsOn task name.
* Correct crlf/encoding after versionCatalogFormatDeps finishes.
* Change java-library to java-base in the plugin applied within the eclipse task.
* Use ant.fixcrlf to correct line endings.
* Add CHANGES entry.
* Simplify fixcrlf
---------
Co-authored-by: Uwe Schindler <uschindler@apache.org>
Note that this results in some silly code duplication, so this is meant as a temporary fix while we better understand the root cause of the regression.
Our postings use a layout that helps take advantage of Java's
auto-vectorization to be reasonably fast to decode. But we can make it a bit
faster by vectorizing directly from the MemorySegment instead of first
copying data into a long[].
This approach only works when the `Directory` uses `MemorySegmentIndexInput`
under the hood, i.e. `MMapDirectory` on JDK 21+.
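As a rough sketch of the idea (hypothetical code using the incubating Vector API; not the actual decoding loop):

```java
import java.lang.foreign.MemorySegment;
import java.nio.ByteOrder;
import jdk.incubator.vector.LongVector;
import jdk.incubator.vector.VectorSpecies;

// Hypothetical sketch: load vectors straight from the mapped MemorySegment
// instead of first bulk-copying into a long[] and vectorizing over the array.
class SegmentDecodeSketch {
  static final VectorSpecies<Long> SPECIES = LongVector.SPECIES_PREFERRED;

  // Assumes out.length is a multiple of SPECIES.length().
  static void decode(MemorySegment segment, long offset, long[] out) {
    for (int i = 0; i < out.length; i += SPECIES.length()) {
      LongVector v = LongVector.fromMemorySegment(
          SPECIES, segment, offset + (long) i * Long.BYTES, ByteOrder.LITTLE_ENDIAN);
      // shifts/masks to decode packed postings would go here, operating on v
      v.intoArray(out, i);
    }
  }
}
```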
Co-authored-by: Uwe Schindler <uschindler@apache.org>
This adds a new, ground-up implementation of faceting that computes aggregations while collecting. This has the following advantages over the current faceting module:
1. Allows for flexible aggregation logic instead of just "counts" in a general way (essentially makes what's available in today's "association faceting" available beyond taxonomy-based fields).
2. When aggregating beyond "counts," association value computation can be expensive. This implementation allows values to be computed only once when used across different aggregations.
3. Reduces latency by leveraging concurrency during collection (but potentially with increased overall cost).
This work has been done in the sandbox module for now since it is not yet complete (the current faceting module covers use-cases this doesn't yet) and it needs time to bake to work out API and implementation rough edges.
---------
Co-authored-by: Egor Potemkin <epotyom@amazon.com>
Co-authored-by: Shradha Shankar <shrdsha@amazon.com>
Co-authored-by: Greg Miller <gsmiller@gmail.com>
There is a tricky race condition with DWPT threads. It is possible that a flush starts by advancing the deleteQueue (in charge of creating seqNos). Thus, for the referenced deleteQueue, there should be a cap on the number of actions left.
However, it is possible that after the advance, but before the DWPTs are actually marked for flush, a DWPT gets freed and checked out again for use.
To replicate this extreme behavior, see: https://github.com/apache/lucene/compare/main...benwtrent:lucene:test-replicate-and-debug-13127?expand=1
This commit prevents a DWPT from being added back to the free list if its delete queue has been advanced. This is because the `maxSeqNo` for that queue was created accounting only for the current number of active threads. If the thread gets handed out again while still referencing the already-advanced queue, it is possible that the seqNo actually advances past the set `maxSeqNo`.
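Conceptually (a hypothetical sketch, not the actual Lucene code; `freeList` and `currentDeleteQueue` are illustrative names), the guard looks like:

```java
// Hypothetical sketch of the guard: only return a DWPT to the free list if
// its delete queue is still the current one, i.e. has not been advanced.
void release(DocumentsWriterPerThread dwpt) {
  if (dwpt.deleteQueue == currentDeleteQueue) {
    freeList.add(dwpt); // safe: maxSeqNo still accounts for this thread
  }
  // otherwise drop it: handing it out again could push seqNos past the
  // maxSeqNo computed when the queue was advanced
}
```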
closes: https://github.com/apache/lucene/issues/13127
closes: https://github.com/apache/lucene/issues/13571
A number of functions in CandidateMatcher are protected or package-private,
meaning that client code can't use them, which makes it difficult to build custom
wrapper matchers. This commit makes these functions public.