While going through: https://github.com/apache/lucene/pull/12582
I noticed that our offheap vector readers haven't changed in a while; we just keep copying them from codec to codec for no reason.
To make adding a new vector codec simpler, this refactors the lucene95 codec so that its offheap vector storage format (readers/writers) can be reused by other codecs.
Additionally, it handles reading and writing the appropriate fields for sparse vectors via the provided index inputs/outputs.
This should reduce the churn in new codecs significantly.
Currently FSTCompiler and FST have a circular dependency on each
other. FSTCompiler creates an instance of FST, and when adding a node
(add(IntsRef input, T output)) it delegates to FST.addNode(), passing
itself as a parameter. This introduces a circular dependency and mixes
FST construction code with FST traversal code.
To make matters worse, it implies that one can call FST.addNode with an
arbitrary FSTCompiler (as it's a parameter), when in reality it should
only be called by the compiler that created the FST.
This commit moves the addNode method to FSTCompiler instead.
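A minimal sketch of the shape of the change (stub types and simplified signatures, not the real classes, which carry much more state):

```
// Stand-ins for the org.apache.lucene.util.fst classes.
final class FST<T> {}
final class UnCompiledNode<T> {}

final class FSTCompiler<T> {
  private final FST<T> fst = new FST<>(); // the compiler owns the FST it builds

  // Previously FST.addNode(FSTCompiler<T>, UnCompiledNode<T>): any compiler
  // could be passed in to mutate any FST. Now the construction code lives
  // with the compiler and operates only on its own FST.
  long addNode(UnCompiledNode<T> nodeIn) {
    // ... serialize nodeIn into this.fst's byte store ...
    return -1; // placeholder for the node's address
  }
}
```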
Co-authored-by: Anh Dung Bui <buidun@amazon.com>
The paging prompt in the SearchFiles demo throws a NullPointerException when standard input is exhausted, e.g. by the user responding with ^D:
```
Press (n)ext page, (q)uit or enter number to jump to a page.
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "String.length()" because "line" is null
at org.apache.lucene.demo.SearchFiles.doPagingSearch(SearchFiles.java:244)
at org.apache.lucene.demo.SearchFiles.main(SearchFiles.java:152)
```
```
Press (p)revious page, (n)ext page, (q)uit or enter number to jump to a page.
n
Only results 1 - 50 of 104 total matching documents collected.
Collect more (y/n) ?
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "String.length()" because "line" is null
at org.apache.lucene.demo.SearchFiles.doPagingSearch(SearchFiles.java:198)
at org.apache.lucene.demo.SearchFiles.main(SearchFiles.java:152)
```
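The stack traces point at the result of BufferedReader#readLine being used without a null check; readLine() returns null once stdin is closed. A minimal sketch of the kind of guard involved (hypothetical helper, not the exact demo code):

```
import java.io.BufferedReader;
import java.io.IOException;

// Treat end-of-stream (^D) like an explicit quit instead of dereferencing null.
static String readPagingCommand(BufferedReader in) throws IOException {
  String line = in.readLine();
  return (line == null || line.length() == 0) ? "q" : line.trim();
}
```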
Co-authored-by: Piotrek Żygieło <pzygielo@users.noreply.github.com>
MaxScoreBulkScorer computes windows based on the set of clauses that were
essential in the *previous* window. This usually works well as the set of
essential clauses tends to be stable over time, but there are cases when
clauses get swapped between the essential and non-essential sets, and computing
windows based on the previous window can lead to suboptimal choices.
This PR creates a first proposal for the next score window using essential
clauses from the previous window, and then creates a second proposal once
scorers have been partitioned and their max scores have been updated. If this
second proposal results in a smaller window, it gets used.
On one particular query (`the incredibles`) and a reordered index with BP
(which increases chances that scorers move from essential to non-essential or
vice-versa), this change yielded a 2.3x speedup.
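Roughly, the window end becomes the minimum of the two proposals. A hedged sketch of the idea; every name below is invented, not the actual MaxScoreBulkScorer internals:

```
// Two-proposal window selection, schematically (stub helpers below).
int nextWindowEnd(int windowMin) {
  int first = proposeWindowEnd(windowMin);   // from the previous window's essential clauses
  repartitionAndUpdateMaxScores(windowMin);  // refresh the essential/non-essential split
  int second = proposeWindowEnd(windowMin);  // from the freshly updated partition
  return Math.min(first, second);            // keep the tighter window
}

int proposeWindowEnd(int windowMin) { return windowMin + 1024; } // stub
void repartitionAndUpdateMaxScores(int windowMin) {}             // stub
```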
The TaskExecutor used to run concurrent operations could leave tasks running behind when one of them threw an exception. This commit ensures that it instead waits for all tasks to complete before re-throwing the exception. If more than one exception is thrown, the additional ones are added as suppressed exceptions to the first one caught.
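A sketch of the drain-then-rethrow pattern this describes (simplified; the real code works with Lucene's task abstraction rather than bare futures):

```
import java.util.List;
import java.util.concurrent.Future;

// Wait on every future, collecting failures instead of bailing on the first.
static <T> void awaitAll(List<Future<T>> futures) throws Exception {
  Exception first = null;
  for (Future<T> f : futures) {
    try {
      f.get(); // block until this task finishes, even if an earlier one failed
    } catch (Exception e) {
      if (first == null) first = e;
      else first.addSuppressed(e); // later failures become suppressed exceptions
    }
  }
  if (first != null) throw first;
}
```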
This is a minor refactor of the HNSW graph merging logic.
Instead of checking for a concrete KnnVectorsReader implementation, this commit adjusts the logic to check whether a specific interface is implemented before returning a view of the HnswGraph.
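Presumably along these lines (hedged sketch; the interface involved is HnswGraphProvider, but the exact signatures may differ from what is shown):

```
// During merge: reuse an existing graph if the reader can expose one,
// regardless of which concrete reader produced it.
if (reader instanceof HnswGraphProvider) {
  HnswGraph graph = ((HnswGraphProvider) reader).getGraph(fieldName);
  // ... seed the merged graph from `graph` ...
} else {
  // fall back to rebuilding the graph from the raw vectors
}
```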
Given a query that implements Accountable, the LRUQueryCache would increment
its internal accounting by the amount reported by Accountable.ramBytesUsed(), but
only decrement on eviction by the default amount used for all other queries. This meant that
the cache could eventually think it had run out of space, even if there were no queries
in it at all. This commit ensures that queries that implement Accountable are always
accounted for correctly.
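A sketch of the symmetric accounting this implies (invented helper, not the actual LRUQueryCache code):

```
// Compute one consistent per-query cost and use it on both add and evict.
long costOf(Query query) {
  return (query instanceof Accountable)
      ? ((Accountable) query).ramBytesUsed()
      : QUERY_DEFAULT_RAM_BYTES_USED;
}

void onQueryCache(Query query)    { ramBytesUsed += costOf(query); }
void onQueryEviction(Query query) { ramBytesUsed -= costOf(query); } // previously: always the default
```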
As we introduce more places that add concurrency (there are
currently three), a common pattern is emerging: check whether an
executor was provided, then either run sequentially on the caller
thread or in parallel relying on the executor.
That can be improved by internally creating a TaskExecutor backed by
an executor that runs tasks on the caller thread. The task executor is
then never null, so the common conditional is no longer needed: the
concurrent path that uses the task executor becomes the default and
only choice for operations that can be parallelized.
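A minimal sketch of the fallback, assuming a TaskExecutor constructor that takes an Executor:

```
import java.util.concurrent.Executor;

// A caller-thread executor: Runnable::run executes each task inline.
Executor effective = (providedExecutor == null) ? Runnable::run : providedExecutor;
TaskExecutor taskExecutor = new TaskExecutor(effective);
// Callers always go through taskExecutor -- no more null checks at each call site.
```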
The "create github release" step was missing from the release wizard. We have forgotten about it a few times recently.
While at it, I also expanded the instructions around closing the current milestone and moved them to after the step that removes open issues / PRs from the current milestone.
We recently made TaskExecutor public. It currently exposes two methods:
one to create tasks given a collection of callables, and one to execute
all tasks created at step 1. We can instead expose a single public
method that takes a collection of callables and internally creates the
appropriate tasks. This simplifies the API, and stops us from leaking
the internal Task abstraction, which can be kept private.
Note that this is backwards compatible as we have not yet released a
version in which TaskExecutor is public. It is marked experimental
anyway.
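A hedged usage sketch of the collapsed API (types simplified; the exact signature may differ):

```
import java.util.List;
import java.util.concurrent.Callable;

// One call: tasks are created and executed internally from the callables.
List<Callable<Integer>> callables = List.of(() -> 1, () -> 2);
List<Integer> results = taskExecutor.invokeAll(callables);
```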
Add a length() method to the RandomAccessInput interface. In addition, deprecate
ByteBuffersDataInput#size in favour of this new method.
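A small usage sketch, assuming ByteBuffersDataInput keeps implementing RandomAccessInput:

```
import java.nio.ByteBuffer;
import java.util.List;
import org.apache.lucene.store.ByteBuffersDataInput;

ByteBuffersDataInput in =
    new ByteBuffersDataInput(List.of(ByteBuffer.wrap(new byte[] {1, 2, 3})));
long numBytes = in.length();            // new interface method, replaces size()
byte last = in.readByte(numBytes - 1);  // absolute read of the final byte
```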
Until now, LuceneTestCase#newSearcher randomly associates the returned
IndexSearcher instance with an ad-hoc created executor, which gets shut
down when the index reader is closed.
This has made us catch a couple of cases where we were not properly
closing readers in tests. Most recently, we have been seeing test
failures (OOM - unable to create thread) due to too many executor
instances created as part of the same test. This is to be attributed to
creating too many searcher instances, each one getting its separate
executor, all of which only get shut down at the end of the entire
suite. The main offender is QueryUtils, which creates a new searcher
for each leaf reader, while the top-level reader gets closed in the
AfterClass; hence all the executors stay around for the entire duration
of any test suite that relies on QueryUtils.
This commit eagerly creates an executor in an additional before-class
method of LuceneTestCase, and associates that executor with each
searcher that is supposed to get a non-null one.
Note that the executor is shut down in the after-class method to ensure
that no threads leak in tests.
This has the additional advantage that it removes the need to close the
executor as part of an index reader close listener, which also requires
the reader to have an associated reader cache helper.
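A hedged sketch of the suite-scoped lifecycle (invented names, simplified; the real hooks live in LuceneTestCase):

```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.junit.AfterClass;
import org.junit.BeforeClass;

private static ExecutorService suiteExecutor;

@BeforeClass
public static void setUpSuiteExecutor() {
  suiteExecutor = Executors.newFixedThreadPool(2); // one pool for the whole suite
}

@AfterClass
public static void tearDownSuiteExecutor() throws InterruptedException {
  suiteExecutor.shutdown(); // no per-searcher pools left to leak threads
  suiteExecutor.awaitTermination(10, TimeUnit.SECONDS);
  suiteExecutor = null;
}
```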
This implements a specialized `BlockMaxConjunctionBulkScorer`, which is really
the same as `BlockMaxConjunctionScorer`, but as a `BulkScorer` instead of a
`Scorer`. Also, it doesn't support two-phase iterators, in order to focus on the
common case where clauses, such as term queries, have no two-phase iterator.
If a clause does have one, the query keeps running as a
`BlockMaxConjunctionScorer` wrapped in a `DefaultBulkScorer`.
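Schematically, the dispatch might look like this (hedged; the constructor shapes are simplified and the helper names are invented):

```
// Use the specialized bulk scorer only when no clause is two-phase.
boolean anyTwoPhase = scorers.stream().anyMatch(s -> s.twoPhaseIterator() != null);
if (anyTwoPhase == false) {
  return new BlockMaxConjunctionBulkScorer(maxDoc, scorers); // specialized path
}
return new DefaultBulkScorer(new BlockMaxConjunctionScorer(weight, scorers)); // fallback
```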
This commit adds the ability to read/write binary stored values using a DataInput and a number of bytes. By default, implementations allocate those bytes in a newly created byte array and call the already existing byte[]-based method.
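A sketch of the default-method bridge this describes, over a simplified hypothetical interface:

```
import java.io.IOException;
import org.apache.lucene.store.DataInput;

interface BinaryValueConsumer {
  void binaryValue(byte[] value) throws IOException; // pre-existing method

  // New streaming overload; the default materializes and delegates.
  default void binaryValue(DataInput in, int numBytes) throws IOException {
    byte[] bytes = new byte[numBytes];
    in.readBytes(bytes, 0, numBytes);
    binaryValue(bytes);
  }
}
```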
TaskExecutor is currently package private. We have scenarios where we
want to parallelize execution and reuse TaskExecutor outside of its
package, hence this commit makes it public (and experimental).
Note that its constructor remains package private as it is supposed to
be created by the index searcher, and later retrieved from it via the
appropriate getter, which is also made public as part of this commit.
Co-authored-by: gf2121 <52390227+gf2121@users.noreply.github.com>
Concurrency is currently applied once per search call, either when
search is called, or when a concurrent query rewrite happens; the two
generally don't happen within one another. There are situations in which
we are going to introduce parallelism in places where there could be
multiple inner levels of parallelism requested as each task could try to
parallelize further. In these cases, with certain executor
implementations, like ThreadPoolExecutor, we may deadlock as we are
waiting for all tasks to complete but they are waiting for threads to
free up to complete their execution.
This commit introduces a simple safeguard that makes sure that we only
parallelize via the executor at the top-level invokeAll call. When each
task tries to parallelize further, we just execute them directly instead
of submitting them to the executor.
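A hedged sketch of the safeguard (the counter name and helpers are approximations, not the exact TaskExecutor code):

```
// Tracks whether the current thread is already executing a TaskExecutor task.
private static final ThreadLocal<Integer> runningTasks = ThreadLocal.withInitial(() -> 0);

<T> List<T> invokeAll(Collection<Callable<T>> callables) throws IOException {
  if (runningTasks.get() > 0) {
    // Nested call: run inline on the caller thread; submitting to a bounded
    // pool here could deadlock, as parents wait on children for free threads.
    return runOnCallerThread(callables);
  }
  // Top-level call: submit to the executor; each task increments runningTasks
  // on its worker thread before invoking the callable.
  return submitToExecutor(callables);
}
```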
Co-authored-by: Adrien Grand <jpountz@gmail.com>
When re-using the HNSW graph during segment merges, it is possible for more than the configured M*2 connections to be made per vector.
In those instances, we should still allow the graph to be read from the codec and remain searchable.