This uses the `IndexInput#prefetch` API for postings. This relies on heuristics, as we don't know ahead of time what data we will need from a postings list:
- Postings lists are prefetched entirely when they are short (< 16kB).
- Impacts enums also prefetch the first page of skip data.
- Postings enums prefetch skip data on the first call to advance().
Positions, offsets and payloads are never prefetched.
Putting the `IndexInput#prefetch` call in `TermsEnum#postings` and `TermsEnum#impacts` works well because `BooleanQuery` will first create postings/impacts enums for all clauses before it starts unioning/intersecting them. This allows the prefetching logic to run in parallel across all clauses of the same query on the same segment.
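As an illustration, here is a minimal sketch of the short-list heuristic, assuming the offset-based `prefetch(long offset, long length)` signature described later in this document; the helper's shape is mine, not the actual codec code:

```java
import java.io.IOException;
import org.apache.lucene.store.IndexInput;

// Sketch only: prefetch a short postings list in its entirety when the
// postings enum is created, so that BooleanQuery's per-clause enum creation
// kicks off I/O for all clauses in parallel.
static void maybePrefetchPostings(IndexInput docIn, long offset, long length)
    throws IOException {
  if (length < 16 * 1024) { // short list (< 16kB): prefetch it entirely
    docIn.prefetch(offset, length);
  }
}
```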
This commit fixes a corner case in the ScalarQuantizer when just a single vector is present. I ran into this when updating a test that previously passed successfully with Lucene 9.10 but fails in 9.x.
The score error correction is calculated to be NaN, as there are no score docs or variance.
This commit updates the writer to handle the case where there are no values.
Previously (before #13369) there was a check that there were some point values before trying to write; this is no longer the case. The code in writeFieldNDims assumes that values is not empty: an empty values results in calculating a negative number of splits, and a negative array size to hold the splits.
The fix is trivial: return null when values is empty, since null is an allowable return value from this method. Note: writeField1Dim is already able to handle empty values.
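A minimal sketch of the guard's shape (the names here are illustrative stand-ins for the writer's internals, not the actual BKD code):

```java
// Hypothetical sketch: bail out with null for empty values instead of letting
// the split-count arithmetic go negative further down.
interface Values {
  long size();
}

static Runnable writeFieldNDims(Values values) {
  if (values.size() == 0) {
    return null; // null is an allowable return value from this method
  }
  // ... compute the number of splits from values.size() and write the tree;
  // with empty values this arithmetic produced a negative array size.
  return () -> {};
}
```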
The issue outlines the problem. When we have point value dimensions, segment core readers assume that there will be point files.
However, when soft deletes are allowed and a document fails indexing before a point field could be written, this assumption breaks and the NRT reader fails to open. I settled on always flushing a point file if the field info says there are point fields, even if there aren't any docs in the buffer.
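A hedged sketch of that decision at flush time (the helper is hypothetical; only `FieldInfo#getPointDimensionCount` is the real API):

```java
import java.io.IOException;
import org.apache.lucene.index.FieldInfo;

// Sketch only: flush a points file whenever the schema declares point
// dimensions, even if no point values were buffered for this segment.
static void flushPointsField(FieldInfo fieldInfo) throws IOException {
  if (fieldInfo.getPointDimensionCount() > 0) {
    writePointsFile(fieldInfo); // hypothetical helper; may write zero values
  }
}

static void writePointsFile(FieldInfo fieldInfo) throws IOException {
  // ... codec-specific writing elided ...
}
```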
Closes #13353
This makes `IndexInput#prefetch` take an offset instead of being relative to
the current position. This avoids requiring callers to seek only to call
`prefetch()`.
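Sketched as a before/after (variable names assumed):

```java
import java.io.IOException;
import org.apache.lucene.store.IndexInput;

static void prefetchAt(IndexInput in, long offset, long length) throws IOException {
  // Before this change: in.seek(offset); in.prefetch(length);
  // After it, the offset is explicit, so no seek is needed just to prefetch:
  in.prefetch(offset, length);
}
```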
Follow-up to #13181
I noticed the quantized interface had a slightly different name.
Additionally, testing showed we are inconsistent when there aren't any vectors to score. This makes the response consistent (e.g. null when there aren't any vectors).
Depending on how we quantize and then scale, dot-product scores can edge below 0.
This is exceptionally rare; I have only seen it in extreme circumstances in tests (with random data and low dimensionality).
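A hedged sketch of the kind of guard this implies; the `(1 + dot) / 2` scaling is the usual normalized dot-product-to-score mapping, used here for illustration:

```java
// Clamp at 0 so rare quantization/scaling artifacts cannot surface as a
// negative score. Sketch only; the actual scorer code may differ.
static float dotProductScore(float dotProduct) {
  return Math.max((1f + dotProduct) / 2f, 0f);
}
```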
This commit fixes an issue in the default flat vector scorer supplier whereby subsequent scorers created by the supplier can affect previously created scorers.
The issue is that we're sharing the backing array from the vector values, and overwriting it in subsequent scorers. We just need to use the ordinal to protect the scorer instance from mutation.
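A hypothetical sketch of the bug's shape and the fix (a pure-Java stand-in, not the actual supplier):

```java
// Each scorer captures its ordinal and reads through it, instead of copying
// the query vector into one shared scratch array that the next call to
// scorer(ord) would overwrite.
class FlatScorerSupplier {
  private final float[][] vectors; // stand-in for the codec's vector values

  FlatScorerSupplier(float[][] vectors) {
    this.vectors = vectors;
  }

  interface Scorer {
    float score(int targetOrd);
  }

  Scorer scorer(int queryOrd) {
    return targetOrd -> dot(vectors[queryOrd], vectors[targetOrd]);
  }

  private static float dot(float[] a, float[] b) {
    float sum = 0;
    for (int i = 0; i < a.length; i++) {
      sum += a[i] * b[i];
    }
    return sum;
  }
}
```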
This adds `IndexInput#prefetch`, which is an optional operation that instructs
the `IndexInput` to start fetching bytes from storage in the background. These
bytes will be picked up by follow-up calls to the `IndexInput#readXXX` methods.
In the future, this will help Lucene move from a maximum of one I/O operation
per search thread to one I/O operation per search thread per `IndexInput`.
Typically, when running a query on two terms, the I/O into the terms dictionary
is sequential today. In the future, we would ideally do these I/Os in parallel
using this new API. Note that this will require API changes to some classes
including `TermsEnum`.
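As a sketch, the contract looks roughly like this (shown with the offset-based signature from the follow-up above; the class name is a stand-in):

```java
import java.io.IOException;

abstract class PrefetchingInput {
  /**
   * Optional hint that bytes in [offset, offset + length) will be needed
   * soon. No-op by default, so implementations without prefetch support
   * keep working unchanged.
   */
  public void prefetch(long offset, long length) throws IOException {}
}
```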
I settled on this API because it's simple and wouldn't require making all
Lucene APIs asynchronous to take advantage of extra I/O concurrency, which I
worry would make the query evaluation logic too complicated.
This change will require follow-ups to start using this new API when working
with terms dictionaries, postings, etc.
Relates #13179
Co-authored-by: Uwe Schindler <uschindler@apache.org>
Elasticsearch (which is based on Lucene) can automatically infer types for users with its dynamic mapping feature. When users index low-cardinality fields such as gender / age / status, they often use numbers to represent the values; ES will infer these fields as long, and ES uses BKD as the index for long fields.
Just as #541 said, when the data volume grows, building the result set for low-cardinality fields makes CPU usage and load very high, even if we use a boolean query with filter clauses for the low-cardinality fields.
One reason is that a ReentrantLock limits access to LRUQueryCache. The QPS and cost of these queries are often high, which frequently causes lock acquisition to fail when accessing the cache, resulting in low concurrency.
So I replaced the ReentrantLock with a ReentrantReadWriteLock, and only take the read lock when getting the cache for a query.
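A hedged sketch of the locking shape (a stand-in class, not the actual LRUQueryCache code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Readers no longer contend with each other, only with writers that
// mutate the cache structures.
class CacheLocking {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<Object, Object> cache = new HashMap<>();

  Object get(Object key) {
    lock.readLock().lock(); // shared: many query threads may read at once
    try {
      return cache.get(key);
    } finally {
      lock.readLock().unlock();
    }
  }

  void put(Object key, Object value) {
    lock.writeLock().lock(); // exclusive: updates still serialize
    try {
      cache.put(key, value);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```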
With quantized vectors, as with the current raw vectors, we separate "scoring" from "iteration", requiring the user to always iterate the raw vectors and provide their own similarity function.
While this is flexible, it creates frustration in:
- Just iterating and scoring, especially since the field already has a similarity function stored...Why can't we just know which one to use and use it!
- Iterating and scoring quantized vectors. By default it would be good to be able to iterate and score quantized vectors (e.g. without going through the HNSW graph).
This significantly hampers support for true exact kNN search.
This commit extends the vector value iterators to be able to return a scorer given some vector value (which is what this PR demonstrates). The scorer contains a copy of the originating iterator and allows iterating and scoring in the most optimized way the provided codec can offer.
Users can still iterate vector values directly, read them on heap, and score any way they please.
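A hedged usage sketch of brute-force exact search with such a scorer; the method names follow my reading of this change and may differ in detail:

```java
import java.io.IOException;
import org.apache.lucene.index.FloatVectorValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.VectorScorer;

static void exactSearch(LeafReader reader, String field, float[] query)
    throws IOException {
  FloatVectorValues values = reader.getFloatVectorValues(field);
  if (values == null) {
    return; // field has no vectors
  }
  VectorScorer scorer = values.scorer(query); // null when there are no vectors
  if (scorer == null) {
    return;
  }
  DocIdSetIterator it = scorer.iterator();
  for (int doc = it.nextDoc();
      doc != DocIdSetIterator.NO_MORE_DOCS;
      doc = it.nextDoc()) {
    float score = scorer.score(); // scores the iterator's current document
    // ... collect (doc, score) ...
  }
}
```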
This commit adds a separate option, tests.defaultvectorization, to allow running Panama Vectorization for all tests with suitable C2 defaults.
For example:
`./gradlew :lucene:core:test -Ptests.defaultvectorization=true`
Co-authored-by: Uwe Schindler <uschindler@apache.org>
We hit a Codec bug in Elasticsearch, but it went unnoticed because our
tests extend from BaseDocValuesFormatTestCase, which doesn't attempt to
read the doc-values of the same document twice. This change strengthens
BaseDocValuesFormatTestCase checks and randomly inserts that access
pattern.
Because of concurrent merging (#13124), multiple threads may be updating
(different) attributes concurrently, so we need to make reads and writes to
attributes thread-safe.
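A hedged sketch of one way to do that (a stand-in class; the actual change may differ in detail):

```java
import java.util.HashMap;
import java.util.Map;

// Serialize attribute reads/writes so that concurrent merge threads can
// safely touch (different) attributes on the same object.
class Attributes {
  private final Map<String, String> attributes = new HashMap<>();

  public synchronized String getAttribute(String key) {
    return attributes.get(key);
  }

  public synchronized String putAttribute(String key, String value) {
    return attributes.put(key, value);
  }
}
```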
This updates the int4 dot-product comparison with an optimized path for when one of the vectors is compressed (the most common search case). This change actually makes compressed search on ARM faster than uncompressed. On AVX512/256 it is still slightly slower than uncompressed, but much faster with this optimization than before (which eagerly decompressed).
This optimization is tied tightly to how the vectors are actually compressed and stored; consequently, I added a new scorer that lives within the lucene99 codec.
So, this gives us an 8x size reduction over float32, well more than 2x faster queries than float32, and no need to rerank, as recall and accuracy remain excellent.
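A scalar sketch of the packed comparison idea, assuming two int4 values per byte with value i in the low nibble and value i + packed.length in the high nibble (the real kernels are SIMD and codec-specific):

```java
// Score a packed int4 vector against an unpacked one without first
// materializing the decompressed array. unpacked.length == 2 * packed.length.
static int int4DotProductPacked(byte[] unpacked, byte[] packed) {
  int total = 0;
  for (int i = 0; i < packed.length; i++) {
    int b = packed[i] & 0xFF;
    total += (b & 0x0F) * unpacked[i];                         // low nibble
    total += ((b >>> 4) & 0x0F) * unpacked[i + packed.length]; // high nibble
  }
  return total;
}
```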
Improve MissingDoclet linter to check records correctly:
- exclude default ctors
- exclude accessor methods (like with enums)
- on "method" level checking also check that every record component has an @param tag
Background:
Historically IndexWriter treated OutOfMemoryError specially, for defensive
reasons. This was later expanded to VirtualMachineError, to try to play it safe
in similar disastrous circumstances.
We should treat any Error as a tragedy, as it isn't an Exception, and it
isn't something a "reasonable" application should catch. IndexWriter
should be reasonable. See #7049 for some of the reasoning.
We can't pretend this will detect any possible scenario that might cause
harm; e.g., a JVM bug might simply miscompile some code and cause silent
corruption. But we should try harder by playing by the rules.
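The policy, sketched (hypothetical hooks, not IndexWriter's actual code):

```java
import java.util.function.Consumer;

// Any Error is recorded as a tragic event and rethrown, never swallowed.
static void runGuarded(Runnable indexingWork, Consumer<Throwable> onTragicEvent) {
  try {
    indexingWork.run();
  } catch (Error tragedy) {
    onTragicEvent.accept(tragedy); // poison the writer; no pretense of recovery
    throw tragedy;
  }
}
```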
Closes #13275
Closes #7049
* Change MASKS from int[] to byte[], and assign it with left shift.
* Only set the first byte for tmpUTF8.
* Change value type from int to byte.
* Remove stale comment.
Instead of making a separate thing pluggable inside of the FieldFormat, this instead keeps the vector similarities as they are, but allows a custom scorer to be provided to the FlatVector storage used by HNSW.
This idea is akin to the compression extensions we have, but in this case it's for vector scorers.
To show how this would work in practice, I took the liberty of adding a new HnswBitVectorsFormat in the sandbox module.
A larger part of the change is a refactor of `RandomAccessVectorValues<T>` to remove the `<T>`. Nothing actually uses it any longer, and we should instead rely on well-defined classes and stop relying on casting with generics (yuck).
This adds more backwards compatibility coverage for scalar quantization. Adding a test that forces the older metadata version to be written and ensures that it can still be read.
Add ability to UnifiedHighlighter to combine matches from multiple fields
to highlight a single field.
FastVectorHighlighter has long had an option to highlight a single field based on
matches from several fields, but UnifiedHighlighter was missing this option.
This change adds that ability via a new function, `UnifiedHighlighter::withMaskedFieldsFunc`,
which sets up a function that, given a field, returns the set of masked fields whose
matches are combined to highlight the given field.
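A hedged usage sketch; the builder wiring and field names below are assumed from the description and may differ in detail:

```java
import java.util.Set;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.uhighlight.UnifiedHighlighter;

// Combine matches from "title" into the highlighting of "body";
// the field names are made up for illustration.
static UnifiedHighlighter maskedHighlighter(IndexSearcher searcher, Analyzer analyzer) {
  return UnifiedHighlighter.builder(searcher, analyzer)
      .withMaskedFieldsFunc(
          field -> field.equals("body") ? Set.of("title") : Set.of())
      .build();
}
```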
You need as many merge threads as necessary to make sure that merges can keep
up with indexing. But this number depends on the data that you are indexing: if
you are only indexing stored fields, merges can copy compressed data directly
and merges are only a small fraction of the total indexing+flushing+merging
cost. But if you primarily index kNN vectors, merging N docs may require about as
much work as flushing N docs. If you add the fact that documents typically go
through multiple rounds of merging, the merging cost can end up being more than
half of the total indexing+flushing+merging cost.
This change proposes to update the default number of merge threads assuming an
intermediate scenario where merges perform about half of the total
indexing+flushing+merging work, i.e. it gives half of the system's threads to
merges.
One goal of this change is to no longer have to configure a custom number of
merge threads on nightly benchmarks, which run on a highly concurrent machine.
This is motivated by the fact that merges can hardly steal all I/O resources
from searches on modern NVMe drives. Merges are still not allowed to use all
CPU since they have a budget for the number of threads which is a fraction of
the number of threads that the host can run.
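The shape of the new default, sketched (the exact formula in ConcurrentMergeScheduler may differ):

```java
static int defaultMaxMergeThreads() {
  int coreCount = Runtime.getRuntime().availableProcessors();
  return Math.max(1, coreCount / 2); // give merges about half the host's threads
}
```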
Closes #13193