This updates the int4 dot-product comparison with an optimized path for when one of the vectors is compressed (the most common search case). This change actually makes compressed search on ARM faster than uncompressed. On AVX512/256 it is still slightly slower than uncompressed, but still much faster with this optimization than before (which eagerly decompressed).
This optimization is tightly tied to how the vectors are compressed and stored, so I added a new scorer that lives within the lucene99 codec.
So, this gives us an 8x reduction over float32, queries that are well over 2x faster than float32, and no need to rerank as the recall and accuracy are excellent.
Improve MissingDoclet linter to check records correctly:
- exclude default ctors
- exclude accessor methods (like with enums)
- on "method" level checking also check that every record component has an @param tag
Background:
Historically IndexWriter treated OutOfMemoryError specially, for defensive
reasons. It was expanded to VirtualMachineError, to try to play it safe
in similar disastrous circumstances.
We should treat any Error as a tragedy, as it isn't an Exception, and it
isn't something a "reasonable" application should catch. IndexWriter
should be reasonable. See #7049 for some of the reasoning.
We can't pretend this will detect any possible scenario that might cause
harm, e.g. a jvm bug might simply miscompile some code and cause silent
corruption. But we should try harder by playing by the rules.
Closes #13275
Closes #7049
* Change MASKS from int[] to byte[], and assign it with left shifts (see the sketch after this list).
* Only set the first byte value for tmpUTF8.
* Change value type from int to byte.
* Remove stale comment.
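As a rough illustration of the first point, masks can be declared directly as bytes and built with left shifts; the exact mask values used by Lucene are an assumption here:

```java
// Hypothetical sketch: single-byte bit masks assigned with a left shift,
// instead of storing them as ints and casting later.
static final byte[] MASKS = new byte[8];

static {
  for (int i = 0; i < MASKS.length; i++) {
    MASKS[i] = (byte) (1 << i); // 0x01, 0x02, 0x04, ..., (byte) 0x80
  }
}
```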
Instead of making a separate thing pluggable inside of the FieldFormat, this instead keeps the vector similarities as they are, but allows a custom scorer to be provided to the FlatVector storage used by HNSW.
This idea is akin to the compression extensions we have, but in this case it's for vector scorers.
To show how this would work in practice, I took the liberty of adding a new HnswBitVectorsFormat in the sandbox module.
A larger part of the change is a refactor of the `RandomAccessVectorValues<T>` to remove the `<T>`. Nothing actually uses that any longer, and we should instead rely on well defined classes and stop relying on casting with generics (yuck).
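As a rough sketch of the extension-point idea (hypothetical interfaces for illustration, not the exact Lucene API):

```java
// Hypothetical shape of a pluggable scorer over flat (non-graph) vector storage.
// The storage stays flat; only the scoring strategy is injected.
interface FlatVectorScorer {
  /** Returns a scorer that compares stored vectors against the given query vector. */
  VectorScorer scorerFor(float[] queryVector) throws java.io.IOException;
}

interface VectorScorer {
  /** Scores the stored vector at the given ordinal against the query vector. */
  float score(int ordinal) throws java.io.IOException;
}
```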
This adds more backwards-compatibility coverage for scalar quantization: a test that forces the older metadata version to be written and ensures that it can still be read.
Add ability to UnifiedHighlighter to combine matches from multiple fields
to highlight a single field.
FastVectorHighlighter has long had an option to highlight a single field
based on matches from several fields, but UnifiedHighlighter was missing this option.
This adds that ability with a new function: `UnifiedHighlighter::withMaskedFieldsFunc`,
which sets up a function that, given a field, returns the set of masked fields whose
matches are combined to highlight the given field.
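A rough usage sketch follows; the builder wiring and the exact method signature are assumptions based on the description above, and the field names are illustrative:

```java
import java.util.Set;
import java.util.function.Function;
import org.apache.lucene.search.uhighlight.UnifiedHighlighter;

// Highlight "title" using matches from both "title" and "title_exact".
Function<String, Set<String>> maskedFieldsFunc =
    field -> "title".equals(field) ? Set.of("title", "title_exact") : Set.of(field);

UnifiedHighlighter highlighter =
    UnifiedHighlighter.builder(searcher, analyzer)
        .withMaskedFieldsFunc(maskedFieldsFunc)
        .build();

String[] snippets = highlighter.highlight("title", query, topDocs);
```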
You need as many merge threads as necessary to make sure that merges can keep
up with indexing. But this number depends on the data that you are indexing: if
you are only indexing stored fields, merges can copy compressed data directly
and merges are only a small fraction of the total indexing+flushing+merging
cost. But if you primarily index knn vectors, merging N docs may require about as
much work as flushing N docs. If you add the fact that documents typically go
through multiple rounds of merging, the merging cost can end up being more than
half of the total indexing+flushing+merging cost.
This change proposes to update the default number of merge threads assuming an
intermediate scenario where merges perform about half of the total
indexing+flushing+merging work, i.e. it gives half of the system's threads to
merges.
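For users who still want to pin merge concurrency themselves rather than rely on the default, a minimal sketch (the values here are illustrative, not recommendations):

```java
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriterConfig;

IndexWriterConfig config = new IndexWriterConfig(analyzer);
ConcurrentMergeScheduler scheduler = new ConcurrentMergeScheduler();
// Explicitly cap the number of concurrent merges and merge threads
// instead of using the automatic defaults discussed above.
scheduler.setMaxMergesAndThreads(/* maxMergeCount= */ 6, /* maxThreadCount= */ 4);
config.setMergeScheduler(scheduler);
```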
One goal of this change is to no longer have to configure a custom number of
merge threads on nightly benchmarks, which run on a highly concurrent machine.
This is motivated by the fact that merges can hardly steal all I/O resources
from searches on modern NVMe drives. Merges are still not allowed to use all
CPU since they have a budget for the number of threads which is a fraction of
the number of threads that the host can run.
Closes #13193
Our per-field vector and doc-values readers use `TreeMap`s but don't rely on
the iteration order, so these `TreeMap`s can be replaced with more
CPU/RAM-efficient `HashMap`s.
The per-field postings reader stays on a `TreeMap` since it relies on the
iteration order.
This fixes index sorting to pass the correct `IOContext` to stored fields and
term vectors writers when index sorting is enabled. This is important for
things like `NRTCachingDirectory`.
This is a large change, refactoring most of the taxonomy facets code and changing internal behaviour, without changing the API. There are specific API changes this sets us up to do later, e.g. retrieving counts from aggregation facets.
1. Move most of the responsibility from TaxonomyFacets implementations to TaxonomyFacets itself. This reduces code duplication and enables future development. Addresses genericity issue mentioned in #12553.
2. As a consequence, introduce sparse values to FloatTaxonomyFacets, which previously used dense values always. This issue is part of #12576.
3. Compute counts for all taxonomy facets always, which enables us to add an API to retrieve counts for association facets in the future. Addresses #11282.
4. As a consequence of having counts, we can check whether we encountered a label while faceting (count > 0), whereas previously we relied on the aggregation value being positive. Closes #12585.
5. Introduce the idea of doing multiple aggregations in one go, with association facets doing the aggregation they were already doing, plus a count. We can extend to an arbitrary number of aggregations, as suggested in #12546.
6. Don't change the API. The only change in behaviour users should notice is the fix for non-positive aggregation values, which were previously discarded.
7. Add tests which were missing for sparse/dense values and non-positive aggregations.
This test is set up to reproduce complex failures from TestRandomChains,
e.g. it has SopFilter and other tools for debugging. But it still
resides in the analysis/common module and currently can't be used to
debug any TestRandomChains failures that use other modules (e.g. icu).
relates to #13271
This switches the following files to `ReadAdvice.RANDOM`:
- Stored fields data file.
- Term vectors data file.
- HNSW graph.
- Temporary file storing vectors at merge time that we use to construct the
merged HNSW graph.
- Vector data files, including quantized data files.
I hesitated to use `ReadAdvice.RANDOM` on terms, since they have a random access
pattern when running term queries, but a more sequential access pattern when
running multi-term queries. I erred on the conservative side and did not switch
them to `ReadAdvice.RANDOM` for now.
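A minimal sketch of how a reader might request random-access advice when opening one of these files; it assumes an `IOContext#withReadAdvice`-style copy method and an illustrative file name, and the exact API may differ:

```java
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.ReadAdvice;

// Assumption: IOContext exposes withReadAdvice(...) to derive a new context.
IOContext randomAccess = IOContext.DEFAULT.withReadAdvice(ReadAdvice.RANDOM);
IndexInput vectorData = directory.openInput("_0.vec", randomAccess);
```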
For simplicity, I'm only touching the current codec, not previous codecs. There
are also some known issues:
- These files will keep using a `RANDOM` `ReadAdvice` at merge time. We need
some way for merge instances to get an updated `IOContext`? We have the same
problem with `IOContext#LOAD` today.
- With quantized vectors, raw vectors don't have a random access pattern, but it
was challenging to give raw vectors a sequential access pattern when there
are quantized vectors and a random access pattern otherwise. So they assume a
random access pattern all the time.
When intra-merge parallelism was introduced, the validation that numWorkers must be 1 when the executor service is null was removed from Lucene99HnswVectorsFormat. However, I forgot to remove that validation from Lucene99HnswScalarQuantizedVectorsFormat.
This corrects that mistake, allowing Lucene99HnswScalarQuantizedVectorsFormat and Lucene99HnswVectorsFormat to take advantage of the merge scheduler's intra-merge threading.
This switches the default `ReadAdvice` from `NORMAL` to `RANDOM`, which is a better fit for the kind of access pattern that Lucene has. This is expected to reduce page cache thrashing and contention on the page table.
`NORMAL` is still available, but never used by any of the file formats.
* fix regeneration, upgrade icu jar, fully regenerate sources, tests pass
* Upgrade RBBI grammar to match 74.2 (add instructions on how to do this)
* Make use of Script_Extensions property in tokenization
* document and test nfkc_scf form
* update tokenizer for improved text in UAX#24 5.2
* use the Indic Syllabic Category property for the Myanmar tokenizer instead of relying on Gc
* Add timeout support to graph searches in AbstractKnnVectorQuery (see the usage sketch after this list)
* Also timeout exact searches
* Return partial KNN results
* Add tests for partial KNN results
- Refactor tests to base classes
- Also timeout exact searches in Lucene99HnswVectorsReader
* Add CHANGES.txt entry and fix some comments
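To illustrate how a caller might use the timeout and observe partial results (a hedged sketch; the timeout value, field name, and index setup are illustrative):

```java
import org.apache.lucene.index.QueryTimeoutImpl;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.KnnFloatVectorQuery;
import org.apache.lucene.search.TopDocs;

IndexSearcher searcher = new IndexSearcher(reader);
// Abort the graph (or exact) search once the time budget is exceeded.
searcher.setTimeout(new QueryTimeoutImpl(50 /* milliseconds */));

TopDocs topDocs = searcher.search(new KnnFloatVectorQuery("vector", queryVector, 10), 10);
if (searcher.timedOut()) {
  // topDocs holds the partial KNN results collected before the timeout fired.
}
```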
---------
Co-authored-by: Kaival Parikh <kaivalp2000@gmail.com>
This PR is a culmination of some various streams of work:
- Confidence interval optimizations, which unlocked even smaller quantization bytes.
- The ability to quantize down smaller than just int8 or int7.
- Adding an optimized int4 (half-byte) vector API comparison for dot-product.
The idea of further scalar quantization gives users the choice between:
- Further quantizing to save space by compressing the bits into single-byte values
- Or allowing quantization to give guarantees around maximal values that afford faster vector operations.
I didn't add more panama vector APIs, as I think trying to micro-optimize int4 for anything other than dot-product is a fool's errand. Additionally, I only focused on ARM. I experimented with trying to get better performance on other architectures, but didn't get very far, so I fall back to dotProduct.
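As a toy illustration of what quantizing below int8 means numerically (plain Java, not the Lucene implementation; the quantile bounds stand in for the confidence-interval-derived min/max):

```java
// Map a float into one of 2^bits buckets within [minQuantile, maxQuantile].
// With bits = 4 this yields values 0..15, i.e. a half-byte per dimension.
static int quantize(float value, float minQuantile, float maxQuantile, int bits) {
  int levels = (1 << bits) - 1; // e.g. 15 for int4
  float clamped = Math.max(minQuantile, Math.min(maxQuantile, value));
  float normalized = (clamped - minQuantile) / (maxQuantile - minQuantile);
  return Math.round(normalized * levels);
}
```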
* Made DocIdsWriter use DISI when reading documents with an IntersectVisitor.
Instead of calling IntersectVisitor.visit for each doc in the
readDelta16 and readInts32 methods, create a DocIdSetIterator
and call IntersectVisitor.visit(DocIdSetIterator) instead.
This seems to make Lucene faster at sorting and range querying
tasks, the hypothesis being that it is due to fewer virtual calls
(see the sketch after this list).
* Spotless
* Changed bulk iteration to use IntsRef instead.
* Clearer comment.
* Decrease cost outside loop
* Added a test to make sure the inverse IntsRef path is used.
* Wording improvement in javadoc.
* Added CHANGES.txt entry.
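A simplified sketch of the bulk-visit idea from the first item above (not the actual DocIdsWriter code; the array-backed iterator is a hypothetical helper):

```java
import java.io.IOException;
import org.apache.lucene.index.PointValues.IntersectVisitor;
import org.apache.lucene.search.DocIdSetIterator;

final class BulkVisitSketch {

  // Wraps an already-decoded, ascending array of doc ids in a DocIdSetIterator,
  // so the visitor consumes them in one bulk call rather than one virtual call per doc.
  static final class IntArrayDocIdSetIterator extends DocIdSetIterator {
    private final int[] docs;
    private final int length;
    private int i = -1;

    IntArrayDocIdSetIterator(int[] docs, int length) {
      this.docs = docs;
      this.length = length;
    }

    @Override
    public int docID() {
      return i < 0 ? -1 : i >= length ? NO_MORE_DOCS : docs[i];
    }

    @Override
    public int nextDoc() {
      return ++i >= length ? NO_MORE_DOCS : docs[i];
    }

    @Override
    public int advance(int target) throws IOException {
      return slowAdvance(target);
    }

    @Override
    public long cost() {
      return length;
    }
  }

  // Bulk visit instead of calling visitor.visit(doc) once per document.
  static void visitAll(IntersectVisitor visitor, int[] docs, int length) throws IOException {
    visitor.visit(new IntArrayDocIdSetIterator(docs, length));
  }
}
```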