This switches the following files to `ReadAdvice.RANDOM`:
- Stored fields data file.
- Term vectors data file.
- HNSW graph.
- Temporary file storing vectors at merge time that we use to construct the
merged HNSW graph.
- Vector data files, including quantized data files.
I hesitated to use `ReadAdvice.RANDOM` on terms, since they have a random access
pattern when running term queries, but a more sequential access pattern when
running multi-term queries. I erred on the conservative side and did not switch
them to `ReadAdvice.RANDOM` for now.
For simplicity, I'm only touching the current codec, not previous codecs. There
are also some known issues:
- These files will keep using a `RANDOM` `ReadAdvice` at merge time. We need
some way for merge instances to get an updated `IOContext`? We have the same
problem with `IOContext#LOAD` today.
- With quantized vectors, raw vectors don't have a random access pattern, but it
  was challenging to give raw vectors a sequential access pattern when quantized
  vectors are present and a random access pattern otherwise. So they assume a
  random access pattern all the time.
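For illustration, a minimal sketch of what the switch looks like at file-open time, assuming a reader that receives the caller's `IOContext` (the class and method names here are hypothetical, not Lucene's exact code):

```java
import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.ReadAdvice;

final class RandomAdviceSketch {
  // Open a vector data file with RANDOM advice layered onto the caller's
  // context, discouraging OS read-ahead for the point lookups that HNSW
  // searches perform. As noted above, merge-time readers currently end up
  // with the same advice, which is one of the known issues.
  static IndexInput openVectorData(Directory dir, String fileName, IOContext context)
      throws IOException {
    return dir.openInput(fileName, context.withReadAdvice(ReadAdvice.RANDOM));
  }
}
```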
When intra-merge parallelism was introduced, the validation that `numWorkers` must be 1 when the executor service is null was removed from `Lucene99HnswVectorsFormat`. However, I forgot to remove that validation from `Lucene99HnswScalarQuantizedVectorsFormat`.
This corrects that mistake, allowing both `Lucene99HnswScalarQuantizedVectorsFormat` and `Lucene99HnswVectorsFormat` to take advantage of the merge scheduler's intra-merge threading.
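For context, the leftover validation was roughly of this shape (a paraphrase, not the exact Lucene source):

```java
import java.util.concurrent.ExecutorService;

final class RemovedCheckSketch {
  // Paraphrased shape of the leftover constructor validation: it rejected
  // numMergeWorkers > 1 whenever no executor was supplied, which defeated
  // the merge scheduler's own intra-merge threading.
  static void validate(int numMergeWorkers, ExecutorService mergeExec) {
    if (numMergeWorkers > 1 && mergeExec == null) {
      throw new IllegalArgumentException(
          "numMergeWorkers > 1 requires an executor service");
    }
  }
}
```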
This switches the default `ReadAdvice` from `NORMAL` to `RANDOM`, which is a better fit for the kind of access pattern that Lucene has. This is expected to reduce page cache thrashing and contention on the page table.
`NORMAL` is still available, but never used by any of the file formats.
* fix regeneration, upgrade icu jar, fully regenerate sources, tests pass
* Upgrade RBBI grammar to match 74.2 (add instructions on how to do this)
* Make use of Script_Extensions property in tokenization
* document and test nfkc_scf form
* update tokenizer for improved text in UAX#24 5.2
* use Indic Syllabic Category for the Myanmar tokenizer instead of relying on Gc
* Add timeout support to graph searches in AbstractKnnVectorQuery
* Also timeout exact searches
* Return partial KNN results
* Add tests for partial KNN results
- Refactor tests to base classes
- Also timeout exact searches in Lucene99HnswVectorsReader
* Add CHANGES.txt entry and fix some comments
---------
Co-authored-by: Kaival Parikh <kaivalp2000@gmail.com>
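A rough usage sketch of the feature, assuming the `IndexSearcher#setTimeout` and `timedOut` APIs; the field name, budget, and `k` are illustrative:

```java
import java.util.concurrent.TimeUnit;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.KnnFloatVectorQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;

final class KnnTimeoutSketch {
  // Run a kNN query under a wall-clock budget. When the budget expires, both
  // the graph search and the exact fallback stop early, and the TopDocs may
  // contain only the partial results gathered so far.
  static TopDocs searchWithBudget(Directory dir, float[] target) throws Exception {
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(50);
      searcher.setTimeout(() -> System.nanoTime() > deadline); // QueryTimeout
      TopDocs hits = searcher.search(new KnnFloatVectorQuery("vector", target, 10), 10);
      if (searcher.timedOut()) {
        // hits may hold partial results here
      }
      return hits;
    }
  }
}
```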
This PR is a culmination of some various streams of work:
- Confidence interval optimizations, which unlocked even smaller quantization bytes.
- The ability to quantize down smaller than just int8 or int7.
- Adding an optimized int4 (half-byte) vector API comparison for dot-product.
The idea of further scalar quantization gives users the choice between:
- further quantizing to gain space by compressing the bits into single-byte values, or
- allowing quantization to give guarantees around maximal values that afford faster vector operations.
I didn't add more Panama vector APIs as I think trying to micro-optimize int4 for anything other than dot-product was a fool's errand. Additionally, I only focused on ARM. I experimented with trying to get better performance on other architectures, but didn't get very far, so I fall back to dotProduct.
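To make the half-byte layout concrete, here is a scalar sketch of a packed int4 dot product; the low-nibble-first packing convention is an assumption for illustration, not Lucene's actual Panama-vectorized kernel:

```java
final class Int4DotSketch {
  // Each byte packs two unsigned 4-bit components, low nibble first, so a
  // 1024-dimensional int4 vector occupies 512 bytes.
  static int dotProductPacked(byte[] a, byte[] b) {
    int sum = 0;
    for (int i = 0; i < a.length; i++) {
      int aLo = a[i] & 0x0F, aHi = (a[i] >> 4) & 0x0F;
      int bLo = b[i] & 0x0F, bHi = (b[i] >> 4) & 0x0F;
      sum += aLo * bLo + aHi * bHi;
    }
    return sum;
  }
}
```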
* Made DocIdsWriter use DISI when reading documents with an IntersectVisitor.
Instead of calling IntersectVisitor.visit for each doc in the
readDelta16 and readInts32 methods, create a DocIdSetIterator
and call IntersectVisitor.visit(DocIdSetIterator).
This seems to make Lucene faster at sorting and range-querying
tasks, the hypothesis being that this is due to fewer virtual calls.
* Spotless
* Changed bulk iteration to use IntsRef instead.
* Clearer comment.
* Decrease cost outside loop
* Added test to ensure the inverse IntsRef path is used.
* Wording improvement in javadoc.
* Added CHANGES.txt entry.
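For illustration, the DISI-based approach from the first commit above (later reworked to use `IntsRef`) could look roughly like this, assuming a decoded block of doc IDs in an `int[]`:

```java
import java.io.IOException;
import org.apache.lucene.index.PointValues;
import org.apache.lucene.search.DocIdSetIterator;

final class BulkVisitSketch {
  // Hand the whole decoded block to the visitor in one call instead of one
  // virtual IntersectVisitor.visit(int) call per document.
  static void visitBlock(int[] docs, int count, PointValues.IntersectVisitor visitor)
      throws IOException {
    visitor.visit(
        new DocIdSetIterator() {
          private int i = -1;

          @Override
          public int docID() {
            if (i < 0) return -1;
            return i < count ? docs[i] : NO_MORE_DOCS;
          }

          @Override
          public int nextDoc() {
            return ++i < count ? docs[i] : NO_MORE_DOCS;
          }

          @Override
          public int advance(int target) throws IOException {
            return slowAdvance(target);
          }

          @Override
          public long cost() {
            return count;
          }
        });
  }
}
```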
* Binary search the entries when all suffixes have the same length in a leaf block.
* add comment on allEqual.
* BackwardsCompatibility: keep the same logic on fillTerm and SeekStatus(NOT_FOUND, END).
* Update comments: modify scan to binary search.
* Add unit test for binarySearchTermLeaf.
* Format code.
* Assert the value of termsEnum.term() is correct after seeking.
* Add CHANGES entry.
* Clarify "leaf block _of the terms dict_"
* Set suffixesReader's position.
* Advance to the greater term if binary search ended at the lesser term.
* Assert termsEnum's position after seeking.
* Tidy.
* Advance to the greater term if binary search ended at the lesser term: nextEnt plus 1.
* Advance to the greater term if binary search ended at the lesser term and a greater term exists.
* Add test case: target greater than the last entry of the matched block.
* Move test case where the target is greater than the last entry of the matched block to TestLucene90PostingsFormat.
* Move test case where the target is greater than the last entry of the matched block to TestLucene99PostingsFormat.
* Clarify code.
* Replace ternary with verbose if.
* Replace seekExact with seekCeil.
* Replace division by 2 with logical right shift.
* Remove assert ste.termExists.
* Clarify code.
* Remove stale change entry.
* Fix comment.
---------
Co-authored-by: Adrien Grand <jpountz@gmail.com>
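To illustrate the first commit above: when all suffixes in a leaf block share the same length, the block forms a sorted fixed-width table that can be binary searched instead of scanned. A standalone sketch, with the array layout assumed for illustration:

```java
import java.util.Arrays;

final class SuffixBinarySearchSketch {
  // `suffixes` holds `count` sorted entries of `width` bytes each. Returns
  // the index of the matching entry, or -(insertionPoint + 1) when the
  // target is absent; the caller then seeks to the next greater term,
  // matching the "advance to the greater term" commits above.
  static int binarySearch(byte[] suffixes, int count, int width, byte[] target) {
    int lo = 0, hi = count - 1;
    while (lo <= hi) {
      int mid = (lo + hi) >>> 1; // logical right shift, as in the commit above
      int off = mid * width;
      int cmp = Arrays.compareUnsigned(suffixes, off, off + width, target, 0, target.length);
      if (cmp < 0) {
        lo = mid + 1;
      } else if (cmp > 0) {
        hi = mid - 1;
      } else {
        return mid;
      }
    }
    return -(lo + 1);
  }
}
```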
Correct the estimate for the singleton to return `0` and use a custom accumulator in tests to
fix assertions. I tried not doing this in #13232, but it turns out we need a little complexity here,
since the singleton is recognized by the `RamUsageTester` insofar as it is only counted
once if it shows up repeatedly in an array.
Fixes #13249
I repeatedly saw some test failures related to `TestParentBlockJoin[Byte|Float]KnnVectorQuery#testVectorEncodingMismatch`. This commit fixes those test failures and actually checks the field type.
Both the KnnWriters and FieldInfo keep track of the vector similarity used by a given field. This commit ensures they are the same and uses the FieldInfo one (while these are separate enums, their values are exactly the same).
Size 256 is very common here through the monotonic long values default
page size. In ES we're seeing many MB (O(10M)) of duplicate instances of this
size relatively quickly.
=> adding a singleton for it to save some heap
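The shape of such a fix, with hypothetical names (not the actual Lucene classes):

```java
// Intern the instance for the overwhelmingly common size (256, the default
// monotonic long values page size) instead of allocating a duplicate per use.
record PageHolder(int size) {
  private static final PageHolder SIZE_256 = new PageHolder(256);

  static PageHolder of(int size) {
    return size == 256 ? SIZE_256 : new PageHolder(size);
  }
}
```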
This updates `IndexWriter` to only call `OneMerge#reorder` when it has a chance
to preserve the block structure, i.e. either there are no blocks or blocks are
identified by a parent field.
Furthermore, `MockRandomMergePolicy` is updated to preserve the block structure
when a parent field is configured.
Merging `IOContext`s use a `SEQUENTIAL` `ReadAdvice`. However, some file
formats hardcode `IOContext.LOAD` for some of their files, which silences the
whole merging context, in particular the `SEQUENTIAL` `ReadAdvice`.
This PR switches file formats to
`ioContext.withReadAdvice(ReadAdvice.RANDOM_PRELOAD)` so that merges will use a
`SEQUENTIAL` `ReadAdvice` while searches will use a `RANDOM_PRELOAD`
`ReadAdvice`.
This is not a huge deal for `RANDOM_PRELOAD`, which is only used for very small
files. However, this change becomes more relevant for the new `RANDOM`
`ReadAdvice` as we would like merges to keep using a `SEQUENTIAL` `ReadAdvice`.
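A sketch of the resulting open pattern (the method name is illustrative):

```java
import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.ReadAdvice;

final class PreloadAdviceSketch {
  // Rather than hardcoding IOContext.LOAD, which silences the merging
  // context, layer only the advice onto the caller's context: per the
  // description above, merges then keep their SEQUENTIAL advice while
  // searches get RANDOM_PRELOAD.
  static IndexInput openSmallFile(Directory dir, String name, IOContext ioContext)
      throws IOException {
    return dir.openInput(name, ioContext.withReadAdvice(ReadAdvice.RANDOM_PRELOAD));
  }
}
```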
Having a single block of all zeros is a fairly common case that uses
a lot of heap for duplicate instances in some use-cases in ES.
=> return a singleton for it to avoid the duplication
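Illustrative shape of the change, with hypothetical names and block size:

```java
final class ZeroBlockSketch {
  private static final int BLOCK_SIZE = 128; // illustrative block size
  private static final long[] ZEROES = new long[BLOCK_SIZE]; // shared, treat as read-only

  // Return the shared all-zero block when the decoded block contains only
  // zeros, avoiding one duplicate long[] per such block.
  static long[] dedupe(long[] decoded) {
    for (long v : decoded) {
      if (v != 0) {
        return decoded;
      }
    }
    return ZEROES;
  }
}
```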
Romanian uses s and t with comma below (ș/ț), but for a long time those weren't available, so s and t with cedilla (ş/ţ) were used instead. Both are still in use, but the comma forms are much more common now. Both should be supported in stopword lists.
The variable-gaps terms format uses the legacy storage layout of storing
metadata at the end of the index file, and storing the start pointer of the
metadata as the last 8 bytes of the index files (just before the footer). This
forces an awkward access pattern at open time when we first need to seek
towards the end to check that a footer is present, then seek some more bytes
backwards to read metadata, and finally read the content of the index that sits
before metadata.
To fix this, metadata and index data are now split into different files. This
way, both files have a clean sequential and read-once access pattern, and can
take advantage of the `ChecksumIndexInput` abstraction for checksum validation.
This further helps clean up `IOContext` by removing the ability to set
`readOnce` to `true` on an existing `IOContext`.
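With the split, opening metadata becomes the usual sequential, read-once pattern; a sketch, with an illustrative file name and the per-field reading logic elided:

```java
import java.io.IOException;
import org.apache.lucene.codecs.CodecUtil;
import org.apache.lucene.store.ChecksumIndexInput;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;

final class MetaReadSketch {
  // Consume the metadata file front to back under a ChecksumIndexInput and
  // verify its checksum once at the end, even if reading failed midway.
  static void readMetadata(Directory dir, String metaName) throws IOException {
    try (ChecksumIndexInput meta = dir.openChecksumInput(metaName, IOContext.READONCE)) {
      Throwable priorE = null;
      try {
        // ... read header and per-field metadata sequentially here ...
      } catch (Throwable t) {
        priorE = t;
      } finally {
        CodecUtil.checkFooter(meta, priorE);
      }
    }
  }
}
```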
* upgrade snowball to 26db1ab9adbf437f37a6facd3ee2aad1da9eba03
* add back-compat-hack to the factory, too
* remove open of the Irish package now that we don't have our own stopwords file here anymore
* CHANGES / MIGRATE
This replaces the `load`, `randomAccess` and `readOnce` flags with a
`ReadAdvice` enum, whose values are aligned with the values accepted by
(f|m)advise.
Closes #13211
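For reference, a paraphrased shape of the enum (see `org.apache.lucene.store.ReadAdvice` for the authoritative version); the POSIX mappings in the comments reflect the alignment the description mentions:

```java
// Paraphrased shape of the new enum; comments give the closest POSIX advice.
public enum ReadAdvice {
  NORMAL,         // no special access pattern (POSIX_FADV_NORMAL)
  RANDOM,         // random access, read-ahead is wasteful (POSIX_FADV_RANDOM)
  SEQUENTIAL,     // sequential access, read ahead aggressively (POSIX_FADV_SEQUENTIAL)
  RANDOM_PRELOAD  // random access to a small file loaded up front
}
```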