Our per-field vector and doc-values readers use `TreeMap`s but don't rely on
the iteration order, so these `TreeMap`s can be replaced with more
CPU/RAM-efficient `HashMap`s.
The per-field postings reader stays on a `TreeMap` since it relies on the
iteration order.
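A minimal sketch of the swap (the class and field names are illustrative, not the exact Lucene code; `Object` stands in for the reader types):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

class PerFieldReaders {
  // Per-field vector/doc-values lookup: iteration order is never used, so a
  // HashMap is enough and cheaper in CPU/RAM than a TreeMap.
  final Map<String, Object> vectorReaders = new HashMap<>(); // was: new TreeMap<>()

  // The postings reader relies on sorted iteration order, so it stays on TreeMap.
  final Map<String, Object> postingsReaders = new TreeMap<>();
}
```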
This fixes index sorting to pass the correct `IOContext` to the stored fields
and term vectors writers. This is important for things like
`NRTCachingDirectory`.
This is a large change, refactoring most of the taxonomy facets code and changing internal behaviour without changing the API. It also sets us up for specific API changes later, e.g. retrieving counts from aggregation facets.
1. Move most of the responsibility from TaxonomyFacets implementations to TaxonomyFacets itself. This reduces code duplication and enables future development. Addresses genericity issue mentioned in #12553.
2. As a consequence, introduce sparse values to FloatTaxonomyFacets, which previously always used dense values. This is part of #12576.
3. Compute counts for all taxonomy facets always, which enables us to add an API to retrieve counts for association facets in the future. Addresses #11282.
4. As a consequence of having counts, we can check whether we encountered a label while faceting (count > 0), while previously we relied on the aggregation value to be positive. Closes #12585.
5. Introduce the idea of doing multiple aggregations in one go, with association facets doing the aggregation they were already doing, plus a count. We can extend to an arbitrary number of aggregations, as suggested in #12546.
6. Don't change the API. The only change in behaviour users should notice is the fix for non-positive aggregation values, which were previously discarded.
7. Add tests which were missing for sparse/dense values and non-positive aggregations.
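A hedged sketch of the counts-plus-aggregation idea (names are hypothetical, not the actual TaxonomyFacets code):

```java
// Each ordinal accumulates a count alongside its association value, so
// "did we see this label" becomes count > 0 and no longer depends on the
// aggregated value being positive.
class CountingAggregator {
  final int[] counts;
  final float[] values;

  CountingAggregator(int taxonomySize) {
    counts = new int[taxonomySize];
    values = new float[taxonomySize];
  }

  void aggregate(int ord, float value) {
    counts[ord]++;        // always count, even for zero/negative values
    values[ord] += value; // the association aggregation (a sum, here)
  }

  boolean seen(int ord) {
    return counts[ord] > 0;
  }
}
```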
This test is set up to reproduce complex failures from TestRandomChains,
e.g. it has SopFilter and other tools for debugging. But it still
resides in the analysis/common module and currently can't be used to
debug any TestRandomChains failures that use other modules (e.g. icu).
Relates to #13271.
This switches the following files to `ReadAdvice.RANDOM`:
- Stored fields data file.
- Term vectors data file.
- HNSW graph.
- Temporary file storing vectors at merge time that we use to construct the
merged HNSW graph.
- Vector data files, including quantized data files.
I hesitated to use `ReadAdvice.RANDOM` on terms, since they have a random
access pattern when running term queries, but a more sequential access pattern
when running multi-term queries. I erred on the conservative side and did not
switch them to `ReadAdvice.RANDOM` for now.
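As a rough sketch of what the switch looks like at an open-input call site (simplified; `directory`, `context`, and the file name stand in for each reader's actual state):

```java
// The advice rides along on the IOContext handed to Directory#openInput;
// formats ask for RANDOM on files with a random access pattern.
IndexInput vectorData =
    directory.openInput(vectorDataFileName, context.withReadAdvice(ReadAdvice.RANDOM));
```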
For simplicity, I'm only touching the current codec, not previous codecs. There
are also some known issues:
- These files will keep using a `RANDOM` `ReadAdvice` at merge time. We need
some way for merge instances to get an updated `IOContext`? We have the same
problem with `IOContext#LOAD` today.
- With quantized vectors, raw vectors don't have random access pattern, but it
was challenging to give raw vectors a sequential access pattern when there
are quantized vectors and a random access pattern otherwise. So they assume a
random access pattern all the time.
When intra-merge parallelism was introduced, the validation that `numWorkers` must be 1 when the executor service is null was removed from `Lucene99HnswVectorsFormat`. However, I forgot to remove that validation from `Lucene99HnswScalarQuantizedVectorsFormat`.
This corrects that mistake, allowing `Lucene99HnswScalarQuantizedVectorsFormat` and `Lucene99HnswVectorsFormat` to take advantage of the merge scheduler's intra-merge threading.
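A hedged usage sketch (the constructor signatures are assumed from the 9.x codecs; check the javadocs before relying on them):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Both formats can now share an executor for intra-merge parallelism:
// numMergeWorkers > 1 with a non-null executor is accepted by both.
ExecutorService mergeExec = Executors.newFixedThreadPool(4);
KnnVectorsFormat hnsw =
    new Lucene99HnswVectorsFormat(16, 100, 4, mergeExec);
KnnVectorsFormat quantized =
    new Lucene99HnswScalarQuantizedVectorsFormat(16, 100, 4, 7, false, null, mergeExec);
```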
This switches the default `ReadAdvice` from `NORMAL` to `RANDOM`, which is a better fit for the kind of access pattern that Lucene has. This is expected to reduce page cache thrashing and contention on the page table.
`NORMAL` is still available, but never used by any of the file formats.
* fix regeneration, upgrade icu jar, fully regenerate sources, tests pass
* Upgrade RBBI grammar to match 74.2 (add instructions on how to do this)
* Make use of Script_Extensions property in tokenization
* document and test nfkc_scf form
* update tokenizer for improved text in UAX#24 5.2
* use the Indic_Syllabic_Category property for the Myanmar tokenizer instead of relying on Gc
* Add timeout support to graph searches in AbstractKnnVectorQuery
* Also timeout exact searches
* Return partial KNN results
* Add tests for partial KNN results
- Refactor tests to base classes
- Also timeout exact searches in Lucene99HnswVectorsReader
* Add CHANGES.txt entry and fix some comments
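A hedged usage sketch of the timeout path (assumes `IndexSearcher#setTimeout` with `QueryTimeoutImpl`; the field name and time budget are illustrative):

```java
IndexSearcher searcher = new IndexSearcher(reader);
searcher.setTimeout(new QueryTimeoutImpl(50)); // 50 ms budget for the whole search
TopDocs hits = searcher.search(new KnnFloatVectorQuery("vector", queryVector, 10), 10);
if (searcher.timedOut()) {
  // The graph (or exact) search stopped early: hits may be partial
  // rather than empty.
}
```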
---------
Co-authored-by: Kaival Parikh <kaivalp2000@gmail.com>
This PR is the culmination of several streams of work:
- Confidence interval optimizations, which unlocked even smaller quantization bytes.
- The ability to quantize down smaller than just int8 or int7.
- Adding an optimized int4 (half-byte) vector API comparison for dot-product.
The idea of further scalar quantization gives users the choice between:
- Further quantizing to gain space through compressing the bits into single byte values
- Or allowing quantization to give guarantees around maximal values that afford faster vector operations.
I didn't add more Panama vector APIs as I think trying to micro-optimize int4 for anything other than dot-product was a fool's errand. Additionally, I only focused on ARM. I experimented with trying to get better performance on other architectures, but didn't get very far, so I fall back to dotProduct.
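To make the half-byte idea concrete, here is a plain-Java sketch of a dot product over packed int4 vectors (one byte carries two 4-bit components; the packing layout is assumed, and this is only an illustration, not the Panama-vectorized implementation):

```java
static int int4DotProductPacked(byte[] a, byte[] b) {
  int sum = 0;
  for (int i = 0; i < a.length; i++) {
    // low nibble of each byte
    sum += (a[i] & 0x0F) * (b[i] & 0x0F);
    // high nibble of each byte; the mask discards the sign extension
    sum += ((a[i] >> 4) & 0x0F) * ((b[i] >> 4) & 0x0F);
  }
  return sum;
}
```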
* Made DocIdsWriter use DISI when reading documents with an IntersectVisitor.
Instead of calling IntersectVisitor.visit for each doc in the
readDelta16 and readInts32 methods, create a DocIdSetIterator
and call IntersectVisitor.visit(DocIdSetIterator) instead.
This seems to make Lucene faster at sorting and range querying
tasks - the hypothesis being that it is due to fewer virtual calls.
* Spotless
* Changed bulk iteration to use IntsRef instead.
* Clearer comment.
* Decrease cost outside loop
* Added test to make the inverse IntsRef be used.
* Wording improvement in javadoc.
* Added CHANGES.txt entry.
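A hedged sketch of why the bulk entry point helps (an illustrative visitor, not the DocIdsWriter code): the reader can hand a whole block of doc IDs to `visit(DocIdSetIterator)` in one virtual call instead of one call per document.

```java
import java.io.IOException;
import org.apache.lucene.index.PointValues;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

class CollectingVisitor implements PointValues.IntersectVisitor {
  final FixedBitSet bits;

  CollectingVisitor(int maxDoc) {
    bits = new FixedBitSet(maxDoc);
  }

  @Override
  public void visit(int docID) {
    bits.set(docID); // one virtual call per document
  }

  @Override
  public void visit(DocIdSetIterator iterator) throws IOException {
    // Bulk path: one virtual call for a whole block of matching docs.
    for (int doc = iterator.nextDoc();
        doc != DocIdSetIterator.NO_MORE_DOCS;
        doc = iterator.nextDoc()) {
      bits.set(doc);
    }
  }

  @Override
  public void visit(int docID, byte[] packedValue) {
    bits.set(docID); // cells that cross the query also pass values
  }

  @Override
  public PointValues.Relation compare(byte[] minPackedValue, byte[] maxPackedValue) {
    return PointValues.Relation.CELL_CROSSES_QUERY; // visit everything, for illustration
  }
}
```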
* Binary search the entries when all suffixes have the same length in a leaf block.
* add comment on allEqual.
* BackwardsCompatibility: keep the same logic on fillTerm and SeekStatus(NOT_FOUND, END).
* Update comments: modify scan to binary search.
* Add unit test for binarySearchTermLeaf.
* Format code.
* Assert the value of termsEnum.term() is correct after seeking.
* Add CHANGES entry.
* Clarify "leaf block _of the terms dict_"
* Set suffixesReader's position.
* Advance to the greater term if binary search ended at the lesser term.
* Assert termsEnum's position after seeking.
* Tidy.
* Advance to the greater term if binary search ended at the lesser term: nextEnt plus 1.
* Advance to the greater term if binary search ended at the lesser term and the greater term exists.
* Add test case: target greater than the last entry of the matched block.
* Move test case that target greater than the last entry of the matched block to TestLucene90PostingsFormat.
* Move test case for target greater than the last entry of the matched block to TestLucene99PostingsFormat
* Clarify code.
* Replace ternary with verbose if.
* Replace seekExact with seekCeil.
* Replace division by 2 with logical right shift.
* Remove assert ste.termExists.
* Clarify code.
* Remove stale change entry.
* Fix comment.
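A hedged sketch of the core trick (not the actual `SegmentTermsEnum` code): when every suffix in a leaf block has the same length, entry `i` starts at offset `i * suffixLength`, so the block supports binary search instead of a linear scan, with seekCeil-style "advance to the greater term" behaviour when there is no exact match.

```java
import java.util.Arrays;

// Returns the index of the matching entry, or the index of the first entry
// greater than target (== numEntries if target is greater than all entries).
static int binarySearchEqualLengthSuffixes(
    byte[] block, int numEntries, int suffixLength, byte[] target) {
  int lo = 0, hi = numEntries - 1;
  while (lo <= hi) {
    int mid = (lo + hi) >>> 1; // logical right shift instead of division by 2
    int start = mid * suffixLength;
    int cmp = Arrays.compareUnsigned(
        block, start, start + suffixLength, target, 0, target.length);
    if (cmp < 0) {
      lo = mid + 1;
    } else if (cmp > 0) {
      hi = mid - 1;
    } else {
      return mid; // exact match
    }
  }
  return lo; // first greater entry, if it exists
}
```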
---------
Co-authored-by: Adrien Grand <jpountz@gmail.com>
Correct estimate for singleton to return `0` and use a custom accumulator in tests to
fix assertions. Tried not doing this in #13232, but it turns out we need a little complexity here
since the singleton is recognized by the `RamUsageTester` insofar as it is only counted
once if it shows up repeatedly in an array or so.
Fixes #13249
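A hedged sketch of the accounting fix (names hypothetical): the shared singleton reports `0` so totals don't count the same bytes once per use site.

```java
@Override
public long ramBytesUsed() {
  // The singleton is shared process-wide; charging it to every holder
  // would overcount memory that exists only once.
  return this == SINGLETON ? 0 : BASE_RAM_BYTES_USED + storageBytes;
}
```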
I repeatedly saw some test failures related to `TestParentBlockJoin[Byte|Float]KnnVectorQuery#testVectorEncodingMismatch`. This commit fixes those test failures and actually checks the field type.
Both the KNN writers and `FieldInfo` keep track of the vector similarity used by a given field. This commit ensures they are the same and uses the `FieldInfo` one (these are enums with exactly the same values).
Size 256 is very common here through the monotonic long values' default
page size. In ES we're seeing many MB, O(10M), of duplicate instances of this
size relatively quickly.
=> adding a singleton for it to save some heap
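The pattern, as a hedged illustration (the type is hypothetical; sharing is only safe because the instance is immutable):

```java
final class Meta {
  // The size-256 case is common enough (monotonic long values' default page
  // size) that a single shared immutable instance saves real heap.
  private static final Meta SIZE_256 = new Meta(256);

  static Meta of(int size) {
    return size == 256 ? SIZE_256 : new Meta(size);
  }

  private final int size;

  private Meta(int size) {
    this.size = size;
  }
}
```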
This updates `IndexWriter` to only call `OneMerge#reorder` when it has a chance
to preserve the block structure, ie. either there are no blocks or blocks are
identified by a parent field.
Furthermore, `MockRandomMergePolicy` is updated to preserve the block structure
when a parent field is configured.
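A hedged usage sketch (assumes `IndexWriterConfig#setParentField`; the field name is illustrative):

```java
IndexWriterConfig config = new IndexWriterConfig(analyzer);
// With a parent field configured, blocks stay identifiable after reordering,
// so IndexWriter can still apply OneMerge#reorder safely.
config.setParentField("_parent");
```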
Merging `IOContext`s use a `SEQUENTIAL` `ReadAdvice`. However, some file
formats hardcode `IOContext.LOAD` for some of their files, which discards the
whole merging context, in particular the `SEQUENTIAL` `ReadAdvice`.
This PR switches file formats to
`ioContext.withReadAdvice(ReadAdvice.RANDOM_PRELOAD)` so that merges will use a
`SEQUENTIAL` `ReadAdvice` while searches will use a `RANDOM_PRELOAD`
`ReadAdvice`.
This is not a huge deal for `RANDOM_PRELOAD`, which is only used for very small
files. However, this change becomes more relevant for the new `RANDOM`
`ReadAdvice` as we would like merges to keep using a `SEQUENTIAL` `ReadAdvice`.
Having a single block of all zeros is a fairly common case that uses a lot of
heap for duplicate instances in some use-cases in ES.
=> return a singleton for it to avoid the duplication
Romanian uses s&t with commas (ș/ț), but for a long time those weren't available, so s&t with cedilla (ş/ţ) were used. Both are still in use, but the comma forms are much more common now. Both should be supported in stopword lists.