This updates file formats to compute prefix sums by summing up 8 deltas per
long at the same time if the number of bits per value is 4 or less, and 4
deltas per long at the same time if the number of bits per value is between 5
and 11 inclusive. Otherwise, we keep summing up 2 deltas per long, as we do
today.
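As a rough illustration of the shift-and-add trick involved (a generic sketch, not Lucene's actual code), a prefix sum over eight 8-bit lanes packed into a single long takes three additions, provided each running sum still fits in its lane:

```java
// Generic SWAR prefix sum: after the three shift-and-add steps, each
// 8-bit lane holds the sum of itself and all lower lanes. Only valid
// while no running sum overflows 8 bits.
static long prefixSum8(long deltas) {
  long l = deltas;
  l += l << 8;  // lane i += lane i-1
  l += l << 16; // lane i += lanes i-2, i-3
  l += l << 32; // lane i += lanes i-4 .. i-7
  return l;
}
```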
The `PostingDecodingUtil` was slightly modified because more numbers of bits
per value now need different shifts applied to the input data. E.g. now that
we store integers that require 5 bits per value as 16-bit integers under the
hood rather than 8-bit integers, we extract the first values by shifting by
16-5=11, 16-2*5=6 and 16-3*5=1, and then decode tail values from the remaining
bit per 16-bit integer.
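As a hedged sketch of that extraction step (method name and shape are mine, not Lucene's), pulling the three leading 5-bit values out of each 16-bit integer looks roughly like:

```java
// Illustrative only: extract three 5-bit values per 16-bit lane using the
// shifts described above. The remaining low bit of each lane is left for
// the tail-decoding step.
static void extractFirstValues(short[] lanes, int[] out) {
  int o = 0;
  for (short lane : lanes) {
    int bits = lane & 0xFFFF;
    out[o++] = (bits >>> 11) & 0x1F; // 16 - 5
    out[o++] = (bits >>> 6) & 0x1F;  // 16 - 2*5
    out[o++] = (bits >>> 1) & 0x1F;  // 16 - 3*5
  }
}
```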
* Only run the IDE configuration block for Eclipse when explicitly invoked; fix property access ordering here and there.
* Correct `dependsOn` task name.
* Correct crlf/encoding after `versionCatalogFormatDeps` finishes.
* Change `java-library` to `java-base` in the plugin applied within the eclipse task.
* Use `ant.fixcrlf` to correct line endings.
* Add a CHANGES entry.
* Simplify the fixcrlf call.
---------
Co-authored-by: Uwe Schindler <uschindler@apache.org>
Note that this results in some silly code duplication, so this is meant as a temporary fix while we better understand the root cause of the regression.
Our postings use a layout that helps take advantage of Java's
auto-vectorization to be reasonably fast to decode. But we can make it a bit
faster by vectorizing directly from the MemorySegment instead of first
copying data into a long[].
This approach only works when the `Directory` uses `MemorySegmentIndexInput`
under the hood, i.e. `MMapDirectory` on JDK 21+.
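For flavor, a minimal sketch of what reading straight from the segment looks like with the `java.lang.foreign` API (the method is illustrative, not Lucene's actual decoding code):

```java
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

// Illustrative: consume packed longs directly from the mapped segment in a
// loop the JIT can auto-vectorize, skipping the copy into a long[].
static long sumLongs(MemorySegment segment, long offset, int count) {
  long sum = 0;
  for (int i = 0; i < count; i++) {
    sum += segment.get(ValueLayout.JAVA_LONG_UNALIGNED, offset + (long) i * Long.BYTES);
  }
  return sum;
}
```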
Co-authored-by: Uwe Schindler <uschindler@apache.org>
This adds a new, ground-up implementation of faceting that computes aggregations while collecting. This has the following advantages over the current faceting module:
1. Allows for flexible aggregation logic in a general way, instead of just "counts" (essentially making what's available in today's "association faceting" available beyond taxonomy-based fields).
2. When aggregating beyond "counts," association value computation can be expensive. This implementation allows values to be computed only once when used across different aggregations.
3. Reduces latency by leveraging concurrency during collection (but potentially with increased overall cost).
This work has been done in the sandbox module for now since it is not yet complete (the current faceting module covers use-cases this doesn't yet) and it needs time to bake to work out API and implementation rough edges.
---------
Co-authored-by: Egor Potemkin <epotyom@amazon.com>
Co-authored-by: Shradha Shankar <shrdsha@amazon.com>
Co-authored-by: Greg Miller <gsmiller@gmail.com>
There is a tricky race condition with DWPT threads. It is possible that a flush starts by advancing the deleteQueue (which is in charge of creating seqNos). Once the queue has been advanced, there is a cap (`maxSeqNo`) on the number of actions still allowed against the referenced deleteQueue.
However, it is possible that after the advance, but before the DWPTs are actually marked for flush, a DWPT gets freed and taken again to be used.
To replicate this extreme behavior, see: https://github.com/apache/lucene/compare/main...benwtrent:lucene:test-replicate-and-debug-13127?expand=1
This commit will prevent a DWPT from being added back to the free list if its queue has been advanced. This is because the `maxSeqNo` for that queue was created accounting only for the current number of active threads. If the DWPT gets handed out again and still references the already advanced queue, it is possible that the seqNo actually advances past the set `maxSeqNo`.
closes: https://github.com/apache/lucene/issues/13127
closes: https://github.com/apache/lucene/issues/13571
A number of functions in CandidateMatcher are protected or package-private,
meaning that client code can't use them, which makes it difficult to build custom
wrapper matchers. This commit makes these functions public.
When quantizing vectors in a COSINE vector space, we normalize them. However, there was a bug when building the quantizer quantiles: we didn't always use the normalized vectors. Consequently, we would end up with poorly configured quantiles, and recall would drop significantly (especially in sensitive cases like int4).
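For context, the fix boils down to making sure a normalized copy of each vector feeds the quantile computation; a minimal sketch of such a normalization (an illustrative helper, not the actual patch):

```java
// Normalize a copy of the vector to unit length, as cosine spaces require
// before quantiles are derived. Illustrative helper, not Lucene's API.
static float[] normalize(float[] v) {
  double norm = 0;
  for (float x : v) {
    norm += (double) x * x;
  }
  float inv = (float) (1.0 / Math.sqrt(norm));
  float[] out = new float[v.length];
  for (int i = 0; i < v.length; i++) {
    out[i] = v[i] * inv;
  }
  return out;
}
```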
closes: #13614
Implement the KMeans clustering algorithm for vectors.
KNN algorithms that further reduce the memory usage of vectors (such as Product Quantization,
RaBitQ, etc.) require clustering of vectors. This implements the KMeans clustering algorithm.
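As a rough sketch of the algorithm itself (plain Lloyd's iterations; illustrative, not the actual Lucene implementation):

```java
import java.util.Random;

// Minimal Lloyd's k-means over float vectors: alternate between assigning
// each vector to its nearest centroid and recomputing centroids as means.
static float[][] kmeans(float[][] vectors, int k, int iters, long seed) {
  int dim = vectors[0].length;
  Random random = new Random(seed);
  float[][] centroids = new float[k][];
  for (int i = 0; i < k; i++) {
    centroids[i] = vectors[random.nextInt(vectors.length)].clone();
  }
  int[] assignments = new int[vectors.length];
  for (int iter = 0; iter < iters; iter++) {
    // assignment step: nearest centroid by squared Euclidean distance
    for (int v = 0; v < vectors.length; v++) {
      int best = 0;
      float bestDist = Float.MAX_VALUE;
      for (int c = 0; c < k; c++) {
        float dist = 0;
        for (int d = 0; d < dim; d++) {
          float diff = vectors[v][d] - centroids[c][d];
          dist += diff * diff;
        }
        if (dist < bestDist) {
          bestDist = dist;
          best = c;
        }
      }
      assignments[v] = best;
    }
    // update step: recompute each centroid as the mean of its vectors
    float[][] sums = new float[k][dim];
    int[] counts = new int[k];
    for (int v = 0; v < vectors.length; v++) {
      counts[assignments[v]]++;
      for (int d = 0; d < dim; d++) {
        sums[assignments[v]][d] += vectors[v][d];
      }
    }
    for (int c = 0; c < k; c++) {
      if (counts[c] > 0) {
        for (int d = 0; d < dim; d++) {
          centroids[c][d] = sums[c][d] / counts[c];
        }
      }
    }
  }
  return centroids;
}
```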
Co-authored-by: Jim Ferenczi <jim.ferenczi@elastic.co>
Opening for more pointed discussion. See latest discussion here: #13281
I was hoping to have a full answer for folks who use byte models by Lucene 10, but I just don't have that.
I still want to remove the internal cosine optimized methods. We can do this if we store magnitudes alongside the raw vectors. This way we can remove all the internal optimized cosine code, as it's complicated.
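The idea, sketched under the assumption that per-vector magnitudes are stored at index time (names are hypothetical):

```java
// With magnitudes stored alongside the raw vectors, cosine similarity
// reduces to a dot product plus one division, so no dedicated optimized
// cosine kernels are needed.
static float cosine(float[] a, float[] b, float magnitudeA, float magnitudeB) {
  float dot = 0;
  for (int i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
  }
  return dot / (magnitudeA * magnitudeB);
}
```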
I noticed that single-term readers are an edge case, but not that
uncommon in Elasticsearch heap dumps. It seems quite common to have a
constant value for some field across a complete segment (e.g. a version
value that is repeated endlessly in logs).
It seems simple enough to deduplicate here to save a couple of MB of heap.
Looking at how these instances are serialized to disk, it appears
that the empty output in the FST metadata is always the same as the
rootCode bytes.
Without changing the serialization, we could at least deduplicate here,
saving hundreds of MB in some high-segment-count use cases I observed in
ES.
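A minimal sketch of the deduplication idea (names are illustrative, not the actual Lucene code):

```java
import java.util.Arrays;

// If the FST metadata's empty output is byte-for-byte identical to the
// rootCode, keep one array on heap instead of two identical copies.
static byte[] dedupEmptyOutput(byte[] rootCode, byte[] emptyOutput) {
  return Arrays.equals(rootCode, emptyOutput) ? rootCode : emptyOutput;
}
```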
Make it so rejected tasks are executed right away on the caller thread.
Users of the API shouldn't have to worry about rejections when we don't
expose any upper limit on the number of tasks that we put on the executor,
which would otherwise help in sizing a queue for the executor.
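For reference, the standard JDK way to get this caller-runs behavior when building an executor (an illustrative configuration, not Lucene's internal handling of rejections):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Rejected tasks run on the submitting thread instead of throwing
// RejectedExecutionException; pool and queue sizes here are arbitrary.
ThreadPoolExecutor executor =
    new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(64),
        new ThreadPoolExecutor.CallerRunsPolicy());
```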
This updates the postings format in order to inline skip data into postings. This format is generally similar to the current `Lucene99PostingsFormat`, e.g. it shares the same block encoding logic, but it has a few differences:
- Skip data is inlined into postings to make the access pattern more sequential.
- There are only 2 levels of skip data: on every block (128 docs) and every 32 blocks (4,096 docs).
In general, I found that the fact that skip data is inlined may slightly slow down queries that don't need skip data at all (e.g. `CountOrXXX` tasks that never advance or consult impacts) and slightly speed up queries that advance by small intervals. The fact that the highest level only allows skipping 4,096 docs at once means that we're slower at advancing by large intervals, but data suggests that it doesn't significantly hurt performance.
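A toy model of the two-level structure (purely illustrative; the real format inlines the skip data into the postings):

```java
// Coarse level covers 4,096 docs (32 blocks), fine level covers 128 docs
// (one block). Advancing first consumes coarse entries, then fine ones.
static int skipToBlockStart(int target) {
  int doc = 0;
  while (doc + 4096 <= target) {
    doc += 4096; // level 1: skip 32 blocks at once
  }
  while (doc + 128 <= target) {
    doc += 128; // level 0: skip one block at a time
  }
  return doc; // start of the block that contains the target
}
```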
This adds a Directory wrapper that counts how many times we wait for I/O to
complete before doing something else, and adds tests that the default codec is
able to parallelize I/O for stored fields retrieval and term lookups.
There are a couple of places in the codebase where we extend `IndexSearcher` to customize
per-leaf behaviour, and in order to do that, we need to override the entire search method
that loops through the leaves. A good example is `ScorerIndexSearcher`.
Adding a `searchLeaf` method that provides the per-leaf behaviour makes those cases a little
easier to deal with, as sketched below.
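A hedged sketch of what that enables (the exact signature may differ from the real `searchLeaf`):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Weight;

// Subclasses override only the per-leaf hook rather than the whole
// search-over-leaves loop.
public class MyIndexSearcher extends IndexSearcher {
  public MyIndexSearcher(IndexReader reader) {
    super(reader);
  }

  @Override
  protected void searchLeaf(LeafReaderContext ctx, Weight weight, Collector collector)
      throws IOException {
    // custom per-leaf behaviour goes here, e.g. tweaking the scorer
    super.searchLeaf(ctx, weight, collector);
  }
}
```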