It's in the title: extract the shared parts across both classes.
Almost exclusively mechanical changes, with the exception of the introduction
of an array-summing util.
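A purely illustrative shape for such a helper (the actual name and signature in the change may differ):

```java
// Hypothetical array-summing util; widens to long to avoid int overflow.
static long sum(int[] values, int length) {
  long sum = 0;
  for (int i = 0; i < length; i++) {
    sum += values[i];
  }
  return sum;
}
```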
This is a continuation of #13588, where we avoided allocating liveDocs
for segments that have the __soft_deletes field but no values in it.
However, that PR only addressed the reading side. This change fixes the
writing scenario with IndexWriter.
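A minimal sketch of the idea, with invented names (the real logic lives in IndexWriter's flush path):

```java
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.FixedBitSet;

// Hypothetical helper: skip materializing liveDocs entirely when the
// __soft_deletes field exists but holds no values in this segment.
static Bits liveDocsOrNull(int softDeleteCount, FixedBitSet liveDocs) {
  return softDeleteCount == 0 ? null : liveDocs; // null means "all docs live"
}
```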
Relates #13588
These two share a lot of code; in particular, the impacts implementation is 100% identical.
We can save a lot of code and potentially some cycles for method invocations by
drying things up. The changes are just mechanical field movements with the following exceptions:
1. One of the two implementations used a `BytesRefBuilder` and the other a `BytesRef` for holding the
serialized impacts. The `BytesRef` variant is faster, so I used it for both when extracting (see the sketch after this list).
2. Some simple arithmetic simplifications around the levels that should be obvious.
3. Removed the logic for an index without positions in `BlockImpactsPostingsEnum`; that was dead code,
since we only set this enum up when there are positions.
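For illustration, the difference between the two scratch approaches (capacity and names invented):

```java
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.BytesRefBuilder;

class ImpactsScratch {
  static final int MAX_SERIALIZED_IMPACTS = 256; // invented bound, for illustration

  // Variant kept when extracting: fixed capacity, no grow-checks on each write.
  final BytesRef serializedImpacts = new BytesRef(new byte[MAX_SERIALIZED_IMPACTS]);

  // Variant dropped: grows on demand, paying an extra check per append.
  final BytesRefBuilder serializedImpactsBuilder = new BytesRefBuilder();
}
```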
Lazily initialize these fields. They cause a lot of memory pressure and GC because they are
allocated frequently (~7% of all allocations in luceneutil's wikimedia medium run for me).
This does not cause any measurable slowdown at runtime, and since these fields are not
even needed for all instances (in fact they are rarely used by the queries the benchmark explores),
lazy initialization saves quite a few CPU cycles otherwise spent allocating and collecting them.
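The pattern, as a minimal sketch (field name and size are placeholders):

```java
class PostingsState {
  // Before: allocated eagerly in the constructor for every instance.
  private int[] buffer; // now left null until actually needed

  private int[] buffer() {
    if (buffer == null) {
      buffer = new int[128]; // pay the allocation only on first use
    }
    return buffer;
  }
}
```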
* Make generated archive files reproducible
This should ensure deterministic archive files and fix issues with checksums changing even
though the codebase has not changed.
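The underlying idea, sketched in plain Java (the actual change configures the build tooling rather than hand-rolling archives): fix the entry order and pin timestamps so the archive bytes, and hence checksums, are deterministic.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

static void writeReproducibleZip(Path out, Path root, List<Path> files) throws IOException {
  try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(out))) {
    // Sort entries so the archive never depends on file-system iteration order.
    for (Path f : files.stream().sorted().toList()) {
      ZipEntry entry = new ZipEntry(root.relativize(f).toString());
      entry.setTime(0L); // pinned timestamp -> identical bytes on every rebuild
      zos.putNextEntry(entry);
      Files.copy(f, zos);
      zos.closeEntry();
    }
  }
}
```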
The two seeds at #13818 had different root causes:
- The test allows the number of segments to exceed the limit, but only if none
of the merges are legal. However, there are multiple reasons why a merge may be
illegal: it exceeds the max doc count, or it is too imbalanced. These two
conditions were checked independently, so the test could conclude that legal
merges exist from the doc-count perspective and from the balance perspective,
even though every merge that is legal by one criterion is illegal by the other.
The test now checks that there are merges that satisfy both criteria at once,
as sketched below.
- `TieredMergePolicy` allows at least `targetSearchConcurrency` segments in an
index. There was a bug in `TieredMergePolicy` where this condition was
applied after "too big" segments had been removed, so it effectively allowed
more segments than necessary in the index.
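An illustrative form of the fixed check from the first point; `Candidate` and its accessors are invented stand-ins for the test's actual bookkeeping:

```java
import java.util.List;

record Candidate(long totalDocCount, boolean balanced) {}

static boolean anyLegalMerge(List<Candidate> candidates, long maxDocCount) {
  for (Candidate c : candidates) {
    // A merge must satisfy BOTH constraints at once; checking each constraint
    // independently over the whole candidate set is not sufficient.
    if (c.totalDocCount() <= maxDocCount && c.balanced()) {
      return true;
    }
  }
  return false;
}
```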
Closes #13818
`SerialIODirectory` doesn't count reads to files that are open with
`ReadAdvice#RANDOM_PRELOAD` as these files are expected to be loaded in memory.
Unfortunately, we cannot detect such files on compound segments, so this test
now disables compound segments.
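The test-side workaround presumably boils down to something like this (a sketch; the actual test wiring may differ):

```java
import org.apache.lucene.index.IndexWriterConfig;

static IndexWriterConfig noCompoundConfig() {
  IndexWriterConfig config = new IndexWriterConfig();
  config.setUseCompoundFile(false); // ReadAdvice is only observable per file outside compound segments
  config.getMergePolicy().setNoCFSRatio(0.0); // keep merged segments out of compound files too
  return config;
}
```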
Closes #13854
There's no need to allocate a byte array when serializing to heap
buffers and the string fits the remaining capacity, since in that case no further bounds checks are needed.
If it doesn't fit we could technically do better than the current
`writeLongString` and avoid one round of copying by chunking the string
but that might not be worth the complexity.
In either case we can calculate the UTF-8 length up front.
While this costs extra cycles (in the small case) for iterating the string twice, it saves
creating an oftentimes 3x-oversized byte array, a `BytesRef`, field
reads from the `BytesRef`, copying from it to the buffer, and the associated GC work of cleaning it all up.
Theory and some quick benchmarking suggest this version is likely faster than the existing code
for any string length.
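A sketch of the approach (method names are placeholders for the actual serializer; the chunked fallback is elided):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;

// One pass over the string computes the exact UTF-8 length without encoding.
static int utf8Length(String s) {
  int bytes = 0;
  for (int i = 0; i < s.length(); ) {
    int cp = s.codePointAt(i);
    i += Character.charCount(cp);
    bytes += cp < 0x80 ? 1 : cp < 0x800 ? 2 : cp < 0x10000 ? 3 : 4;
  }
  return bytes;
}

static void writeString(ByteBuffer heapBuffer, String s) {
  int len = utf8Length(s); // known up front -> one bounds check, no scratch array
  if (len > heapBuffer.remaining()) {
    throw new IllegalStateException("fall back to the chunked slow path (not shown)");
  }
  // Encode straight into the destination; no intermediate byte[] or BytesRef.
  StandardCharsets.UTF_8.newEncoder().encode(CharBuffer.wrap(s), heapBuffer, true);
}
```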
Removing some obvious dead code, turning some fields that don't need to be fields into locals, making things static, and deduplicating a duplicated "scratch" field.
An object return inside hot code like this is needlessly wasteful.
Escape analysis doesn't catch this one, and we end up allocating many GB
of throwaway objects during benchmark runs. We might as well use two
utility methods and accumulate the raw values.
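An invented illustration of the pattern: instead of returning a small holder object from a hot method, expose two utility methods and accumulate primitives directly.

```java
record MinMax(long min, long max) {} // before: one of these allocated per call

// After: two methods returning primitives, so nothing escapes to the heap.
static long min(long[] values) {
  long min = Long.MAX_VALUE;
  for (long v : values) min = Math.min(min, v);
  return min;
}

static long max(long[] values) {
  long max = Long.MIN_VALUE;
  for (long v : values) max = Math.max(max, v);
  return max;
}
```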
This shows up as tens of GB allocated for iterators in the nightly
benchmarks. We should go the zero-allocation route for `RandomAccess`
lists, which I'd expect essentially all of the lists here to be, for a bit of a speedup.
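The zero-allocation route is the classic indexed loop, sketched here with placeholder types:

```java
import java.util.List;
import java.util.RandomAccess;
import java.util.function.LongConsumer;

static void forEachValue(List<Long> values, LongConsumer consumer) {
  if (values instanceof RandomAccess) {
    // Indexed access: no Iterator object is ever created.
    for (int i = 0, n = values.size(); i < n; i++) {
      consumer.accept(values.get(i));
    }
  } else {
    for (long v : values) { // the enhanced for-loop allocates an Iterator
      consumer.accept(v);
    }
  }
}
```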
This commit removes the flattening of ordered and unordered interval sources, as it alters the gap visibility for parent intervals. For example, ordered("a", ordered("b", "c")) should result in a different gap compared to ordered("a", "b", "c").
Phrase/Block operators will continue to flatten their sub-sources since this does not affect the inner gap (which is always 0 in the case of blocks).
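For reference, the two shapes from the example expressed with the `Intervals` factory methods (the comments restate the semantics described above):

```java
import org.apache.lucene.queries.intervals.Intervals;
import org.apache.lucene.queries.intervals.IntervalsSource;

// No longer flattened: the inner ordered("b", "c") is a single interval to the
// parent, so its internal gap stays hidden from the parent's gap accounting.
static IntervalsSource nested() {
  return Intervals.ordered(
      Intervals.term("a"), Intervals.ordered(Intervals.term("b"), Intervals.term("c")));
}

// Flat form: every gap between "a", "b" and "c" counts at the top level.
static IntervalsSource flat() {
  return Intervals.ordered(Intervals.term("a"), Intervals.term("b"), Intervals.term("c"));
}
```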
The command to remove uploaded artifacts from svn is missing a dash; hence it
fails, as it does not match the names of the artifacts uploaded in the previous steps.
There is currently no way to configure two parameters for the multi-leaf collector. For expert extensibility, this commit adds another ctor for advanced usage.
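The original snippet is not reproduced here; purely as a hypothetical illustration of the expert-ctor pattern (class name and both parameters are invented):

```java
// Hypothetical only -- not the actual Lucene class or parameter names.
class MultiLeafCollectorExample {
  private final int bufferSize;
  private final boolean eagerFlush;

  // Existing convenience ctor keeps the previous hard-wired defaults.
  MultiLeafCollectorExample() {
    this(64, false);
  }

  // New expert ctor exposing the two previously fixed parameters.
  MultiLeafCollectorExample(int bufferSize, boolean eagerFlush) {
    this.bufferSize = bufferSize;
    this.eagerFlush = eagerFlush;
  }
}
```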
closes: #13699
This is a test-only change that verifies the behaviour when float vector values are passed to our FlatVectorsScorer implementations. It would have caught the bug causing #13844, subsequently fixed by #13850.
This fixes a bug introduced in the major refactor #13779.
Off-heap scoring is only present for byte[] vectors, so it isn't enough to verify that the vector provider satisfies the HasIndexSlice interface: the vectors need to be byte vectors, otherwise the slice iteration and scoring are completely nonsensical, leading HNSW graph building to run until the heat death of the universe.
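A sketch of the tightened guard (hedged: the package and `getSlice()` accessor of `HasIndexSlice` are assumptions here; the real check sits in the scorer setup):

```java
import org.apache.lucene.codecs.lucene95.HasIndexSlice; // assumed location
import org.apache.lucene.index.ByteVectorValues;

static boolean canScoreOffHeap(Object vectorValues) {
  return vectorValues instanceof ByteVectorValues // float vectors must stay on the on-heap path
      && vectorValues instanceof HasIndexSlice slice
      && slice.getSlice() != null; // assumed accessor
}
```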