This iterates on #13546 to further reduce the overhead of search concurrency by
caching whether the hit count threshold has been reached: once the threshold
has been reached, it cannot get "un-reached" again, so we don't need to pay the
cost of `LongAdder#longValue`.
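Roughly, the caching amounts to something like the following (a minimal sketch with illustrative names, not the actual Lucene code):

```java
import java.util.concurrent.atomic.LongAdder;

class HitCountCheck {
  private final LongAdder globalHitCount;
  private final long threshold;
  private boolean thresholdReached; // cached once true, never reset

  HitCountCheck(LongAdder globalHitCount, long threshold) {
    this.globalHitCount = globalHitCount;
    this.threshold = threshold;
  }

  boolean hasReachedThreshold() {
    // only pay for LongAdder#longValue while the threshold has not been crossed yet
    if (thresholdReached == false) {
      thresholdReached = globalHitCount.longValue() >= threshold;
    }
    return thresholdReached;
  }
}
```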
* lazily write the FST padding byte
* Also write the pad byte when there is emptyOutput
* add comment
* Make Lucene90BlockTreeTermsWriter write the FST off-heap
* Add change log
* Tidy code & Add comments
* use temp IndexOutput for FST writing
* Use IOUtils to delete files
* Update CHANGES.txt
I analyzed a heap dump of Elasticsearch where FixedBitSet uses more than
1GB of memory. Most of these FixedBitSets are used by soft-deletes
reader wrappers, even though these segments have no deletes at all. I
believe these segments previously had soft-deletes, but these deletes
were pruned by merges. The reason we wrap soft-deletes is that the
soft-deletes field exists. Since these segments had soft-deletes
previously, we carried the field-infos into the new segment. Ideally, we
should have a way to check whether the returned doc-values iterator is
empty so that we can avoid allocating the FixedBitSet entirely, or
we should prune fields without values after merges.
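For illustration, a hedged sketch of the "check before allocating" idea; this is not the existing soft-deletes wrapping code, and the helper below is hypothetical:

```java
import java.io.IOException;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

static FixedBitSet liveDocsFromSoftDeletes(LeafReader reader, String softDeletesField)
    throws IOException {
  NumericDocValues softDeletes = reader.getNumericDocValues(softDeletesField);
  if (softDeletes == null || softDeletes.nextDoc() == DocIdSetIterator.NO_MORE_DOCS) {
    return null; // no soft-deleted docs in this segment: no FixedBitSet needed
  }
  FixedBitSet liveDocs = new FixedBitSet(reader.maxDoc());
  liveDocs.set(0, reader.maxDoc());
  do {
    liveDocs.clear(softDeletes.docID()); // mark each soft-deleted doc as not live
  } while (softDeletes.nextDoc() != DocIdSetIterator.NO_MORE_DOCS);
  return liveDocs;
}
```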
Currently `MaxScoreBulkScorer` requires its "outer" window to be at least
`WINDOW_SIZE`. The intuition there was that we should make sure we use
the whole range of the bit set that we are using to collect matches. The
downside is that it may force us to use an upper level in the skip list that
has worse upper bounds for the scores.
This commit uses IOContext.READONCE in more places where the index input is clearly being read once by the thread opening it. We can then enforce that segment files are only opened with READONCE, in the test specific Mock directory wrapper.
Most of the changes in this PR update individual test usages, but there is one non-test change, to Directory::copyFrom.
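A sketch of the pattern this PR standardizes on (file name and helper are illustrative): when a file is read fully and immediately by the opening thread, open it with READONCE.

```java
import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;

static byte[] readWholeFile(Directory dir, String fileName) throws IOException {
  // READONCE: this input is consumed entirely on this thread and then closed
  try (IndexInput in = dir.openInput(fileName, IOContext.READONCE)) {
    byte[] bytes = new byte[(int) in.length()];
    in.readBytes(bytes, 0, bytes.length);
    return bytes;
  }
}
```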
We already have convenient functions for constructing an IntervalsSource
for wildcard and fuzzy queries. This adds functions for
regexp and range as well.
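As a sketch, the new factories would be used the same way as the existing ones; the exact signatures of `Intervals.regexp` and `Intervals.range` are assumed here to mirror the wildcard factory:

```java
import org.apache.lucene.queries.intervals.IntervalQuery;
import org.apache.lucene.queries.intervals.Intervals;
import org.apache.lucene.queries.intervals.IntervalsSource;
import org.apache.lucene.util.BytesRef;

static IntervalQuery buildQuery() {
  // existing convenience factory
  IntervalsSource wildcard = Intervals.wildcard(new BytesRef("lucen*"));
  // new factories; signatures assumed for illustration
  IntervalsSource regexp = Intervals.regexp(new BytesRef("lu.*ne"));
  IntervalsSource range = Intervals.range(new BytesRef("aardvark"), new BytesRef("apache"), true, true);
  return new IntervalQuery("body", Intervals.or(wildcard, regexp, range));
}
```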
This introduces `TermsEnum#prepareSeekExact`, which essentially calls
`IndexInput#prefetch` at the right offset for the given term. Then it takes
advantage of the fact that `BooleanQuery` already calls `Weight#scorerSupplier`
on all clauses, before later calling `ScorerSupplier#get` on all clauses. So
`TermQuery` now calls `TermsEnum#prepareSeekExact` in `Weight#scorerSupplier`
(if scores are not needed), which in turn means that the I/O for the terms-dictionary
lookups gets parallelized across all term queries of a
`BooleanQuery` on a given segment (intra-segment parallelism).
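As a rough illustration only (the signature of `prepareSeekExact` and this loop structure are inferred from the description above, not taken from the actual code path), the win comes from issuing all prefetches before any blocking seek:

```java
import java.io.IOException;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

static void seekAll(TermsEnum[] termsEnums, BytesRef[] terms) throws IOException {
  // first pass: prepareSeekExact triggers IndexInput#prefetch for every term
  for (int i = 0; i < termsEnums.length; i++) {
    termsEnums[i].prepareSeekExact(terms[i]);
  }
  // second pass: the blocking seeks now find their I/O already in flight
  for (int i = 0; i < termsEnums.length; i++) {
    termsEnums[i].seekExact(terms[i]);
  }
}
```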
Use a confined Arena for IOContext.READONCE.
This change will require inputs opened with READONCE to be consumed and closed on the creating thread. Further testing and assertions can be added as a follow up.
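For context, a plain java.lang.foreign illustration (not Lucene code) of the confinement rule this relies on: a confined arena and the segments backed by it may only be accessed and closed by the thread that created it, which matches the READONCE contract described above.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

static void confinedExample() {
  try (Arena arena = Arena.ofConfined()) {
    MemorySegment segment = arena.allocate(1024);
    segment.set(ValueLayout.JAVA_BYTE, 0, (byte) 42);
    // accessing or closing the arena from another thread would throw WrongThreadException
  }
}
```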
This commit makes it possible to dynamically configure the interval size of the doc-values skipper for testing, and adds a new
test suite that changes the interval size randomly.
This commit improves the performance of VectorUtil::xorBitCount on ARM by ~4x.
This change is effectively a workaround for the lack of vectorization of Long::bitCount on ARM.
On x64 there is no issue; the long variant of xorBitCount outperforms the int variant by ~15%.
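For illustration, a hedged sketch of the int-based variant (helper name and loop structure are illustrative, not the actual VectorUtil code); Integer.bitCount over int lanes is what the JIT can vectorize on ARM, unlike Long.bitCount:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

static int xorBitCountInt(byte[] a, byte[] b) {
  ByteBuffer bufA = ByteBuffer.wrap(a).order(ByteOrder.nativeOrder());
  ByteBuffer bufB = ByteBuffer.wrap(b).order(ByteOrder.nativeOrder());
  int distance = 0;
  int i = 0;
  // main loop over 4-byte lanes
  for (int upper = a.length & ~(Integer.BYTES - 1); i < upper; i += Integer.BYTES) {
    distance += Integer.bitCount(bufA.getInt(i) ^ bufB.getInt(i));
  }
  // tail: remaining bytes one at a time
  for (; i < a.length; i++) {
    distance += Integer.bitCount((a[i] ^ b[i]) & 0xFF);
  }
  return distance;
}
```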
Single-byte writes to BufferedOutputStream show up pretty hot in
indexing benchmarks. We can save the locking overhead introduced by
JEP 374 by overriding the single-byte write with a no-lock fast path.
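A minimal sketch of the idea, assuming a single writer thread (this is not the actual Lucene code; class and buffer size are illustrative):

```java
import java.io.IOException;
import java.io.OutputStream;

class UnsynchronizedBufferedOutputStream extends OutputStream {
  private final byte[] buf = new byte[8192];
  private int count;
  private final OutputStream out;

  UnsynchronizedBufferedOutputStream(OutputStream out) {
    this.out = out;
  }

  @Override
  public void write(int b) throws IOException {
    if (count == buf.length) {
      flushBuffer();
    }
    buf[count++] = (byte) b; // no monitor acquisition on the hot single-byte path
  }

  private void flushBuffer() throws IOException {
    out.write(buf, 0, count);
    count = 0;
  }

  @Override
  public void flush() throws IOException {
    flushBuffer();
    out.flush();
  }

  @Override
  public void close() throws IOException {
    flush();
    out.close();
  }
}
```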
Don't use Comparator.comparingDouble(...) in a hot-ish loop here; it
causes allocations that escape analysis is not able to remove.
=> let's just manually inline this to get predictable behavior and save
up to 0.5% of all allocations in some benchmark runs.
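An illustrative before/after with hypothetical names (not the actual hot loop in question):

```java
import java.util.Comparator;
import java.util.List;

record Candidate(double score) {} // hypothetical element type

// Before: a new Comparator (plus the stream machinery) is allocated on every call
static Candidate minWithComparator(List<Candidate> candidates) {
  return candidates.stream().min(Comparator.comparingDouble(Candidate::score)).orElseThrow();
}

// After: manually inlined comparison, nothing left for escape analysis to clean up
static Candidate minInlined(List<Candidate> candidates) {
  Candidate best = candidates.get(0);
  for (int i = 1; i < candidates.size(); i++) {
    if (candidates.get(i).score() < best.score()) {
      best = candidates.get(i);
    }
  }
  return best;
}
```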
The value for the global count is incremented a lot more often than it is
read, and the space overhead of LongAdder seems irrelevant => let's use
LongAdder. The performance gain from using it grows with the number of
threads, but is already very visible in benchmarks at 4 threads.
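Why LongAdder fits an increment-heavy, rarely-read counter: increments are spread over per-thread cells, so the hot path avoids a single contended CAS, and the full sum is only computed on the rare reads. A tiny usage sketch:

```java
import java.util.concurrent.atomic.LongAdder;

static long countExample() {
  LongAdder hitCount = new LongAdder();
  hitCount.increment();   // hot path: cheap even under contention
  return hitCount.sum();  // cold path: aggregate the cells only when the value is read
}
```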
When an executor is provided to the IndexSearcher constructor, the searcher now executes tasks on the thread that invoked a search as well as its configured executor. Users should reduce the executor's thread-count by
1 to retain the previous level of parallelism. Moreover, it is now possible to start searches from the same executor that is configured in the IndexSearcher without risk of deadlocking. A separate executor for starting searches is no longer required.
Previously, a separate executor was required to prevent deadlock, and all the tasks were offloaded to it unconditionally, wasting resources in some scenarios due to unnecessary forking, and the caller thread having to wait for all tasks to be completed anyway. The caller thread can now actively contribute to the execution as well.
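A small sketch of the sizing consequence (reader, query, and pool size are illustrative): the caller thread now also runs tasks, so an executor with N threads yields N+1 workers per search.

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

static TopDocs searchConcurrently(IndexReader reader, Query query) throws IOException {
  ExecutorService executor = Executors.newFixedThreadPool(3); // 3 pool threads + caller = 4
  try {
    IndexSearcher searcher = new IndexSearcher(reader, executor);
    return searcher.search(query, 10); // slices may now execute on the calling thread too
  } finally {
    executor.shutdown();
  }
}
```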
The backport of #13524 found a hole in the testing of `Lucene40BlockTreeTerms`
for versions before we moved metadata to its own file. This PR adds explicit
backwards-compatibility testing for this version. Adding the correct if/else statements made the code
extremely complicated so I opted for restoring the file as it was at the time
when we bumped the version.
This also fixes the bug that we introduced in #13524.
We don't need to clone the index input we hold on to in OffHeapFSTStore
since we only use it for slicing from known coordinates anyway.
-> remove the cloning and add the infrastructure to initialize
OffHeapFSTStore without seeking the input to the starting offset.
It is relatively easy to consume a massive amount of memory
for the minimize operation, with its lists of boxed Integer (even though these are mostly cached,
it's still more than 4 bytes per instance compared to plain primitive storage) and never-ending
duplicate, empty StateList instances.
The boxed integer situation we can fix, and probably speed up, by using the hppc primitive collections.
To fix the duplicate/empty StateList instances, we can use a constant. This requires some hacky forking
on the write path but that's about it.
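The boxed-vs-primitive difference driving the change, sketched with the upstream HPPC package name (Lucene may use a shaded/internal copy):

```java
import com.carrotsearch.hppc.IntArrayList;
import java.util.ArrayList;
import java.util.List;

static void storageExample() {
  List<Integer> boxed = new ArrayList<>();
  boxed.add(42);               // one Integer object per element, plus the reference to it

  IntArrayList primitive = new IntArrayList();
  primitive.add(42);           // stored directly in an int[] backing array
}
```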
This is partly motivated by ES users at times creating broken, very long prefix queries that can then eat up
GBs of heap. With this change, the examples I've been looking at become about 6x cheaper heap-wise, making it
less likely that this kind of mistake impacts stability.