* LUCENE-9142 Refactor SortedIntSet for equality
Split SortedIntSet into a class hierarchy to make comparisons to
FrozenIntSet more meaningful. Use Arrays.equals for more efficient
comparison. Add tests for IntSet to verify correctness.
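As an illustration of the change described above (not the actual Lucene code), content-based equality between a mutable set and its frozen counterpart might look roughly like this; `getArray()` and `size()` are hypothetical accessors, not the real API:

```java
import java.util.Arrays;

// Sketch only: a hypothetical base class showing how equality between a
// mutable SortedIntSet and a FrozenIntSet could compare by content using
// Arrays.equals. getArray()/size() are illustrative accessors.
abstract class IntSetSketch {
  abstract int[] getArray(); // sorted values, possibly over-allocated
  abstract int size();       // number of valid values in getArray()

  @Override
  public boolean equals(Object other) {
    if (this == other) return true;
    if (!(other instanceof IntSetSketch)) return false;
    IntSetSketch that = (IntSetSketch) other;
    return Arrays.equals(getArray(), 0, size(), that.getArray(), 0, that.size());
  }

  @Override
  public int hashCode() {
    int hash = 0;
    for (int i = 0; i < size(); i++) {
      hash = 31 * hash + getArray()[i];
    }
    return hash;
  }
}
```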
If you have repeating intervals in an ordered or unordered interval source, you currently
get somewhat confusing behaviour:
* `ORDERED(a, a, b)` will return an extra interval over just `a b` if it first matches `a a b`, meaning
that you can get incorrect results if used in a `CONTAINING` filter -
`CONTAINING(ORDERED(x, y), ORDERED(a, a, b))` will match on the document `a x a b y`
* `UNORDERED(a, a)` will match on documents that contain just a single `a`.
This commit adds a RepeatingIntervalsSource that correctly handles repeats within
ordered and unordered sources. It also changes the way that gaps are calculated within
ordered and unordered sources, by using a new width() method on IntervalIterator. The
default implementation just returns end() - start() + 1, but RepeatingIntervalsSource
instead returns the sum of the widths of its child iterators. This preserves maxgaps filtering
on ordered and unordered sources that contain repeats.
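A minimal sketch of the default described above, assuming an `IntervalIterator`-like abstraction with `start()`/`end()` accessors (the real Lucene signatures may differ):

```java
// Sketch: default width() on an IntervalIterator-like abstraction.
// A plain iterator reports the span it covers; a repeating source would
// override this to return the sum of its child iterators' widths instead,
// so MAXGAPS filtering still sees the gaps between repeats.
abstract class IntervalIteratorSketch {
  abstract int start(); // start position of the current interval
  abstract int end();   // end position of the current interval

  int width() {
    return end() - start() + 1;
  }
}
```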
In order to correctly handle matches in this scenario, IntervalsSource#matches now always
returns an explicit IntervalsMatchesIterator rather than a plain MatchesIterator, which adds
gaps() and width() methods so that submatches can be combined in the same way that
subiterators are. Extra checks have been added to checkIntervals() to ensure that the same
intervals are returned by both iterator and matches, and a fix to
DisjunctionIntervalIterator#matches() is also included - DisjunctionIntervalIterator minimizes
its intervals, while MatchesUtils.disjunction does not, so there was a discrepancy between
the two methods.
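For illustration only, the extra accessors could be modeled roughly as below; the actual `IntervalsMatchesIterator` interface in Lucene is the authoritative contract:

```java
// Sketch: a MatchesIterator specialization that also exposes gaps() and
// width(), so that submatches can be combined the same way subiterators are.
interface IntervalsMatchesIteratorSketch /* extends MatchesIterator in Lucene */ {
  int startPosition();
  int endPosition();

  // Number of gap positions inside the current match.
  int gaps();

  // Width of the current match, mirroring IntervalIterator#width().
  int width();
}
```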
IndexMergeTool previously had no options and always ran forceMerge(1) on
the resulting index. This can result in wasted work and confusing
performance (unbalancing the index).
Instead, the default is now to do nothing beyond the merges requested by
the merge policy.
This replaces the index of stored fields and term vectors with two
`DirectMonotonic` arrays. `DirectMonotonicWriter` requires knowing the number
of values to write up-front, so incoming doc IDs and file pointers are buffered
on disk using temporary files that never get fsynced, but have index headers
and footers to make sure any corruption in these files wouldn't propagate to the
index.
`DirectMonotonicReader` gets a specialized `binarySearch` implementation that
leverages the metadata in order to go to the IndexInput as rarely as possible.
In the common case, it only goes to a single sub `DirectReader`, which,
combined with the 1k-value block size, helps bound the number of page faults
to 2.
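The following is a generic sketch of a binary search over a monotonically increasing sequence, standing in for what `DirectMonotonicReader#binarySearch` provides; the real implementation additionally uses the per-block metadata to avoid touching the `IndexInput` wherever possible:

```java
// Sketch only: binary search over a monotonically increasing sequence
// exposed through a get(index) accessor. Returns the matching index, or
// (-1 - insertionPoint) when the key is absent, Arrays.binarySearch-style.
static long binarySearch(long fromIndex, long toIndex, long key,
                         java.util.function.LongUnaryOperator get) {
  long lo = fromIndex, hi = toIndex - 1;
  while (lo <= hi) {
    long mid = (lo + hi) >>> 1;
    long midVal = get.applyAsLong(mid);
    if (midVal < key) {
      lo = mid + 1;
    } else if (midVal > key) {
      hi = mid - 1;
    } else {
      return mid; // exact match
    }
  }
  return -1 - lo; // insertion point encoded as a negative value
}
```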
SOLR-14095 introduced an issue for rolling restarts (incompatible Java serialization). This change fixes the compatibility issue while keeping the functionality in SOLR-14095.
The entire precommit task will still fail with an unsupported Java version
(subsequent checks do not support the newer javadocs format).
But this allows the ECJ linter to run, which checks for things such as
unused imports.
This triggers various places in the Streaming Expressions code that use background threads
to confirm that the expected credentials (or lack thereof) are propagated along.
Test currently has comments + workarounds for 2 known client issues:
- SOLR-14226: SolrStream reports AuthN/AuthZ failures (401|403) as IOException w/o details
- SOLR-14222: CloudSolrClient converts (update) 403 error to 500 error
Fuzzy queries with an edit distance of 1 or 2 must visit all blocks whose prefix
length is 1 or 2. By not compressing those, we can trade very little space (a
couple MBs in the case of the wikibigall index) for better query efficiency.
Changes include:
- Removed LZ4 compression of suffix lengths which didn't save much space
anyway.
- For stats, LZ4 was only really used for run-length compression of terms whose
docFreq is 1. This has been replaced by explicit run-length compression.
- Since we only keep LZ4 compression of suffix bytes if the compression ratio is < 75%, we
now only try LZ4 if the average suffix length is greater than 6, in order
to reduce index-time overhead (see the sketch below).
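A hypothetical sketch of the index-time heuristic described in the last bullet; the thresholds come from the text above, while the method and parameter names are invented for illustration:

```java
// Hypothetical sketch of the decision described above: only attempt LZ4 on
// suffix bytes when it is likely to pay off. Names are not from the actual
// Lucene code; thresholds are the ones stated in the commit message.
static boolean shouldTryLZ4(long totalSuffixBytes, int numTerms) {
  double avgSuffixLength = (double) totalSuffixBytes / numTerms;
  return avgSuffixLength > 6; // skip LZ4 when suffixes are short on average
}

// The compressed form is only kept if it saves at least 25% of the space.
static boolean keepCompressed(int compressedLength, int originalLength) {
  return compressedLength < 0.75 * originalLength;
}
```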
The issue is that MockDirectoryWrapper's disk full check is horribly
inefficient. On every writeByte/etc, it totally recomputes disk space
across all files. This means it calls listAll() on the underlying
Directory (which sorts all the underlying files), then sums up fileLength()
for each of those files.
This leads to many pathological cases in the disk full tests... but the
number of tests impacted by this is minimal, and the logic is scary.
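To make the cost concrete, the recomputation amounts to roughly the following on every written byte (a simplified sketch, not the actual MockDirectoryWrapper code):

```java
import java.io.IOException;
import org.apache.lucene.store.Directory;

// Simplified sketch of the work MockDirectoryWrapper was redoing on every
// writeByte: list (and sort) every file in the wrapped Directory, then sum
// their lengths to recompute total disk usage from scratch.
static long recomputeDiskUsage(Directory dir) throws IOException {
  long used = 0;
  for (String file : dir.listAll()) { // listAll() sorts all file names each call
    used += dir.fileLength(file);     // one length lookup per file
  }
  return used;
}
```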