Java 13 adds a new doclint check under "accessibility" that verifies the html
header nesting level isn't crazy.
Many of our headers are incorrect because the html4-style javadocs had horrible
font sizes, so developers used the wrong header level to work around them.
This is not an issue in trunk (always html5).
Java recommends against using such structured tags at all in javadocs,
but that is a more involved change: this just "shifts" header levels
in documents to be correct.
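To illustrate (the class and headings here are made up, and the exact levels depend on the page), a shift looks like this:

```java
/**
 * Utility methods for widgets.
 *
 * <h2>Usage</h2>          <!-- was <h1>; shifted down so the nesting follows the page's own headings -->
 * <p>How to use the API.</p>
 *
 * <h3>Thread safety</h3>  <!-- was <h2>; shifted down by the same amount -->
 * <p>All methods are safe to call concurrently.</p>
 */
public final class WidgetUtils {}
```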
the "missing javadocs" checker needed tweaks to work with the format
changes of java 13.
As a followup we may investigate javadoc itself (maybe the new doclet API). It
now has its own missing-doc checks too, but they are all-or-nothing (either
fully documented or not checked), whereas this Python tool allows us to
improve incrementally, e.g. enforce that all classes have doc, even if all
methods do not yet.
Current javadocs declare an HTML5 doctype (<!DOCTYPE HTML>). Some HTML5
features are used, but unfortunately so are some constructs that do not
exist in HTML5.
Because of this, we have no checking of any html syntax: jtidy is
disabled because it works with html4, doclint is disabled because it
works with html5, and our docs are neither.
javadoc "doclint" feature can efficiently check that the html isn't
crazy. we just have to fix really ancient removed/deprecated stuff
(such as use of tt tag).
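For illustration (hypothetical class, not from the codebase), the typical fix is a drop-in replacement of the removed element:

```java
import java.util.HashMap;
import java.util.Map;

public final class TtTagExample {
  private final Map<String, String> values = new HashMap<>();

  // Previously this javadoc used "<tt>null</tt>"; the tt element does not
  // exist in HTML5, so doclint rejects it. <code> is the drop-in replacement.
  /** Returns the stored value, or <code>null</code> if the key is absent. */
  public String get(String key) {
    return values.get(key);
  }
}
```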
This change enables the html checking in both ant and gradle. The docs are
fixed via straightforward transformations.
One exception is table cellpadding: for this, some helper CSS classes
were added to make the transition easier (the padding must be applied
to the inner th/td cells, which is not possible inline). I added TODOs;
we should clean this up. Most of the problems look like they were
generated by a GUI or similar rather than written by a human.
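The cellpadding transition looks roughly like this (the helper class name below is hypothetical, not the actual one added):

```java
// Before, using an attribute that no longer exists in HTML5:
//   <table cellpadding="2"> ... </table>
//
// After, using a helper CSS class (hypothetical name) that applies the padding
// to the inner th/td cells, since it cannot be expressed inline on the table:
//   <table class="padding2"> ... </table>
public final class CellPaddingExample {}
```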
* LUCENE-9142 Refactor SortedIntSet for equality
Split SortedIntSet into a class hierarchy to make comparisons to
FrozenIntSet more meaningful. Use Arrays.equals for more efficient
comparison. Add tests for IntSet to verify correctness.
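The shape of the refactoring, as a rough sketch (names simplified, not the actual Lucene code):

```java
import java.util.Arrays;

// Sketch: a common base type exposes the backing sorted array so that the
// mutable and frozen implementations can be compared with Arrays.equals.
abstract class IntSetSketch {
  abstract int[] getArray(); // sorted, deduplicated values

  @Override
  public boolean equals(Object other) {
    if (this == other) return true;
    if (!(other instanceof IntSetSketch)) return false;
    // Arrays.equals compares lengths and elements in a single pass, which is
    // simpler and cheaper than a hand-rolled element-by-element loop.
    return Arrays.equals(getArray(), ((IntSetSketch) other).getArray());
  }

  @Override
  public int hashCode() {
    return Arrays.hashCode(getArray());
  }
}
```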
If you have repeating intervals in an ordered or unordered interval source, you currently
get somewhat confusing behaviour:
* `ORDERED(a, a, b)` will return an extra interval over just `a b` if it first matches `a a b`, meaning
that you can get incorrect results if used in a `CONTAINING` filter:
`CONTAINING(ORDERED(x, y), ORDERED(a, a, b))` will match on the document `a x a b y` (see the sketch after this list)
* `UNORDERED(a, a)` will match on documents that just contain a single `a`.
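Expressed with the Intervals query API (package location as in recent Lucene versions, field/terms illustrative), the problematic CONTAINING case is:

```java
import org.apache.lucene.queries.intervals.Intervals;
import org.apache.lucene.queries.intervals.IntervalsSource;

public final class RepeatsExample {
  // CONTAINING(ORDERED(x, y), ORDERED(a, a, b)) from the example above.
  // Before this change the inner ordered source could emit a spurious minimal
  // interval over just "a b", letting the document "a x a b y" match.
  public static IntervalsSource query() {
    IntervalsSource big = Intervals.ordered(Intervals.term("x"), Intervals.term("y"));
    IntervalsSource small =
        Intervals.ordered(Intervals.term("a"), Intervals.term("a"), Intervals.term("b"));
    return Intervals.containing(big, small);
  }
}
```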
This commit adds a RepeatingIntervalsSource that correctly handles repeats within
ordered and unordered sources. It also changes the way that gaps are calculated within
ordered and unordered sources, by using a new width() method on IntervalIterator. The
default implementation just returns end() - start() + 1, but RepeatingIntervalsSource
instead returns the sum of the widths of its child iterators. This preserves maxgaps filtering
on ordered and unordered sources that contain repeats.
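A minimal sketch of the default described above (the rest of the iterator contract is elided):

```java
// Sketch only: an interval's width defaults to the number of positions it
// spans; RepeatingIntervalsSource overrides this to return the sum of its
// child iterators' widths so that maxgaps filtering still behaves sensibly.
abstract class IntervalIteratorSketch {
  abstract int start(); // start position of the current interval
  abstract int end();   // end position of the current interval

  int width() {
    return end() - start() + 1;
  }
}
```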
In order to correctly handle matches in this scenario, IntervalsSource#matches now always
returns an explicit IntervalsMatchesIterator rather than a plain MatchesIterator, which adds
gaps() and width() methods so that submatches can be combined in the same way that
subiterators are. Extra checks have been added to checkIntervals() to ensure that the same
intervals are returned by both iterator and matches, and a fix to
DisjunctionIntervalIterator#matches() is also included - DisjunctionIntervalIterator minimizes
its intervals, while MatchesUtils.disjunction does not, so there was a discrepancy between
the two methods.
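Roughly, the returned iterator has this shape (a sketch, not the full interface):

```java
import org.apache.lucene.search.MatchesIterator;

// Sketch: matches() now returns an iterator that also exposes the two values
// needed to combine submatches the same way subiterators are combined.
interface IntervalsMatchesIteratorSketch extends MatchesIterator {
  int gaps();  // number of gaps inside the current match
  int width(); // width of the current match, mirroring IntervalIterator#width()
}
```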
IndexMergeTool previously had no options and always ran forceMerge(1)
on the resulting index. This can result in wasted work and confusing
performance (it unbalances the index).
Instead, the default is now to do nothing beyond the
merges required by the merge policy.
This replaces the index of stored fields and term vectors with two
`DirectMonotonic` arrays. `DirectMonotonicWriter` needs to know the number
of values to write up-front, so incoming doc IDs and file pointers are buffered
on disk using temporary files that never get fsynced, but have index headers
and footers to make sure any corruption in these files wouldn't propagate to the
index.
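For context, a rough sketch of how the buffered values end up in the `DirectMonotonic` writer once the count is known (method names from memory; treat the exact API shape as an assumption):

```java
import java.io.IOException;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.util.packed.DirectMonotonicWriter;

public final class StoredFieldsIndexSketch {
  // Sketch only: after all chunks have been buffered and the number of values
  // is known, the buffered doc IDs / file pointers are replayed into a
  // DirectMonotonicWriter, which needs the value count up-front.
  static void writeMonotonic(IndexOutput meta, IndexOutput data, long[] values)
      throws IOException {
    int blockShift = 10; // blocks of 1024 values, matching the "1k values" below
    DirectMonotonicWriter writer =
        DirectMonotonicWriter.getInstance(meta, data, values.length, blockShift);
    for (long v : values) {
      writer.add(v);
    }
    writer.finish();
  }
}
```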
`DirectMonotonicReader` gets a specialized `binarySearch` implementation that
leverages the metadata in order to avoid going to the IndexInput as much as
possible. In the common case it only needs to go to a single
sub `DirectReader` which, combined with blocks of 1k values, helps
bound the number of page faults to 2.
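A simplified, self-contained illustration of the binary search idea (not the actual DirectMonotonicReader code): small per-block metadata kept on heap narrows the search to one block before any data is read.

```java
import java.util.Arrays;

public final class BlockSearchSketch {

  private static final int BLOCK_SIZE = 1024; // mirrors the 1k-value blocks mentioned above

  // "blockMins" holds the first value of every block and stays on heap;
  // "blocks" stands in for data that would otherwise be read from the
  // IndexInput. All blocks hold BLOCK_SIZE values except possibly the last.
  static long indexOf(long[] blockMins, long[][] blocks, long target) {
    // 1. Binary search the in-memory metadata to pick the single candidate block.
    int block = Arrays.binarySearch(blockMins, target);
    if (block < 0) {
      block = -block - 2;       // insertion point - 1: last block whose min <= target
      if (block < 0) return -1; // target is smaller than every stored value
    }
    // 2. Binary search within that one block; only this step touches the "disk"
    //    data, which is what bounds the number of page faults.
    int idx = Arrays.binarySearch(blocks[block], target);
    if (idx < 0) return -1;
    return (long) block * BLOCK_SIZE + idx;
  }
}
```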
SOLR-14095 introduced an issue for rolling restarts (incompatible Java serialization). This change fixes the compatibility issue while keeping the functionality in SOLR-14095.
The entire precommit task will still fail with an unsupported Java version
(subsequent checks do not support the newer javadocs format),
but this allows the ECJ linter to run, which checks for things such as
unused imports.
This exercises various places in the Streaming Expressions code that use background threads
to confirm that the expected credentials (or lack thereof) are propagated along.
Test currently has comments + workarounds for 2 known client issues:
- SOLR-14226: SolrStream reports AuthN/AuthZ failures (401|403) as IOException w/o details
- SOLR-14222: CloudSolrClient converts (update) 403 error to 500 error
Fuzzy queries with an edit distance of 1 or 2 must visit all blocks whose prefix
length is 1 or 2. By not compressing those, we can trade very little space (a
couple MBs in the case of the wikibigall index) for better query efficiency.