- Reduce overhead of non-concurrent search by preserving original execution
- Improve readability by factoring into separate functions
---------
Co-authored-by: Kaival Parikh <kaivalp2000@gmail.com>
This change introduces `MultiTermQuery#CONSTANT_SCORE_BLENDED_REWRITE`, a new rewrite method
meant to be used in place of `MultiTermQuery#CONSTANT_SCORE_REWRITE` as the default for multi-term
queries that act as a filter. Currently, multi-term queries with a filter rewrite internally rewrite to a
disjunction if 16 or fewer terms match the query. Otherwise, the postings lists of
matching terms are collected into a `DocIdSetBuilder`. This change replaces the
latter with a mixed approach: a disjunction is created over the 16 terms with the
highest document frequency, combined with an iterator produced from the
`DocIdSetBuilder` into which all other terms are collected. On fields with a
Zipfian distribution, it is quite likely that no high-frequency terms make it into
the `DocIdSetBuilder`. This provides two main benefits:
- Queries are less likely to allocate a FixedBitSet of size `maxDoc`.
- Queries are better at skipping or early terminating.
On the other hand, queries that need to consume most or all matching documents may see a slowdown, so
users can still opt in to the "full filter rewrite" behavior by overriding the rewrite method (see the sketch below).
`CONSTANT_SCORE_BLENDED_REWRITE` is now the default rewrite method for PrefixQuery, WildcardQuery and TermRangeQuery.
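For example, a query that is known to consume most of the index could opt back in along these lines (a hedged sketch; the field and term are made up, and the exact way to set the rewrite method may differ between versions):

```java
// Sketch: force the previous "full filter rewrite" for a prefix query that is
// expected to consume most matching documents.
PrefixQuery query = new PrefixQuery(new Term("path", "/var/log"));
query.setRewriteMethod(MultiTermQuery.CONSTANT_SCORE_REWRITE);
```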
Co-authored-by: Adrien Grand <jpountz@gmail.com>
Co-authored-by: Greg Miller <gsmiller@gmail.com>
The default implementation for merging doc values resolves a document's ordinal
in `nextDoc()`. However, doc-values iterators are sometimes consumed without
retrieving ordinals, e.g. to record the set of documents that have a value, so
eager resolution may be wasteful.
With this change, ordinals get resolved lazily upon `ordValue()`.
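Roughly, the merged iterator now follows this pattern (a simplified sketch; names and types are approximations of the merge internals, not the committed code):

```java
// Sketch: nextDoc() only advances the underlying sub-iterator; the merged
// ordinal is computed on demand in ordValue().
@Override
public int nextDoc() throws IOException {
  current = docIDMerger.next(); // no ordinal lookup here anymore
  if (current == null) {
    return docID = NO_MORE_DOCS;
  }
  return docID = current.mappedDocID;
}

@Override
public int ordValue() throws IOException {
  // resolved lazily, only if the consumer actually asks for the ordinal
  return (int) current.map.get(current.values.ordValue());
}
```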
`LogMergePolicy` has a boundary at the floor segment size that prevents merging
segments above the minimum segment size with segments below it. I cannot see a
benefit from doing this, and no tests fail if I remove it, while this boundary
has the downside of skipping merges that look legitimate to me. Should we remove
this boundary check?
Indexing simple keywords through a `TokenStream` abstraction introduces a bit
of overhead due to attribute management. Not much, but indexing keywords boils
down to adding to a hash map and appending to a postings list, which is itself
quite cheap, so even a small overhead can noticeably impact indexing speed.
The two minor performance improvements are around count and Weight#scorer.
segmentStarts holds, per leaf-segment ordinal, a monotonically increasing offset into the scored documents. Consequently, if a segment's upper and lower offsets are equal, no docs match in that segment.
The count is similarly calculated as the difference between the upper and lower segmentStarts entries for the segment ordinal.
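For example, the count shortcut boils down to (illustrative only, names taken from the description above):

```java
// segmentStarts[ord] is the offset of the first scored doc for leaf ord, so the
// number of matches in that leaf is the width of its range; 0 means "no docs".
int count(int leafOrd) {
  return segmentStarts[leafOrd + 1] - segmentStarts[leafOrd];
}
```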
Similar to the use of ScorerSupplier in #12129, implement it here too, because
creating a Scorer requires lookupTerm() operations in the doc-values terms
dictionary. That work results in wasted effort and random accesses if, based on cost(),
IndexOrDocValuesQuery decides not to use this query.
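A rough sketch of the pattern (simplified; buildScorer() stands in for the real scorer construction):

```java
@Override
public ScorerSupplier scorerSupplier(LeafReaderContext context) throws IOException {
  return new ScorerSupplier() {
    @Override
    public Scorer get(long leadCost) throws IOException {
      // lookupTerm() calls against the DV terms dictionary happen only here
      return buildScorer(context); // hypothetical helper
    }

    @Override
    public long cost() {
      return context.reader().maxDoc(); // cheap estimate, no term lookups
    }
  };
}
```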
The helper class DocAndScoreQuery implements advanceShallow to help skip
non-competitive documents. This method doesn't actually keep track of where it
has advanced, which means it can do extra work.
Overall the complexity here didn't seem worth it, given the low cost of
collecting matching kNN docs. This PR switches to a simpler approach, which uses
a fixed upper bound on the max score.
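With a fixed bound, the score-skipping support reduces to something like this (a simplified sketch, not the exact code):

```java
// A single precomputed upper bound over the collected kNN scores replaces the
// per-range bookkeeping that advanceShallow() tried to provide.
@Override
public float getMaxScore(int upTo) {
  return maxScore;
}
```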
This change adjusts the cache policy to ensure that all segments in the max
tier are cached. Before, we cached segments that held more than 3% of the
total documents in the index; now we cache segments that hold more than half
of the average number of documents per leaf of the index.
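In other words, the check is roughly (illustrative, not the exact code):

```java
// Cache a segment when it holds more than half of the average number of
// documents per leaf, so every segment in the max tier qualifies.
static boolean worthCaching(IndexReader reader, LeafReaderContext leaf) {
  double avgDocsPerLeaf = (double) reader.maxDoc() / reader.leaves().size();
  return leaf.reader().maxDoc() > avgDocsPerLeaf / 2;
}
```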
Closes #12140
The test assumes a single segment is created (because only one scorer is created from the leaf contexts).
However, force-merging wasn't done before getting the reader. Force-merge to a single segment before getting the reader.
In TestUnifiedHighlighterTermVec, we have a special reader which counts the
number of times term vectors are accessed, so that we can assert that caching
works correctly here. There is some special logic in place to skip the check
when the test framework wraps readers with CheckIndex or ParallelReader;
however, this logic no longer works with ParallelReader in particular, because
term vectors are now accessed through an anonymous class.
A simpler solution here is to call newSearcher(reader, false), which disables
wrapping, meaning that we can remove this extra logic entirely.
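i.e.:

```java
// Build the searcher without wrapping, so the term-vector-counting reader is
// the one the highlighter actually sees.
IndexSearcher searcher = newSearcher(reader, false);
```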
Fixes #12115
* Remove implicit addition of vector 0
Removes logic to add 0 vector implicitly. This is in preparation for
adding nodes from other graphs to initialize a new graph. Having the
implicit addition of node 0 complicates this logic.
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Enable out of order insertion of nodes in hnsw
Enables nodes to be added to OnHeapHnswGraph in an out-of-order fashion.
To do so, additional work is needed to re-sort the nodesByLevel array.
Optimizations have been made to avoid sorting whenever possible.
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Add ability to initialize from graph
Adds method to initialize an HNSWGraphBuilder from another HNSWGraph.
Initialization can only happen when the builder's graph is empty.
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Utilize merge with graph init in HNSWWriter
Uses HNSWGraphBuilder initialization from graph functionality in
Lucene95HnswVectorsWriter. Selects the largest graph to initialize the
new graph produced by the HNSWGraphBuilder for merge.
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Minor modifications to Lucene95HnswVectorsWriter
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Use TreeMap for graph structure for levels > 0
Refactors OnHeapHnswGraph to use TreeMap to represent graph structure of
levels greater than 0. Refactors NodesIterator to support set
representation of nodes.
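The resulting layout is roughly (a simplified sketch; field names are approximate):

```java
// Level 0 is dense (one neighbor list per node); levels > 0 are sparse, so a
// sorted map keyed by node id keeps nodes iterable in ascending order even
// when they are inserted out of order.
private final List<NeighborArray> graphLevel0 = new ArrayList<>();
private final List<TreeMap<Integer, NeighborArray>> graphUpperLevels = new ArrayList<>();
```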
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Refactor initializer to be in static create method
Refactors initialization from a graph to be accessible via a static create
method in HnswGraphBuilder.
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Address review comments
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Add change log entry
Signed-off-by: John Mazanec <jmazane@amazon.com>
* Remove empty iterator for NeighborQueue
Signed-off-by: John Mazanec <jmazane@amazon.com>
---------
Signed-off-by: John Mazanec <jmazane@amazon.com>
`KeywordField` is a combination of `StringField` and `SortedSetDocValuesField`,
similarly to how `LongField` is a combination of `LongPoint` and
`SortedNumericDocValuesField`. This makes it easier for users to create fields
that can be used for filtering, sorting and faceting.
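For example (a sketch; constructor details may differ slightly across versions):

```java
// One field that is both indexed as a keyword and recorded in doc values,
// usable for filtering, sorting and faceting.
Document doc = new Document();
doc.add(new KeywordField("brand", "lucene", Field.Store.NO));
```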
* Optimize the common case that docs only have single values for the field
* In the multivalued case, terminate reading docvalues if they are > maximum set ordinal
* Implement ScorerSupplier, so that (potentially large) number of ordinal lookups aren't performed just to get the cost()
* Graduate to Sorted(Set)DocValuesField.newSlowSetQuery to complement newSlowRangeQuery, newSlowExactQuery
Like the other slow queries in these classes, it is currently only recommended for use together with points, e.g. IndexOrDocValuesQuery(new PointInSetQuery, newSlowSetQuery)
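For instance (a sketch only; the term-based indexed side and the exact factory signatures are assumptions, not the committed API):

```java
// Pair an indexed set query with the slow doc-values set query so that
// IndexOrDocValuesQuery can pick the cheaper strategy per segment.
Query indexed = new TermInSetQuery("category", List.of(new BytesRef("book"), new BytesRef("dvd")));
Query docValues = SortedSetDocValuesField.newSlowSetQuery("category", new BytesRef("book"), new BytesRef("dvd"));
Query query = new IndexOrDocValuesQuery(indexed, docValues);
```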
LongHashSet is used for the set of numbers, but it has some issues:
* tries too hard to extend AbstractSet, mostly for testing
* causes traps with boxing if you aren't careful
* complex hashCode/equals
Practically, we should take advantage of the fact that numbers come back in
sorted order for multivalued fields, just like range queries do. So we use the
set's min/max to our advantage, including early termination of doc-values
iteration (see the sketch below).
Actually, it is generally a win to just check min/max even in the single-valued
case: these constant-time comparisons are cheap and can avoid hashing, etc.
In the worst case, if all of your query sets contain both the minimum and maximum
possible values, then it won't help, but it doesn't hurt either.
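The per-document check then looks roughly like this (a simplified sketch; the exact set type and its API are assumed):

```java
// Values of a multivalued field come back in ascending order, so iteration can
// stop as soon as a value exceeds the set's maximum.
private boolean matches(SortedNumericDocValues values, long min, long max, LongHashSet set)
    throws IOException {
  for (int i = 0, count = values.docValueCount(); i < count; i++) {
    long value = values.nextValue();
    if (value > max) {
      return false; // sorted: no later value can be in the set
    }
    if (value >= min && set.contains(value)) {
      return true;
    }
  }
  return false;
}
```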
There's no need to make things abstract: DocValues does the right thing.
Optimizing for the case where the segment has no docs for the field is easy: a simple null check (replacing the existing one!)
Currently, stored fields have to look at binaryValue(), stringValue() and
numericValue() to guess the type of the value and then store it. This has a few
issues:
- If there is a problem, e.g. all three of these methods return null, it is
currently discovered late, after the responsibility for writing data has already
passed from IndexingChain to the codec.
- numericValue() is used both for numeric doc values and for storage. This makes
it impossible to implement a `double` field that is both stored and doc-valued,
as numericValue() would simultaneously need to return the double for storage and
the long bits of the double for doc values.
- binaryValue() is used both for sorted(_set) doc values and for storage. This
makes it impossible to implement a `keyword` field that is both stored and
doc-valued, as the field returns a non-null value from both binaryValue() and
stringValue(), and stored fields no longer know which value to store.
This commit introduces `IndexableField#storedValue()`, which is used only for
stored fields. This addresses the above issues. IndexingChain passes the
storedValue() directly to the codec, so it's impossible for a stored fields
format to mistakenly use binaryValue()/stringValue()/numericValue() instead of
storedValue().
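On the consuming side, the flow becomes roughly the following (simplified; the consumer call is hypothetical):

```java
// IndexingChain can fail fast when a stored field provides no value, and hands
// the storedValue() to the codec without guessing the type.
if (field.fieldType().stored()) {
  StoredValue value = field.storedValue();
  if (value == null) {
    throw new IllegalArgumentException("Cannot store a null value on field \"" + field.name() + "\"");
  }
  storedFieldsConsumer.writeField(fieldInfo, value); // hypothetical consumer signature
}
```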
Sometimes the random Lucene test searcher will wrap the reader. Consequently, we need to make sure to use the reader provided by the test IndexSearcher, or the reader used to create the weight with the searcher may differ from the one whose leaf context is accessed for the scorer.
While FeatureQuery is a powerful tool in the scoring case, there are scenarios where caching should be allowed and scoring disabled.
A particular case is when FeatureQuery is used in conjunction with learned sparse retrieval: it is useful to iterate over and compute the entire matching doc set when it is combined with various other queries.
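One illustrative setup (field and feature names are made up):

```java
// Use the feature field purely as a filter clause: no scores are needed from it,
// so the clause can be cached and iterated like any other filter.
Query query = new BooleanQuery.Builder()
    .add(new TermQuery(new Term("body", "lucene")), BooleanClause.Occur.MUST)
    .add(FeatureField.newSaturationQuery("features", "learned_term"), BooleanClause.Occur.FILTER)
    .build();
```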
related to: https://github.com/apache/lucene/issues/11799
Two of the methods (squareDistance and dotProduct) that take byte arrays return a float while
the variable used to store the value is an int. They can just return an int.
A recent test failure signaled that when the simple text codec was randomly selected, byte vectors could not be written.
This commit addresses that by adding support for writing byte vectors to SimpleTextKnnVectorsWriter.
Note that while support is added to the BufferingKnnVectorsWriter base class, the Lucene90, Lucene91 and Lucene92 writers
don't need to support byte vectors and will throw an UnsupportedOperationException when attempting to do so.
Follow-up of #12105 to remove the deprecated classes for the next major version.
Removes KnnVectorField, KnnVectorQuery, VectorValues and LeafReader#getVectorValues.
When a field indexes numeric doc values, `MemoryIndex` does an unchecked cast
to `java.lang.Long`. However, the new `IntField` represents the value as a
`java.lang.Integer` so this cast fails. This commit aligns `MemoryIndex` with
`IndexingChain` by casting to `Number` and calling `Number#longValue` instead
of casting to `Long`.
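i.e., the cast becomes:

```java
// Before: long value = (Long) field.numericValue();   // ClassCastException for IntField's Integer
long value = ((Number) field.numericValue()).longValue(); // works for any boxed numeric type
```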