Used in the parallel() streaming expression. The hash algorithm is different.
* Simpler
* Don't use Filter (to be removed)
* Do use TwoPhaseIterator, not PostFilter
* Don't pre-compute matching docs (wasteful)
* Support more fields and more field types
* Faster hashing of Strings (avoids a char conversion)
* Stronger hash when using multiple fields (see the sketch below)
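A minimal sketch of the partitioning idea, with illustrative names (this is not the actual query parser code): per-field hashes are mixed rather than summed, which keeps the combined hash strong over multiple fields, and the bucket check is cheap enough to run per candidate document, as a TwoPhaseIterator match check would.

    import java.util.List;

    final class HashPartitionSketch {
      /** Combine per-field hashes; mixing (rather than summing) keeps the hash strong for multiple fields. */
      static int combinedHash(List<CharSequence> fieldValues) {
        int hash = 0;
        for (CharSequence value : fieldValues) {
          int h = 0;
          // Hash the character sequence directly, avoiding a per-value char conversion.
          for (int i = 0; i < value.length(); i++) {
            h = 31 * h + value.charAt(i);
          }
          hash = 31 * hash + h;
        }
        return hash;
      }

      /** Bucket rule: the doc belongs to this worker only if its hash lands in the worker's partition. */
      static boolean assignedToWorker(int hash, int workers, int worker) {
        return Math.floorMod(hash, workers) == worker;
      }
    }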
* Fix JSON Faceting on EnumFieldType when allBuckets, numBuckets, or missing is set.
* Enhance the hash method of JSON Faceting to support EnumFieldType and potentially other/custom field types (example below).
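As a hedged usage illustration (the field and collection names are made up, and a configured SolrClient is assumed), a JSON terms facet over an EnumFieldType field with allBuckets, numBuckets, and missing enabled can be issued like this via SolrJ:

    import java.io.IOException;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.ModifiableSolrParams;

    final class EnumFacetExample {
      /** "severity_enum" is a hypothetical EnumFieldType field in the schema. */
      static Object facetOnEnum(SolrClient client, String collection)
          throws SolrServerException, IOException {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("q", "*:*");
        params.set("rows", 0);
        params.set("json.facet",
            "{ severities: { type: terms, field: severity_enum,"
                + " allBuckets: true, numBuckets: true, missing: true } }");
        QueryResponse rsp = client.query(collection, params);
        // Facet results come back under the "facets" section of the response.
        return rsp.getResponse().get("facets");
      }
    }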
Co-authored-by: Thomas Wöckinger <two@silbergrau.com>
Co-authored-by: David Smiley <dsmiley@apache.org>
Partial (AKA Atomic) updates could encounter "LazyField" instances in the document
cache and not know how to deal with them when writing the updated doc to the update log.
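A hedged sketch of the kind of handling this implies (the helper is hypothetical; the actual fix may differ): force the stored value out of a possibly lazily loaded field before the partially updated document is written to the update log.

    import org.apache.lucene.index.IndexableField;
    import org.apache.lucene.util.BytesRef;

    final class LazyFieldSketch {
      /** Resolve a stored field (which may be a lazily loaded instance) to a concrete value. */
      static Object materialize(IndexableField field) {
        if (field.numericValue() != null) {
          return field.numericValue();
        }
        if (field.stringValue() != null) {
          return field.stringValue();
        }
        BytesRef bytes = field.binaryValue(); // lazy fields resolve their value on access
        return bytes != null ? BytesRef.deepCopyOf(bytes) : null;
      }
    }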
For now, the new Lucene90PostingsFormat is just a copy of the existing
Lucene84PostingsFormat, which was moved to backward-codecs along with its utility
classes.
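For illustration, and assuming both formats keep their conventional SPI names, either format can still be resolved by name, with the older one now served from the backward-codecs module:

    import org.apache.lucene.codecs.PostingsFormat;

    final class PostingsFormatLookup {
      static PostingsFormat currentFormat() {
        // Resolved from lucene-core after this change.
        return PostingsFormat.forName("Lucene90");
      }

      static PostingsFormat previousFormat() {
        // Resolved via SPI from the backward-codecs module, which must be on the classpath.
        return PostingsFormat.forName("Lucene84");
      }
    }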
* Updated SOLR-8138 files for Solr 9.
This code was mostly written by Michael Suzuki; I just tweaked it to load and updated ui-grid to version 4.10.
* Removed an unused file; we use the .min version.
* Added an entry for the ui-grid project to the license file.
Co-authored-by: epugh@opensourceconnections.com <>
The 'testBackupAndRestore' method in this class was asserting that the
collection created by restore had the expected number of cores-per-node,
but the logic to compute that expected cores-per-node value failed to
account for a rarely-triggered branch that adds a 'createNodeSet' param
to the restore.
This commit updates the test logic to correctly compute the expected
cores-per-node value when createNodeSet is passed.
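A sketch of the corrected arithmetic with hypothetical parameter names (the real test derives these values from the restore request): when a createNodeSet restricts the restore to a subset of nodes, the expectation must divide by the size of that subset rather than by the full cluster.

    final class ExpectedCoresPerNode {
      /** Hypothetical helper mirroring the test's expectation. */
      static int expectedCoresPerNode(
          int numShards, int replicationFactor, int liveNodeCount, String createNodeSet) {
        int targetNodes =
            (createNodeSet == null || createNodeSet.isEmpty())
                ? liveNodeCount
                : createNodeSet.split(",").length;
        int totalCores = numShards * replicationFactor;
        return (int) Math.ceil(totalCores / (double) targetNodes);
      }
    }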
The recent addition of support for a "readonly" mode for collections
opens the door to restoring to already-existing collections.
This commit adds a codepath to allow this. Any compatible existing
collection may be used for restoration, including the collection that
was the original source of the backup.
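A hedged usage sketch via SolrJ's generic request (the backup name, collection name, and location are made up): the restore is pointed at a collection that already exists rather than asking the API to create a new one.

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;

    final class RestoreIntoExistingCollection {
      static void restore(SolrClient client) throws Exception {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("action", "RESTORE");
        params.set("name", "nightly");            // backup name (illustrative)
        params.set("collection", "techproducts"); // existing, compatible collection (the readonly mode above is what makes this practical)
        params.set("location", "/backups");       // backup location (illustrative)
        new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params).process(client);
      }
    }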
Upgrade ICU from 62.2 to 68.2, with Unicode 13 support.
Modify GenerateUTR30DataFiles to take the release tag as a program
argument. Gradle populates this automatically, removing a manual step
from the regeneration process.
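A minimal sketch of the argument handling (the tag format shown is illustrative; Gradle supplies the real value):

    final class ReleaseTagArgSketch {
      public static void main(String[] args) {
        if (args.length != 1 || args[0].isEmpty()) {
          throw new IllegalArgumentException("Expected a single ICU release tag argument");
        }
        String releaseTag = args[0]; // e.g. "release-68-2" (illustrative)
        System.out.println("Generating UTR#30 data files for ICU tag " + releaseTag);
      }
    }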
* Relocate XSLT-related classes into the scripting contrib
* Relocate files to scripting and separate out unit tests
* Relocate files under test-files/scripting/solr, similar to how we do it in other contribs; this deals with some issues in finding files
* Reformatting using the Google Java Format...
* Use the actual param name, not the variable, to properly test the API!
* Clean up references to paths, and deal with the mishmash of Xslt and XSLT in class names.
* Move XSLT processing out of XMLLoader
* Move TransformerProvider.Dedupe getTransformer logic (see the sketch below).
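For the last item, a rough sketch of what de-duplicated getTransformer logic can look like, using plain JAXP with names of my own choosing (not the actual Solr class): each stylesheet is compiled once into a thread-safe Templates object, and cheap per-use Transformer instances are handed out from it.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.xml.transform.Templates;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerConfigurationException;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamSource;

    final class CachingTransformerSketch {
      private final Map<String, Templates> cache = new ConcurrentHashMap<>();

      /** Compile each stylesheet once; Templates is thread-safe, Transformer instances are not. */
      Transformer getTransformer(String stylesheetSystemId) throws TransformerConfigurationException {
        Templates templates = cache.computeIfAbsent(stylesheetSystemId, id -> {
          try {
            return TransformerFactory.newInstance().newTemplates(new StreamSource(id));
          } catch (TransformerConfigurationException e) {
            throw new IllegalStateException(e);
          }
        });
        return templates.newTransformer();
      }
    }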
Co-authored-by: epugh@opensourceconnections.com <>
Co-authored-by: David Smiley <dsmiley@apache.org>
SOLR-13608 introduces a new "incremental" backup format, which allows
storage of multiple backup "points" in the same location. This creates a need
for APIs to manage these potentially plural backups.
This commit adds /admin/collections?action=LISTBACKUPS and
/admin/collections?action=DELETEBACKUP to handle these backups.
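A hedged usage sketch (parameter values are illustrative, and only the basic name/location parameters are shown):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;

    final class BackupAdminSketch {
      static void listBackups(SolrClient client) throws Exception {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("action", "LISTBACKUPS");
        params.set("name", "nightly");      // backup name (illustrative)
        params.set("location", "/backups"); // backup location (illustrative)
        new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params).process(client);
      }

      static void deleteBackup(SolrClient client) throws Exception {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("action", "DELETEBACKUP");
        params.set("name", "nightly");
        params.set("location", "/backups");
        // The API may take further parameters to select specific backup points; omitted here.
        new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params).process(client);
      }
    }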