* Updated SOLR-8138 files for Solr 9.
This code was mostly written by Michael Suzuki; I just tweaked it so it loads, and updated ui-grid to version 4.10.
* Remove an unused file; we use the .min version.
* Add an entry for the ui-grid project to the license file.
Co-authored-by: epugh@opensourceconnections.com <>
The 'testBackupAndRestore' method in this class was asserting that the
collection created by restore had the expected number of cores-per-node,
but the logic to compute that expected cores-per-node value failed to
account for a rarely-triggered branch that adds a 'createNodeSet' param
to the restore.
This commit updates the test logic to correctly compute the expected
cores-per-node value when a createNodeSet param is passed.
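The gist of the corrected expectation, as a rough sketch (illustrative names only, not the actual test code):

```java
// Illustrative only: when a createNodeSet param restricts the restore to a
// subset of nodes, the expected cores-per-node must be computed against that
// subset rather than the whole cluster.
static int expectedCoresPerNode(int totalCores, int clusterSize, String createNodeSet) {
  int nodesAvailable = (createNodeSet == null)
      ? clusterSize
      : createNodeSet.split(",").length;
  return (int) Math.ceil((double) totalCores / nodesAvailable);
}
```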
The recent addition of support for a "readonly" mode for collections
opens the door to restoring to already-existing collections.
This commit adds a codepath to allow this. Any compatible existing
collection may be used for restoration, including the collection that
was the original source of the backup.
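A usage sketch with SolrJ (the collection name, backup name, and location below are hypothetical examples):

```java
// Restore a backup into a collection that already exists, e.g. the original
// source of the backup. Names and location here are hypothetical.
static void restoreIntoExistingCollection(SolrClient client) throws Exception {
  CollectionAdminRequest.Restore restore =
      CollectionAdminRequest.restoreCollection("techproducts", "techproducts_backup");
  restore.setLocation("/backups");  // where the backup was written
  restore.process(client);          // "techproducts" may already exist
}
```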
Upgrade from ICU 62.2 to 68.2, with Unicode 13 support.
Modify GenerateUTR30DataFiles to take the release tag as a program
argument. Gradle populates this automatically, removing a manual step
from the regeneration process.
* Relocate XSLT-related classes into the scripting contrib
* Relocating files to scripting and separating out unit tests
* Relocate files under test-files/scripting/solr, similar to how we do it in other contribs; this deals with some issues in finding files
* Reformatting using the Google Java Format...
* Use the actual param name, not the variable, to properly test the API!
* Clean up references to paths, and deal with the mishmash of Xslt and XSLT in class names.
* Move XSLT processing out of XMLLoader
* Move TransformerProvider; dedupe getTransformer logic.
Co-authored-by: epugh@opensourceconnections.com <>
Co-authored-by: David Smiley <dsmiley@apache.org>
SOLR-13608 introduces a new "incremental" backup format, which allows
storage of multiple backup "points" in the same location. This
creates a need for APIs to manage these multiple backup points.
This commit introduces /admin/collections?action=LISTBACKUPS and
/admin/collections?action=DELETEBACKUP to handle these backups.
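An invocation sketch using SolrJ's GenericSolrRequest (the name/location parameters shown are assumptions about the request shape, not taken verbatim from this commit):

```java
// Hypothetical example of listing the backups stored under a location.
static void listBackups(SolrClient client) throws Exception {
  ModifiableSolrParams params = new ModifiableSolrParams();
  params.set("action", "LISTBACKUPS");
  params.set("name", "techproducts_backup");  // backup name (assumed param)
  params.set("location", "/backups");         // backup location (assumed param)
  new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params)
      .process(client);
}
```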
In Solr 8.6.3, minCompetitiveScore of WANDScorer resets to zero for each index segment and remains zero until maxScore is updated.
There are two causes of this problem:
* MaxScoreCollector does not set the minCompetitiveScore of the MinCompetitiveScoreAwareScorable newly created for another index segment.
* MaxScoreCollector updates minCompetitiveScore only if maxScore is updated. This behavior is correct considering the purpose of MaxScoreCollector.
For details, see the attached PDF: https://issues.apache.org/jira/secure/attachment/13019548/wand.pdf
The initial incremental-backup commit introduced several test failures
on Windows test runs that I neglected to catch before committing. Most
of these failures were the result of bad 'location' path handling in the
test logic itself, though a few tweaks were also made to Solr code to
better handle Windows paths.
Solr supports two different ways to write v2 APIs: a JSON-spec-based
approach and one based on annotated POJOs. The POJO method is now
preferred.
This commit switches the /v2/collections APIs over to the
annotation-based approach. Since V2RequestSupport only works with
jsonspec-based APIs, this commit also changes CollectionAdminRequest
to no longer implement that interface.
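For context, the annotation-based style looks roughly like the sketch below; the path, permission, and method body are illustrative assumptions, not the actual code added here:

```java
// Illustrative sketch of an annotation-based v2 API using org.apache.solr.api's
// @EndPoint; details of the real collections endpoints differ.
public class ListCollectionsAPI {
  @EndPoint(method = SolrRequest.METHOD.GET,
            path = "/collections",
            permission = PermissionNameProvider.Name.COLL_READ_PERM)
  public void listCollections(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
    // ... delegate to the existing collections logic and write results to rsp ...
  }
}
```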
* Make all Tool option descriptions follow the same general pattern.
* Figure out a switch to determine whether the level is cluster or collection(s)
* Better wording on what cluster versus collection params mean
Co-authored-by: epugh@opensourceconnections.com <>
This commit introduces a new way for Solr to do backups (with a new
underlying file structure). This new "incremental" backup process
improves over the existing backup mechanism in several ways:
- multiple backup "points" can now be stored at a given backup
location/name, allowing users to choose which point in time they want
to restore
- subsequent backups skip uploading files that were already uploaded by
previous backups, saving time and network traffic.
- files are checksummed as they're uploaded, ensuring that corrupted
indices aren't persisted and accidentally restored later.
Incremental backups are now the default, and traditional backups
should now be considered 'deprecated' but can still be created by
passing an `incremental=false` parameter on backup requests.
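For example, a request can opt out of the new format by passing that parameter explicitly (a sketch using SolrJ's GenericSolrRequest; the backup name, collection, and location below are hypothetical):

```java
// Explicitly request the traditional (non-incremental) backup format.
static void traditionalBackup(SolrClient client) throws Exception {
  ModifiableSolrParams params = new ModifiableSolrParams();
  params.set("action", "BACKUP");
  params.set("name", "techproducts_backup");  // hypothetical backup name
  params.set("collection", "techproducts");   // hypothetical collection
  params.set("location", "/backups");         // hypothetical backup location
  params.set("incremental", "false");         // keep the traditional format
  new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params)
      .process(client);
}
```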