This has the same logic as the previous Python checker, but no longer relies
on parsing HTML output; instead it uses Java's doclet processor.
Errors are reported like "normal" javadoc errors, with source file
name and line number, and surface when running "gradlew javadoc".
Although the "rules" are the same as in the previous Python checker, that
checker had some bugs and didn't quite do exactly what we wanted, so
some fixes were applied throughout.
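For illustration, a minimal sketch of what a doclet-based missing-javadoc check can look like. The class and method names here are hypothetical, not the actual checker, and the real rules are more nuanced:

```java
import java.util.Locale;
import java.util.Set;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.tools.Diagnostic;
import jdk.javadoc.doclet.Doclet;
import jdk.javadoc.doclet.DocletEnvironment;
import jdk.javadoc.doclet.Reporter;

// Hypothetical minimal doclet: flags type declarations that have no javadoc comment.
public class MissingJavadocDocletSketch implements Doclet {
  private Reporter reporter;

  @Override
  public void init(Locale locale, Reporter reporter) {
    this.reporter = reporter;
  }

  @Override
  public String getName() {
    return "MissingJavadocDocletSketch";
  }

  @Override
  public Set<? extends Option> getSupportedOptions() {
    return Set.of();
  }

  @Override
  public SourceVersion getSupportedSourceVersion() {
    return SourceVersion.latest();
  }

  @Override
  public boolean run(DocletEnvironment env) {
    int errors = 0;
    for (Element e : env.getIncludedElements()) {
      // Only check classes/interfaces here; the real checker applies finer-grained rules.
      if (!e.getKind().isClass() && !e.getKind().isInterface()) {
        continue;
      }
      if (env.getDocTrees().getDocCommentTree(e) == null) {
        // Reporting against the Element makes javadoc print the source file and line number.
        reporter.print(Diagnostic.Kind.ERROR, e, "missing javadoc comment on " + e);
        errors++;
      }
    }
    return errors == 0; // a false return (or any ERROR diagnostic) fails "gradlew javadoc"
  }
}
```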
Co-authored-by: Dawid Weiss <dawid.weiss@carrotsearch.com>
Co-authored-by: Uwe Schindler <uschindler@apache.org>
* Remove DIH example directory
* Remove contrib code directories
* Remove contrib package related configurations for build tools
* Remove mention of DIH example
* Remove DIH build dependencies and no-longer-needed version pins
* Remove README references to DIH
* Remove DIH mention from the script that probably doesn't need to exist at all
* Remove more build artifact references
* Remove more leftovers of removed dependencies (licenses/versions)
* No need to smoke exclude DIH anymore
* Remove Admin UI's DIH integration
* Remove DIH from shortname package list
* Remove unused DIH (related? not?) dataset
Unclear what is happening here, but there is no reference to that directory anywhere else
The other parallel directories ARE referenced in TestConfigSetsAPI.java
* Hidden IDEA file references
* No DIH to ignore anymore
* Remove last Derby DB references
* Remove DIH from documentation
Add the information to the Major Changes document, with a link to the external repo
* Added/updated a mention in CHANGES
* Fix leftover library mentions
* Fix spelling
Prior to this commit, the wrapup logic at the end of
DocBuilder.execute() closed out a series of DIH objects, but did so
in a way that an exception closing any of them resulted in the remainder
staying open. This is especially problematic since Writer.close()
throws exceptions that DIH uses to determine the success/failure of the
run.
In practice this caused network errors sending DIH data to other Solr
nodes to result in leaked JDBC connections.
This commit changes DocBuilder's termination logic to handle exceptions
more gracefully, ensuring that errors closing a DIHWriter (for example)
don't prevent the closure of entity-processor and DataSource objects.
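As an illustration of the intended termination behavior (not the actual DocBuilder code), here is a sketch of the close-everything pattern, where every close() is attempted and the first failure is rethrown with later ones attached as suppressed exceptions:

```java
// Hypothetical helper illustrating the pattern: attempt every close(), even if
// an earlier one throws, then rethrow the first failure with the rest suppressed.
static void closeAll(AutoCloseable... resources) throws Exception {
  Exception first = null;
  for (AutoCloseable resource : resources) {
    if (resource == null) {
      continue;
    }
    try {
      resource.close();
    } catch (Exception e) {
      if (first == null) {
        first = e;
      } else {
        first.addSuppressed(e);
      }
    }
  }
  if (first != null) {
    throw first;
  }
}
```

With this shape, a failing DIHWriter.close() still lets the entity-processor and DataSource objects be closed, while the original exception is preserved for the success/failure decision.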
* SOLR-14588: Implement Circuit Breakers
This commit consists of two parts: the initial circuit-breaker infrastructure and a real JVM-memory-based
circuit breaker that monitors incoming search requests and rejects them with a SERVICE_TOO_BUSY error
if the defined threshold is breached, giving existing indexing and search requests headroom
to complete.
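A rough sketch of the idea behind a heap-based circuit breaker follows; the names and threshold handling are illustrative, not the actual Solr implementation:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Illustrative only: trip when current heap usage exceeds a configured fraction of max heap.
class HeapCircuitBreakerSketch {
  private final MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
  private final double thresholdFraction; // e.g. 0.95 of the max heap

  HeapCircuitBreakerSketch(double thresholdFraction) {
    this.thresholdFraction = thresholdFraction;
  }

  boolean isTripped() {
    MemoryUsage heap = memoryBean.getHeapMemoryUsage();
    return heap.getMax() > 0 && heap.getUsed() > thresholdFraction * heap.getMax();
  }
}
```

A request handler would consult isTripped() before executing a search and answer with a "service too busy" style error instead of running the query.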
SOLR-12823: remove clusterstate.json in Lucene/Solr 9.0 - fix TestZKPropertiesWriter
TestZKPropertiesWriter relied on legacy SolrCloud cluster features that have since been removed.
Start a MiniSolrCloudCluster (which implies a config set and other test resource configuration) and have the test use a core of a created collection.
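The test-setup shape described above could look roughly like this sketch; the config name, path, and collection details are illustrative, not the actual test code:

```java
import java.nio.file.Paths;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.cloud.SolrCloudTestCase;
import org.junit.BeforeClass;

// Illustrative sketch: start a one-node MiniSolrCloudCluster, upload a configset,
// create a collection, and let the test work against a core of that collection.
public class ZkPropertiesWriterSetupSketch extends SolrCloudTestCase {

  @BeforeClass
  public static void setupCluster() throws Exception {
    configureCluster(1)
        .addConfig("conf", Paths.get("src/test-files/solr/collection1/conf")) // illustrative path
        .configure();
    CollectionAdminRequest.createCollection("collection1", "conf", 1, 1)
        .process(cluster.getSolrClient());
  }
}
```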
Add support for BlockMax WAND via a minExactHits parameter. Hits will be counted accurately at least up to this value; above it, the count will be an approximation. In distributed search requests the count is per shard, so hits may be counted accurately up to numShards * minExactHits. The response includes the value numFoundExact, which can be true (the value in numFound is exact) or false (the value in numFound is an approximation).
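For context, a small sketch of the underlying Lucene mechanism this builds on (not the Solr parameter handling itself): a collector created with a totalHitsThreshold lets BlockMax WAND stop counting exactly once that many hits have been seen, and the TotalHits relation reports whether the count is exact.

```java
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.search.TotalHits;

// Illustrative helper (not Solr code): collect the top 10 hits, counting exactly
// only up to minExactHits; beyond that the total hit count is a lower bound.
public final class MinExactHitsSketch {
  public static TopDocs search(IndexSearcher searcher, Query query, int minExactHits) throws IOException {
    TopScoreDocCollector collector = TopScoreDocCollector.create(10, minExactHits);
    searcher.search(query, collector);
    TopDocs topDocs = collector.topDocs();
    // EQUAL_TO -> count is exact (numFoundExact=true);
    // GREATER_THAN_OR_EQUAL_TO -> count is an approximation (numFoundExact=false)
    boolean exact = topDocs.totalHits.relation == TotalHits.Relation.EQUAL_TO;
    System.out.println("hits=" + topDocs.totalHits.value + " exact=" + exact);
    return topDocs;
  }
}
```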
* Remove SRL.listConfigDir (unused)
* Remove SRL.getDataDir
* Remove SRL.getCoreName
* Remove SRL.getCoreProperties
XmlConfigFile needs the substitutableProperties passed in
IndexSchema needs the substitutableProperties passed in
Remove redundant Properties from CoreContainer constructors
* Remove SRL.newAdminHandlerInstance (unused)
* Remove SRL.openSchema and openConfig
* Avoid SRL.getConfigDir
Also harmonized similar initialization logic between DIH Tika processor & ExtractingRequestHandler.
* Ensure SRL.addToClassLoader and reloadLuceneSPI are called at most once
Don't auto-load "lib" in constructor; wrong place for this logic.
* Avoid SRL.getInstancePath
Added SolrCore.getInstancePath instead
Use CoreContainer.getSolrHome instead
NodeConfig should track solrHome separately from SolrResourceLoader
* Simplify some SolrCore constructors
* Move locateSolrHome to new SolrPaths
* Move "User Files" stuff to SolrPaths
Previous situation:
* The snowball base classes (Among, SnowballProgram, etc.) had accumulated local performance-related changes. There was a task that would also "patch" generated classes (e.g. GermanStemmer) after the fact.
* Snowball classes had many "non-changes" from the original, such as removal of tabs, addition of javadocs, license headers, etc.
* Snowball test data (inputs and expected stems) was incorporated into Lucene testing, but this was maintained manually. Also, the files had become large, making the test too slow (Nightly).
* Snowball stopword lists from their website were manually maintained. In some cases encoding fixes were manually applied.
* Some generated stemmers (such as Estonian and Armenian) exist in Lucene, but have no corresponding `.sbl` file in snowball sources at all.
Besides this mess, the snowball project is "moving along" and acquiring new languages, adding non-BSD-licensed test data, huge test data, and other complexity. So it is time to automate the integration better.
New situation:
* Lucene has a `gradle snowball` regeneration task. It works on Linux or Mac only. It checks out their repos, applies the `snowball.patch` in our repository, compiles snowball stemmers, regenerates all java code, applies any adjustments so that our build is happy.
* Test data is automatically regenerated from the commit hash of the snowball test data repository. Not all languages are tested from their data: only where the license is simple BSD. Test data is also (deterministically) sampled, so that we don't have huge files; see the sampling sketch after this list. We just want to make sure our integration works.
* Randomized tests are still set to test every language with generated fake words. The regeneration task ensures all languages get tested (it writes a simple text file list of them).
* Stopword files are automatically regenerated from the commit hash of the snowball website repository.
* The regeneration procedure is idempotent. This way when stuff does change, you know exactly what happened. For example if test data changes to a different license, you may see a git deletion. Or if a new language/stopwords/test data gets added, you will see git additions.
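The deterministic sampling mentioned above could look roughly like the following; this is a hypothetical illustration, not the actual gradle task code, showing how a stable, hash-based subset of input lines yields the same sample on every regeneration.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: select roughly 1-in-N lines based on a hash of the line itself.
// String.hashCode() is specified by the JDK, so the same input always yields the same sample.
public final class DeterministicSampleSketch {
  public static List<String> sample(Path input, int keepOneInN) throws IOException {
    return Files.readAllLines(input, StandardCharsets.UTF_8).stream()
        .filter(line -> Math.floorMod(line.hashCode(), keepOneInN) == 0)
        .collect(Collectors.toList());
  }
}
```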