* Remove distrib=false from the /terms handler so that terms are returned across all shards instead of from a single local shard (see the SolrJ sketch after this list)
* Clean up shards parameter handling in TermsComponent; this is handled in HttpShardHandler
* Remove redundant tests for shard whitelist
* Remove redundant terms params from ScoreNodeStream
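For illustration, here is a minimal SolrJ sketch of a distributed /terms request after this change; the ZooKeeper address, collection, and field names are placeholders, not part of the commit.

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.client.solrj.response.TermsResponse;

    public class DistributedTermsExample {
      public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
          client.setDefaultCollection("techproducts"); // placeholder collection

          SolrQuery query = new SolrQuery();
          query.setRequestHandler("/terms"); // implicit handler, terms=true in its defaults
          query.setTerms(true);
          query.addTermsField("name");       // placeholder field
          query.setTermsLimit(10);

          // With distrib=false removed, the response aggregates terms from all shards
          // rather than only the shard that happened to receive the request.
          QueryResponse rsp = client.query(query);
          TermsResponse terms = rsp.getTermsResponse();
          for (TermsResponse.Term t : terms.getTerms("name")) {
            System.out.println(t.getTerm() + " -> " + t.getFrequency());
          }
        }
      }
    }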
Regression from 8.6
Multipart POST would fail due to a NoClassDefFoundError for Jetty's MultiPart class. Solr cannot access many Jetty classes, which is not noticeable in our tests.
...StringIndexOutOfBoundsException on bad syntax
* failOnMissingParams: should have been returning null (failing) on bad syntax cases
Co-authored-by: Christine Poerschke <cpoerschke@apache.org>
When the new overwrite parameter is set to true, Solr will overwrite an existing configset in ZooKeeper during an UPLOAD.
A new cleanup parameter can also be passed to let Solr know what to do with the files that existed in the old configset but no longer exist in the new configset (remove or keep).
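A rough sketch of such an UPLOAD call from plain Java, assuming the parameters are named overwrite and cleanup; the host, configset name, and zip path are placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class ConfigSetOverwriteUpload {
      public static void main(String[] args) throws Exception {
        // overwrite=true replaces the existing configset of the same name;
        // cleanup=true removes files that existed in the old configset but
        // are absent from the new one (false keeps them).
        URI uploadUri = URI.create("http://localhost:8983/solr/admin/configs"
            + "?action=UPLOAD&name=myconfig&overwrite=true&cleanup=true");

        HttpRequest request = HttpRequest.newBuilder(uploadUri)
            .header("Content-Type", "application/octet-stream")
            .POST(HttpRequest.BodyPublishers.ofFile(Path.of("myconfig.zip")))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
      }
    }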
* Replace Auth plugin with mocks
* Remove unused password param
* Start cluster only once
* Use SolrCloudTestCase
* Use MiniSolrCloudCluster's methods to remove collections and configsets
AbstractSpatialPrefixTreeFieldType (and its children) create index
fields based on a prototype with options frozen in PrefixTreeStrategy,
regardless of options specified in the schema. This works fine most of
the time, but causes problems when QParsers or other query optimization
logic makes decisions based on these options (which are potentially out
of sync with the underlying index data). Most commonly this causes
issues with "exists" (e.g. [* TO *]) queries.
This commit enforces fieldType defaults that line up with the 'hardcoded'
FieldType used by PrefixTreeStrategy. Options on either the fieldType
or the field itself which contradict these defaults will result in
exceptions at schema load/modification time.
* Updated implicit definition with terms=true, distrib=false
* Commented out terms handler with notice, as this is the config used in tests
* Remove spurious mentions cluttering other test configs
* Remove implicit terms=true param
* Remove definitions from shipped configsets
* Improve documentation
* Add CHANGES record
Allow using placement plugins to compute replica placement on the cluster for Collection API calls.
This is the first code drop for the replacement of the Autoscaling feature.
Javadoc of sample plugin org.apache.solr.cluster.placement.plugins.SamplePluginAffinityReplicaPlacement details how to enable this replica placement strategy.
PRs #1684 and then #1845
Prior to this commit, the docValuesTermsFilterTopLevel method of the
{!terms} query parser would return zero results when run against a
single-valued string field. This commit fixes this by wrapping the
single-valued 'SortedDocValues' in a 'SortedSetDocValues' object.
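A hedged SolrJ sketch of the kind of request this affects; the collection and field names are placeholders, with manu_id_s standing in for any single-valued string field.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class TopLevelTermsFilterExample {
      public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
            new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
          SolrQuery query = new SolrQuery("*:*");
          // Top-level docValues execution path of the {!terms} parser; before the fix
          // this returned zero results against single-valued string fields.
          query.addFilterQuery(
              "{!terms f=manu_id_s method=docValuesTermsFilterTopLevel}apple,belkin");
          QueryResponse rsp = client.query(query);
          System.out.println("matches: " + rsp.getResults().getNumFound());
        }
      }
    }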
This has the same logic as the previous Python script, but no longer relies
upon parsing HTML output, instead using Java's doclet processor.
The errors are reported like "normal" javadoc errors with source file
name and line number, and happen when running "gradlew javadoc".
Although the "rules" are the same as in the previous Python script, that script had
some bugs where the checker didn't quite do exactly what we wanted, so
some fixes were applied throughout.
Co-authored-by: Dawid Weiss <dawid.weiss@carrotsearch.com>
Co-authored-by: Uwe Schindler <uschindler@apache.org>
Pass the CloudConfig instance representing the solrcloud section of the solr.xml configuration from the Overseer to the Collection and Config Set API commands it executes.
* Remove DIH example directory
* Remove contrib code directories
* Remove contrib package related configurations for build tools
* Remove mention of DIH example
* remove dih as build dependencies and no-longer needed version pins
* Remove README references to DIH
* Remove dih mention from the script that probably does need to exist at all
* More build artifact references
* More removed dependencies leftovers (licenses/versions)
* No need to smoke exclude DIH anymore
* Remove Admin UI's DIH integration
* Remove DIH from shortname package list
* Remove unused DIH (related? not?) dataset
Unclear what is happening here, but there is no reference to that directory anywhere else
The other parallel directories ARE referenced in TestConfigSetsAPI.java
* Hidden IDEA files references
* No DIH to ignore anymore
* Remove last Derby DB references
* Remove DIH from documentation
Add the information to the Major Changes document with a link to the external repo
* Added/updated a mention in CHANGES
* Fix leftover library mentions
* Fix spelling
This commit does two things:
* Allow users to plug in different implementations of the handler (they must extend HealthCheckHandler); a sketch follows below
* Remove the HealthCheckHandler from the implicit SolrCore plugins
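A rough sketch of what a custom implementation might look like, assuming HealthCheckHandler lives in org.apache.solr.handler.admin and exposes a CoreContainer constructor; the class name and the extra probe are hypothetical.

    import org.apache.solr.core.CoreContainer;
    import org.apache.solr.handler.admin.HealthCheckHandler;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.response.SolrQueryResponse;

    // Hypothetical handler layering an extra dependency probe on top of the default checks.
    public class DependencyAwareHealthCheckHandler extends HealthCheckHandler {

      public DependencyAwareHealthCheckHandler(CoreContainer coreContainer) {
        super(coreContainer);
      }

      @Override
      public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
        super.handleRequestBody(req, rsp); // run the standard node-level checks first
        if (!downstreamIsReachable()) {
          rsp.add("status", "unhealthy");
          rsp.add("details", "downstream dependency unreachable"); // placeholder diagnostics
        }
      }

      private boolean downstreamIsReachable() {
        return true; // replace with a real probe
      }
    }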
Add IndexWriter merge-on-refresh feature to selectively merge
small segments on getReader, subject to a configurable timeout,
to improve search performance by reducing the number of small
segments for searching.
Co-authored-by: Mike McCandless <mikemccand@apache.org>
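A hedged Lucene sketch of how an application might opt in via IndexWriterConfig; the setter name comes from the related merge-on-commit work, and the effect also depends on the merge policy selecting merges in findFullFlushMerges.

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    public class MergeOnRefreshExample {
      public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();
        IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
        // Allow up to 500 ms for merges selected at refresh/commit time to finish
        // before the new reader is returned; 0 disables the wait.
        iwc.setMaxFullFlushMergeWaitMillis(500);

        try (IndexWriter writer = new IndexWriter(dir, iwc)) {
          Document doc = new Document();
          doc.add(new StringField("id", "1", Field.Store.YES));
          writer.addDocument(doc);

          // Opening an NRT reader (getReader under the hood) is where merge-on-refresh
          // can kick in, subject to the configured wait and the merge policy's choices.
          try (DirectoryReader reader = DirectoryReader.open(writer)) {
            System.out.println("docs visible: " + reader.numDocs());
          }
        }
      }
    }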
This commit introduces a CPU-based circuit breaker. This circuit breaker
tracks the average CPU load per minute and triggers if the value exceeds
a configurable threshold.
This commit also adds a specific control flag for the Memory Circuit Breaker
to allow enabling/disabling it.
This commit introduces two functionalities: request rate limiting and the ability to identify requests by type (indexing, search, admin). The default rate limiter limits query requests based on configurable parameters which can be set in web.xml. Note that this rate limiting works at the JVM level, not at the core/collection level.
This issue occurs only while fetching an uncommitted doc through /get.
Instead of directly calling stringValue() on IndexableField, use
FieldType's toExternal() or toObject() to get the writable value for the field.
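A small illustrative helper (not part of the commit) showing the intended pattern; the class name and the null-schema fallback are assumptions.

    import org.apache.lucene.index.IndexableField;
    import org.apache.solr.schema.FieldType;
    import org.apache.solr.schema.IndexSchema;
    import org.apache.solr.schema.SchemaField;

    public final class ReadableFieldValue {
      private ReadableFieldValue() {}

      /** Convert an uncommitted document's field into the value that should be written out. */
      public static Object readableValue(IndexSchema schema, IndexableField field) {
        SchemaField schemaField = schema.getFieldOrNull(field.name());
        if (schemaField == null) {
          return field.stringValue(); // best effort for fields not covered by the schema
        }
        FieldType type = schemaField.getType();
        // toObject() returns the typed external value (e.g. a Float for a float field);
        // calling field.stringValue() directly can leak the internal/indexed representation.
        return type.toObject(field);
      }
    }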
* SolrRrdBackendFactory should not be created if history is disabled
* Disable MetricsHistoryHandler by default in tests
* Await shutdown of all executors
(minor refactoring)
Also:
* SolrCore's constructors don't need a "name" since it's guaranteed to always be the name in the coreDescriptor. I checked.
* SolrCore's constructor shouldn't call coreContainer.solrCores.addCoreDescriptor(cd); because it's the container's responsibility to manage such things. I made SolrCores.putCore ensure the descriptor is added, and this is called by CoreContainer.registerCore which is called after new SolrCore instances are created.
* solrCore.setName should only be called when we expect the name to change. Furthermore, that shouldn't ever happen in SolrCloud, so I added checks.
* solrCore.setName calls coreMetricManager.afterCoreSetName(), which is really only related to a rename, not name initialization (from the constructor). I renamed that method and now only call it if the name actually changed from a non-null value.
This adds a lot of "under the covers" improvements to how JSON Faceting FacetField processors work, to enable
"sweeping" support when the SlotAcc used for sorting supports it (currently just "relatedness()")
This is a squash commit of all changes on https://github.com/magibney/lucene-solr/tree/SOLR-13132
Up to and including ca7a8e0b39840d00af9022c048346a7d84bf280d.
Co-authored-by: Chris Hostetter <hossman@apache.org>
Co-authored-by: Michael Gibney <michael@michaelgibney.net>
* Add an example explaining how to use
* Fix up JavaDoc formatting
* Respond to feedback from @janhoy
Co-authored-by: ohtwadi <harinder.hanjan@gmail.com>
* SOLR-14588: Implement Circuit Breakers
This commit consists of two parts: adding circuit breaker infrastructure and a "real" JVM heap memory-based
circuit breaker which monitors incoming search requests and rejects them with a SERVICE_TOO_BUSY error
if the defined threshold is breached, thus giving headroom to existing indexing and search requests
to complete.