SOLR-10871: remove backticks for monospace type in headings

This commit is contained in:
Cassandra Targett 2017-06-12 12:30:51 -05:00
parent 5a737a3aab
commit 0411504dd8
41 changed files with 199 additions and 202 deletions

View File

@ -46,7 +46,7 @@ The table below summarizes Solr's common query parameters, which are supported b
The following sections describe these parameters in detail.
[[CommonQueryParameters-ThedefTypeParameter]]
== The `defType` Parameter
== The defType Parameter
The `defType` parameter selects the query parser that Solr should use to process the main query parameter (`q`) in the request. For example:
@ -55,7 +55,7 @@ The defType parameter selects the query parser that Solr should use to process t
If no `defType` parameter is specified, then by default the <<the-standard-query-parser.adoc#the-standard-query-parser,Standard Query Parser>> is used (e.g., `defType=lucene`).
[[CommonQueryParameters-ThesortParameter]]
== The `sort` Parameter
== The sort Parameter
The `sort` parameter arranges search results in either ascending (`asc`) or descending (`desc`) order. The parameter can be used with either numerical or alphabetical content. The directions can be entered in either all lowercase or all uppercase letters (i.e., either `asc` or `ASC`).
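For example, assuming hypothetical `inStock` and `price` fields, results could be ordered by stock status first, with ascending price as the tie-breaker:
`sort=inStock desc, price asc`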
@ -88,7 +88,7 @@ Regarding the sort parameter's arguments:
** When more than one sort criterion is provided, the second entry will only be used if the first entry results in a tie. If there is a third entry, it will only be used if the first AND second entries are tied. This pattern continues with further entries.
[[CommonQueryParameters-ThestartParameter]]
== The `start` Parameter
== The start Parameter
The `start` parameter specifies an offset into a query's result set and instructs Solr to begin displaying results from that offset.
@ -99,14 +99,14 @@ Setting the `start` parameter to some other number, such as 3, causes Solr to sk
You can use the `start` parameter this way for paging. For example, if the `rows` parameter is set to 10, you could display three successive pages of results by setting `start` to 0, then re-issuing the same query and setting `start` to 10, then issuing the query again and setting `start` to 20.
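A sketch of those three page requests (the collection name is illustrative), differing only in the `start` value:
----
http://localhost:8983/solr/techproducts/select?q=*:*&rows=10&start=0
http://localhost:8983/solr/techproducts/select?q=*:*&rows=10&start=10
http://localhost:8983/solr/techproducts/select?q=*:*&rows=10&start=20
----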
[[CommonQueryParameters-TherowsParameter]]
== The `rows` Parameter
== The rows Parameter
You can use the `rows` parameter to paginate results from a query. The parameter specifies the maximum number of documents from the complete result set that Solr should return to the client at one time.
The default value is 10. That is, by default, Solr returns 10 documents at a time in response to a query.
[[CommonQueryParameters-Thefq_FilterQuery_Parameter]]
== The `fq` (Filter Query) Parameter
== The fq (Filter Query) Parameter
The `fq` parameter defines a query that can be used to restrict the superset of documents that can be returned, without influencing score. It can be very useful for speeding up complex queries, since the queries specified with `fq` are cached independently of the main query. When a later query uses the same filter, there's a cache hit, and filter results are returned quickly from the cache.
@ -132,7 +132,7 @@ fq=+popularity:[10 TO *] +section:0
* As with all parameters, special characters in a URL need to be properly escaped and encoded as hex values. Online tools are available to help you with URL-encoding. For example: http://meyerweb.com/eric/tools/dencoder/.
[[CommonQueryParameters-Thefl_FieldList_Parameter]]
== The `fl` (Field List) Parameter
== The fl (Field List) Parameter
The `fl` parameter limits the information included in a query response to a specified list of fields. The fields need to either be `stored="true"` or `docValues="true"`.
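For example, to limit the response to three hypothetical fields:
`fl=id,name,price`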
@ -204,7 +204,7 @@ fl=id,sales_price:price,secret_sauce:prod(price,popularity),why_score:[explain s
----
[[CommonQueryParameters-ThedebugParameter]]
== The `debug` Parameter
== The debug Parameter
The `debug` parameter can be specified multiple times and supports the following arguments:
@ -219,7 +219,7 @@ For backwards compatibility with older versions of Solr, `debugQuery=true` may i
The default behavior is not to include debugging information.
[[CommonQueryParameters-TheexplainOtherParameter]]
== The `explainOther` Parameter
== The explainOther Parameter
The `explainOther` parameter specifies a Lucene query in order to identify a set of documents. If this parameter is included and is set to a non-blank value, the query will return debugging information, along with the "explain info" of each document that matches the Lucene query, relative to the main query (which is specified by the `q` parameter). For example:
@ -233,7 +233,7 @@ The query above allows you to examine the scoring explain info of the top matchi
The default value of this parameter is blank, which causes no extra "explain info" to be returned.
[[CommonQueryParameters-ThetimeAllowedParameter]]
== The `timeAllowed` Parameter
== The timeAllowed Parameter
This parameter specifies the amount of time, in milliseconds, allowed for a search to complete. If this time expires before the search is complete, any partial results will be returned, but values such as `numFound`, <<faceting.adoc#faceting,facet>> counts, and result <<the-stats-component.adoc#the-stats-component,stats>> may not be accurate for the entire result set.
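For example, to abandon a search that runs longer than one second and return whatever partial results have accumulated (the value is illustrative):
`q=*:*&timeAllowed=1000`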
@ -245,7 +245,7 @@ This value is only checked at the time of:
As this check is performed periodically, the actual time for which a request can be processed before it is aborted would be marginally greater than or equal to the value of `timeAllowed`. If the request consumes more time in other stages, e.g., custom components, this parameter is not expected to abort the request.
[[CommonQueryParameters-ThesegmentTerminateEarlyParameter]]
== The `segmentTerminateEarly` Parameter
== The segmentTerminateEarly Parameter
This parameter may be set to either true or false.
@ -258,19 +258,19 @@ Similar to using <<CommonQueryParameters-ThetimeAllowedParameter,the `timeAllowe
The default value of this parameter is false.
[[CommonQueryParameters-TheomitHeaderParameter]]
== The `omitHeader` Parameter
== The omitHeader Parameter
This parameter may be set to either true or false.
If set to true, this parameter excludes the header from the returned results. The header contains information about the request, such as the time it took to complete. The default value for this parameter is false.
[[CommonQueryParameters-ThewtParameter]]
== The `wt` Parameter
== The wt Parameter
The `wt` parameter selects the Response Writer that Solr should use to format the query's response. For detailed descriptions of Response Writers, see <<response-writers.adoc#response-writers,Response Writers>>.
[[CommonQueryParameters-Thecache_falseParameter]]
== The `cache=false` Parameter
== The cache=false Parameter
Solr caches the results of all queries and filter queries by default. To disable result caching, set the `cache=false` parameter.
@ -296,7 +296,7 @@ fq={!frange l=10 u=100 cache=false cost=100}mul(popularity,price)
----
[[CommonQueryParameters-ThelogParamsListParameter]]
== The `logParamsList` Parameter
== The logParamsList Parameter
By default, Solr logs all parameters of requests. Set this parameter to restrict which parameters of a request are logged. This may help control logging to only those parameters considered important to your organization.
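For example, a setting like the following would restrict logging to only the `q` and `fq` parameters of each request:
`logParamsList=q,fq`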
@ -314,7 +314,7 @@ This parameter does not only apply to query requests, but to any kind of request
====
[[CommonQueryParameters-TheechoParamsParameter]]
== The `echoParams` Parameter
== The echoParams Parameter
The `echoParams` parameter controls what information about request parameters is included in the response header.
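For example, a sketch that echoes every parameter that applied to the request, including those supplied by request handler defaults, back in the response header:
`echoParams=all`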

View File

@ -212,7 +212,7 @@ For more information about user-defined properties, see the section <<configurin
See also the section <<ConfigAPI-CreatingandUpdatingUser-DefinedProperties,Creating and Updating User-Defined Properties>> below for examples of how to use this type of command.
[[ConfigAPI-HowtoMapsolrconfig.xmlPropertiestoJSON]]
== How to Map `solrconfig.xml` Properties to JSON
== How to Map solrconfig.xml Properties to JSON
By using this API, you will be generating JSON representations of properties defined in `solrconfig.xml`. To understand how properties should be represented with the API, let's take a look at a few examples.
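As a first sketch of the pattern (the property chosen here is illustrative), nested XML elements flatten into dot-separated JSON property names, so `<updateHandler><autoCommit><maxTime>` becomes `updateHandler.autoCommit.maxTime`:
----
curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json' -d '{
  "set-property": {"updateHandler.autoCommit.maxTime": 15000}
}'
----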

View File

@ -121,7 +121,7 @@ The path and name of the `solrcore.properties` file can be overridden using the
====
[[Configuringsolrconfig.xml-Userdefinedpropertiesfromcore.properties]]
=== User-Defined Properties in `core.properties`
=== User-Defined Properties in core.properties
Every Solr core has a `core.properties` file, automatically created when using the APIs. When you create a SolrCloud collection, you can pass custom parameters through to each `core.properties` file that will be created by prefixing the parameter name with "property." as a URL parameter. Example:

View File

@ -289,25 +289,25 @@ Either `path` or `targetCore` parameter must be specified but not both. The rang
The `core` index will be split into as many pieces as the number of `path` or `targetCore` parameters.
==== Usage with two `targetCore` parameters:
==== Usage with two targetCore parameters:
`\http://localhost:8983/solr/admin/cores?action=SPLIT&core=core0&targetCore=core1&targetCore=core2`
Here the `core` index will be split into two pieces and merged into the two `targetCore` indexes.
==== Usage with two `path` parameters:
==== Usage with two path parameters:
`\http://localhost:8983/solr/admin/cores?action=SPLIT&core=core0&path=/path/to/index/1&path=/path/to/index/2`
The `core` index will be split into two pieces and written into the two directory paths specified.
==== Usage with the `split.key` parameter:
==== Usage with the split.key parameter:
`\http://localhost:8983/solr/admin/cores?action=SPLIT&core=core0&targetCore=core1&split.key=A!`
Here, all documents having the same route key as the `split.key` (i.e., 'A!') will be split from the `core` index and written to the `targetCore`.
==== Usage with `ranges` parameter:
==== Usage with ranges parameter:
`\http://localhost:8983/solr/admin/cores?action=SPLIT&core=core0&targetCore=core1&targetCore=core2&targetCore=core3&ranges=0-1f4,1f5-3e8,3e9-5dc`

View File

@ -20,7 +20,7 @@
Where and how Solr stores its indexes are configurable options.
== Specifying a Location for Index Data with the `dataDir` Parameter
== Specifying a Location for Index Data with the dataDir Parameter
By default, Solr stores its index data in a directory called `/data` under the core's instance directory (`instanceDir`). If you would like to specify a different directory for storing index data, you can configure the `dataDir` in the `core.properties` file for the core, or use the `<dataDir>` parameter in the `solrconfig.xml` file. You can specify another directory either with an absolute path or a pathname relative to the `instanceDir` of the SolrCore. For example:

View File

@ -47,7 +47,7 @@ Of course the `signatureField` could be the unique field, but generally you want
There are two places in Solr to configure de-duplication: in `solrconfig.xml` and in `schema.xml`.
[[De-Duplication-Insolrconfig.xml]]
=== In `solrconfig.xml`
=== In solrconfig.xml
The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` as part of an <<update-request-processors.adoc#update-request-processors,Update Request Processor Chain>>, as in this example:
@ -87,7 +87,7 @@ A Signature implementation for generating a signature hash. The full classpath o
|===
[[De-Duplication-Inschema.xml]]
=== In `schema.xml`
=== In schema.xml
If you are using a separate field for storing the signature, you must have it indexed:

View File

@ -67,7 +67,7 @@ Here is an example of a minimal LangDetect `langid` configuration in `solrconfig
----
[[DetectingLanguagesDuringIndexing-langidParameters]]
== `langid` Parameters
== langid Parameters
As previously mentioned, both implementations of the `langid` UpdateRequestProcessor take the same parameters.

View File

@ -34,7 +34,7 @@ When not using SolrCloud, it is up to you to get all your documents indexed on e
In the legacy distributed mode, Solr does not calculate universal term/doc frequencies. For most large-scale implementations, it is not likely to matter that Solr calculates TF/IDF at the shard level. However, if your collection is heavily skewed in its distribution across servers, you may find misleading relevancy results in your searches. In general, it is probably best to randomly distribute documents to your shards.
[[DistributedSearchwithIndexSharding-ExecutingDistributedSearcheswiththeshardsParameter]]
== Executing Distributed Searches with the `shards` Parameter
== Executing Distributed Searches with the shards Parameter
If a query request includes the `shards` parameter, the Solr server distributes the request across all the shards listed as arguments to the parameter. The `shards` parameter uses this syntax:

View File

@ -253,7 +253,7 @@ curl -E solr-ssl.keystore.p12:secret --cacert solr-ssl.cacert.pem ...
NOTE: If your operating system does not include cURL, you can download binaries here: http://curl.haxx.se/download.html
=== Create a SolrCloud Collection using `bin/solr`
=== Create a SolrCloud Collection using bin/solr
Create a 2-shard, `replicationFactor=1` collection named `mycollection` using the default configset (`data_driven_schema_configs`):
@ -318,7 +318,7 @@ You should get a response that looks like this:
----
[[EnablingSSL-Indexdocumentsusingpost.jar]]
=== Index Documents using `post.jar`
=== Index Documents using post.jar
Use `post.jar` to index some example documents to the SolrCloud collection created above:
@ -340,7 +340,7 @@ curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/m
----
[[EnablingSSL-IndexadocumentusingCloudSolrClient]]
=== Index a document using `CloudSolrClient`
=== Index a document using CloudSolrClient
From a Java client using SolrJ, index a document. In the code below, the `javax.net.ssl.*` system properties are set programmatically, but you could instead specify them on the java command line, as in the `post.jar` example above:

View File

@ -31,7 +31,7 @@ The cases where this functionality may be useful include: session analysis, dist
All the fields being sorted and exported must have `docValues` set to true. For more information, see the section on <<docvalues.adoc#docvalues,DocValues>>.
[[ExportingResultSets-The_exportRequestHandler]]
== The `/export` RequestHandler
== The /export RequestHandler
The `/export` request handler with the appropriate configuration is one of Solr's out-of-the-box request handlers - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> for more information.

View File

@ -29,12 +29,12 @@ Searchers are presented with the indexed terms, along with numerical counts of h
There are two general parameters for controlling faceting.
[[Faceting-ThefacetParameter]]
=== The `facet` Parameter
=== The facet Parameter
If set to *true*, this parameter enables facet counts in the query response. If set to *false*, or given a blank or missing value, this parameter disables faceting. None of the other parameters listed below will have any effect unless this parameter is set to *true*. The default value is blank (false).
[[Faceting-Thefacet.queryParameter]]
=== The `facet.query` Parameter
=== The facet.query Parameter
This parameter allows you to specify an arbitrary query in the Lucene default syntax to generate a facet count.
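For example, assuming a hypothetical `price` field, a count could be generated for one arbitrary price band:
`facet=true&facet.query=price:[0 TO 100]`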
@ -83,7 +83,7 @@ The table below summarizes Solr's field value faceting parameters.
These parameters are described in the sections below.
[[Faceting-Thefacet.fieldParameter]]
=== The `facet.field` Parameter
=== The facet.field Parameter
The `facet.field` parameter identifies a field that should be treated as a facet. It iterates over each Term in the field and generates a facet count using that Term as the constraint. This parameter can be specified multiple times in a query to select multiple facet fields.
@ -93,28 +93,28 @@ If you do not set this parameter to at least one field in the schema, none of th
====
[[Faceting-Thefacet.prefixParameter]]
=== The `facet.prefix` Parameter
=== The facet.prefix Parameter
The `facet.prefix` parameter limits the terms on which to facet to those starting with the given string prefix. This does not limit the query in any way, only the facets that would be returned in response to the query.
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.prefix`.
[[Faceting-Thefacet.containsParameter]]
=== The `facet.contains` Parameter
=== The facet.contains Parameter
The `facet.contains` parameter limits the terms on which to facet to those containing the given substring. This does not limit the query in any way, only the facets that would be returned in response to the query.
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.contains`.
[[Faceting-Thefacet.contains.ignoreCaseParameter]]
=== The `facet.contains.ignoreCase` Parameter
=== The facet.contains.ignoreCase Parameter
If `facet.contains` is used, the `facet.contains.ignoreCase` parameter causes case to be ignored when matching the given substring against candidate facet terms.
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.contains.ignoreCase`.
[[Faceting-Thefacet.sortParameter]]
=== The `facet.sort` Parameter
=== The facet.sort Parameter
This parameter determines the ordering of the facet field constraints.
@ -128,7 +128,7 @@ The default is `count` if `facet.limit` is greater than 0, otherwise, the defaul
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.sort`.
[[Faceting-Thefacet.limitParameter]]
=== The `facet.limit` Parameter
=== The facet.limit Parameter
This parameter specifies the maximum number of constraint counts (essentially, the number of facets for a field that are returned) that should be returned for the facet fields. A negative value means that Solr will return an unlimited number of constraint counts.
@ -137,7 +137,7 @@ The default value is 100.
This parameter can be specified on a per-field basis to apply a distinct limit to each field with the syntax of `f.<fieldname>.facet.limit`.
[[Faceting-Thefacet.offsetParameter]]
=== The `facet.offset` Parameter
=== The facet.offset Parameter
The `facet.offset` parameter indicates an offset into the list of constraints to allow paging.
@ -146,7 +146,7 @@ The default value is 0.
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.offset`.
[[Faceting-Thefacet.mincountParameter]]
=== The `facet.mincount` Parameter
=== The facet.mincount Parameter
The `facet.mincount` parameter specifies the minimum counts required for a facet field to be included in the response. If a field's counts are below the minimum, the field's facet is not returned.
@ -155,7 +155,7 @@ The default value is 0.
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.mincount`.
[[Faceting-Thefacet.missingParameter]]
=== The `facet.missing` Parameter
=== The facet.missing Parameter
If set to true, this parameter indicates that, in addition to the Term-based constraints of a facet field, a count of all results that match the query but which have no facet value for the field should be computed and returned in the response.
@ -164,7 +164,7 @@ The default value is false.
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.missing`.
[[Faceting-Thefacet.methodParameter]]
=== The `facet.method` Parameter
=== The facet.method Parameter
The `facet.method` parameter selects the type of algorithm or method Solr should use when faceting a field.
@ -189,7 +189,7 @@ The default value is `fc` (except for fields using the `BoolField` field type an
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.method`.
[[Faceting-Thefacet.enum.cache.minDfParameter]]
=== The `facet.enum.cache.minDf` Parameter
=== The facet.enum.cache.minDf Parameter
This parameter indicates the minimum document frequency (the number of documents matching a term) for which the filterCache should be used when determining the constraint count for that term. This is only used with the `facet.method=enum` method of faceting.
@ -200,14 +200,14 @@ The default value is 0, causing the filterCache to be used for all terms in the
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.enum.cache.minDf`.
[[Faceting-Thefacet.existsParameter]]
=== The `facet.exists` Parameter
=== The facet.exists Parameter
To cap facet counts at 1, specify `facet.exists=true`. It can be used with `facet.method=enum` or when that parameter is omitted. It can be used only on non-trie fields (such as strings). It may speed up facet counting on large indices and/or high-cardinality facet values.
This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.exists`, or via a local parameter: `facet.field={!facet.method=enum facet.exists=true}size`.
[[Faceting-Thefacet.excludeTermsParameter]]
=== The `facet.excludeTerms` Parameter
=== The facet.excludeTerms Parameter
If you want to remove terms from facet counts but keep them in the index, the `facet.excludeTerms` parameter allows you to do that.
@ -219,7 +219,7 @@ In some situations, the accuracy in selecting the "top" constraints returned for
In some situations, depending on how your docs are partitioned across your shards, and what `facet.limit` value you used, you may find it advantageous to increase or decrease the amount of over-requesting Solr does. This can be achieved by setting the `facet.overrequest.count` (defaults to 10) and `facet.overrequest.ratio` (defaults to 1.5) parameters.
[[Faceting-Thefacet.threadsParameter]]
=== The `facet.threads` Parameter
=== The facet.threads Parameter
This parameter causes the underlying fields used in faceting to be loaded in parallel, using the number of threads specified. Specify as `facet.threads=N`, where `N` is the maximum number of threads used. Omitting this parameter or specifying the thread count as 0 will not spawn any threads; only the main request thread will be used. Specifying a negative number of threads will create up to `Integer.MAX_VALUE` threads.
@ -244,7 +244,7 @@ You can use Range Faceting on any date field or any numeric field that supports
|===
[[Faceting-Thefacet.rangeParameter]]
=== The `facet.range` Parameter
=== The facet.range Parameter
The `facet.range` parameter defines the field for which Solr should create range facets. For example:
@ -253,7 +253,7 @@ The `facet.range` parameter defines the field for which Solr should create range
`facet.range=lastModified_dt`
[[Faceting-Thefacet.range.startParameter]]
=== The `facet.range.start` Parameter
=== The facet.range.start Parameter
The `facet.range.start` parameter specifies the lower bound of the ranges. You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.start`. For example:
@ -262,7 +262,7 @@ The `facet.range.start` parameter specifies the lower bound of the ranges. You c
`f.lastModified_dt.facet.range.start=NOW/DAY-30DAYS`
[[Faceting-Thefacet.range.endParameter]]
=== The `facet.range.end` Parameter
=== The facet.range.end Parameter
The `facet.range.end` parameter specifies the upper bound of the ranges. You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.end`. For example:
@ -271,7 +271,7 @@ The facet.range.end specifies the upper bound of the ranges. You can specify thi
`f.lastModified_dt.facet.range.end=NOW/DAY+30DAYS`
[[Faceting-Thefacet.range.gapParameter]]
=== The `facet.range.gap` Parameter
=== The facet.range.gap Parameter
The span of each range expressed as a value to be added to the lower bound. For date fields, this should be expressed using the {solr-javadocs}/solr-core/org/apache/solr/util/DateMathParser.html[`DateMathParser` syntax] (such as `facet.range.gap=%2B1DAY`, the URL-encoded form of `+1DAY`). You can specify this parameter on a per-field basis with the syntax of `f.<fieldname>.facet.range.gap`. For example:
@ -280,7 +280,7 @@ The span of each range expressed as a value to be added to the lower bound. For
`f.lastModified_dt.facet.range.gap=+1DAY`
[[Faceting-Thefacet.range.hardendParameter]]
=== The `facet.range.hardend` Parameter
=== The facet.range.hardend Parameter
The `facet.range.hardend` parameter is a Boolean parameter that specifies how Solr should handle cases where the `facet.range.gap` does not divide evenly between `facet.range.start` and `facet.range.end`.
@ -289,7 +289,7 @@ If *true*, the last range constraint will have the `facet.range.end` value as an
This parameter can be specified on a per field basis with the syntax `f.<fieldname>.facet.range.hardend`.
[[Faceting-Thefacet.range.includeParameter]]
=== The `facet.range.include` Parameter
=== The facet.range.include Parameter
By default, the ranges used to compute range faceting between `facet.range.start` and `facet.range.end` are inclusive of their lower bounds and exclusive of the upper bounds. The "before" range defined with the `facet.range.other` parameter is exclusive and the "after" range is inclusive. This default, equivalent to "lower" below, will not result in double counting at the boundaries. You can use the `facet.range.include` parameter to modify this behavior using the following options:
@ -313,7 +313,7 @@ To ensure you avoid double-counting, do not choose both `lower` and `upper`, do
====
[[Faceting-Thefacet.range.otherParameter]]
=== The `facet.range.other` Parameter
=== The facet.range.other Parameter
The `facet.range.other` parameter specifies that in addition to the counts for each range constraint between `facet.range.start` and `facet.range.end`, counts should also be computed for these options:
@ -332,7 +332,7 @@ The `facet.range.other` parameter specifies that in addition to the counts for e
This parameter can be specified on a per field basis with the syntax of `f.<fieldname>.facet.range.other`. In addition to the `all` option, this parameter can be specified multiple times to indicate multiple choices, but `none` will override all other options.
[[Faceting-Thefacet.range.methodParameter]]
=== The `facet.range.method` Parameter
=== The facet.range.method Parameter
The `facet.range.method` parameter selects the type of algorithm or method Solr should use for range faceting. Both methods produce the same results, but performance may vary.
@ -343,7 +343,7 @@ dv:: This method iterates the documents that match the main query, and for each
The default value for this parameter is `filter`.
[[Faceting-Thefacet.mincountParameterinRangeFaceting]]
=== The `facet.mincount` Parameter in Range Faceting
=== The facet.mincount Parameter in Range Faceting
The `facet.mincount` parameter, the same one used in field faceting, is also applied to range faceting. When used, no ranges with a count below the minimum will be included in the response.
@ -653,14 +653,14 @@ If you are concerned about the performance of your searches you should test with
This method will use <<docvalues.adoc#docvalues,docValues>> if they are enabled for the field, will use fieldCache otherwise.
[[Faceting-Thefacet.intervalparameter]]
=== The `facet.interval` parameter
=== The facet.interval parameter
This parameter indicates the field on which interval faceting must be applied. It can be used multiple times in the same request to indicate multiple fields.
`facet.interval=price&facet.interval=size`
[[Faceting-Thefacet.interval.setparameter]]
=== The `facet.interval.set` parameter
=== The facet.interval.set parameter
This parameter is used to set the intervals for the field; it can be specified multiple times to indicate multiple intervals. This parameter is global, which means that it will be used for all fields indicated with `facet.interval` unless there is an override for a specific field. To override this parameter on a specific field you can use: `f.<fieldname>.facet.interval.set`, for example:

View File

@ -28,7 +28,7 @@ A field type definition can include four types of information:
* Field type properties - depending on the implementation class, some properties may be mandatory.
[[FieldTypeDefinitionsandProperties-FieldTypeDefinitionsinschema.xml]]
== Field Type Definitions in `schema.xml`
== Field Type Definitions in schema.xml
Field types are defined in `schema.xml`. Each field type is defined between `fieldType` elements. They can optionally be grouped within a `types` element. Here is an example of a field type definition for a type called `text_general`:

View File

@ -251,7 +251,7 @@ The FastVector Highlighter will occasionally truncate highlighted words. To prev
Solr supports two boundary scanners: `breakIterator` and `simple`.
[[Highlighting-ThebreakIteratorBoundaryScanner]]
==== The `breakIterator` Boundary Scanner
==== The breakIterator Boundary Scanner
The `breakIterator` boundary scanner offers excellent performance right out of the box by taking locale and boundary type into account. In most cases you will want to use the `breakIterator` boundary scanner. To implement the `breakIterator` boundary scanner, add this code to the `highlighting` section of your `solrconfig.xml` file, adjusting the type, language, and country values as appropriate to your application:
@ -269,7 +269,7 @@ The `breakIterator` boundary scanner offers excellent performance right out of t
Possible values for the `hl.bs.type` parameter are WORD, LINE, SENTENCE, and CHARACTER.
[[Highlighting-ThesimpleBoundaryScanner]]
==== The `simple` Boundary Scanner
==== The simple Boundary Scanner
The `simple` boundary scanner scans term boundaries for a specified maximum character value (`hl.bs.maxScan`) and for common delimiters such as punctuation marks (`hl.bs.chars`). The `simple` boundary scanner may be useful for some custom applications. To implement the `simple` boundary scanner, add this code to the `highlighting` section of your `solrconfig.xml` file, adjusting the values as appropriate to your application:

View File

@ -111,7 +111,7 @@ The example below shows a possible 'master' configuration for the `ReplicationHa
----
[[IndexReplication-Replicatingsolrconfig.xml]]
==== Replicating `solrconfig.xml`
==== Replicating solrconfig.xml
In the configuration file on the master server, include a line like the following:

View File

@ -33,7 +33,7 @@ By default, the settings are commented out in the sample `solrconfig.xml` includ
== Writing New Segments
[[IndexConfiginSolrConfig-ramBufferSizeMB]]
=== `ramBufferSizeMB`
=== ramBufferSizeMB
Once accumulated document updates exceed this much memory space (defined in megabytes), the pending updates are flushed. This can also create new segments or trigger a merge. Using this setting is generally preferable to `maxBufferedDocs`. If both `maxBufferedDocs` and `ramBufferSizeMB` are set in `solrconfig.xml`, then a flush will occur when either limit is reached. The default is 100MB.
@ -43,7 +43,7 @@ Once accumulated document updates exceed this much memory space (defined in mega
----
[[IndexConfiginSolrConfig-maxBufferedDocs]]
=== `maxBufferedDocs`
=== maxBufferedDocs
Sets the number of document updates to buffer in memory before they are flushed as a new segment. This may also trigger a merge. The default Solr configuration flushes by RAM usage (`ramBufferSizeMB`).
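A sketch of setting it explicitly in `solrconfig.xml` (the value is illustrative):
----
<maxBufferedDocs>1000</maxBufferedDocs>
----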
@ -66,7 +66,7 @@ Controls whether newly written (and not yet merged) index segments should use th
== Merging Index Segments
[[IndexConfiginSolrConfig-mergePolicyFactory]]
=== `mergePolicyFactory`
=== mergePolicyFactory
Defines how merging segments is done.
@ -118,7 +118,7 @@ If the configuration options for the built-in merge policies do not fully suit y
The example above shows Solr's {solr-javadocs}/solr-core/org/apache/solr/index/SortingMergePolicyFactory.html[`SortingMergePolicyFactory`] being configured to sort documents in merged segments by `"timestamp desc"`, and wrapped around a `TieredMergePolicyFactory` configured to use the values `maxMergeAtOnce=10` and `segmentsPerTier=10` via the `inner` prefix defined by `SortingMergePolicyFactory`'s `wrapped.prefix` option. For more information on using `SortingMergePolicyFactory`, see <<common-query-parameters.adoc#CommonQueryParameters-ThesegmentTerminateEarlyParameter,the segmentTerminateEarly parameter>>.
[[IndexConfiginSolrConfig-mergeScheduler]]
=== `mergeScheduler`
=== mergeScheduler
The merge scheduler controls how merges are performed. The default `ConcurrentMergeScheduler` performs merges in the background using separate threads. The alternative, `SerialMergeScheduler`, does not perform merges with separate threads.
@ -128,7 +128,7 @@ The merge scheduler controls how merges are performed. The default `ConcurrentMe
----
[[IndexConfiginSolrConfig-mergedSegmentWarmer]]
=== `mergedSegmentWarmer`
=== mergedSegmentWarmer
When using Solr for <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, a merged segment warmer can be configured to warm the reader on the newly merged segment, before the merge commits. This is not required for near real-time search, but will reduce search latency on opening a new near real-time reader after a merge completes.
@ -159,7 +159,7 @@ Many <<IndexConfiginSolrConfig-MergingIndexSegments,Merge Policy>> implementatio
== Index Locks
[[IndexConfiginSolrConfig-lockType]]
=== `lockType`
=== lockType
The LockFactory options specify the locking implementation to use.
@ -178,7 +178,7 @@ For more information on the nuances of each LockFactory, see http://wiki.apache.
----
[[IndexConfiginSolrConfig-writeLockTimeout]]
=== `writeLockTimeout`
=== writeLockTimeout
The maximum time to wait for a write lock on an IndexWriter. The default is 1000, expressed in milliseconds.

View File

@ -44,7 +44,7 @@ For more information on indexing in Solr, see the https://wiki.apache.org/solr/F
When starting Solr with the "-e" option, the `example/` directory will be used as the base directory for the example Solr instances that are created. This directory also includes an `example/exampledocs/` subdirectory containing sample documents in a variety of formats that you can use to experiment with indexing into the various examples.
[[IntroductiontoSolrIndexing-ThecurlUtilityforTransferringFiles]]
== The `curl` Utility for Transferring Files
== The curl Utility for Transferring Files
Many of the instructions and examples in this section make use of the `curl` utility for transferring content through a URL. `curl` posts and retrieves data over HTTP, FTP, and many other protocols. Most Linux distributions include a copy of `curl`. You'll find `curl` downloads for Linux, Windows, and many other operating systems at http://curl.haxx.se/download.html. Documentation for `curl` is available here: http://curl.haxx.se/docs/manpage.html.

View File

@ -64,7 +64,7 @@ is equivalent to:
`fq={!type=lucene df=summary}solr rocks`
== Specifying the Parameter Value with the `v` Key
== Specifying the Parameter Value with the v Key
A special key of `v` within local parameters is an alternate way to specify the value of that parameter.

View File

@ -84,6 +84,6 @@ Regardless of whether it is explicitly declared, or used as an implicit global d
DELETESHARD and DELETEREPLICA now default to deleting the instance directory, data directory, and index directory for any replica they delete. Please review the <<collections-api.adoc#collections-api,Collection API>> documentation for details on new request parameters to prevent this behavior if you wish to keep all data on disk when using these commands.
== `facet.date.*` Parameters Removed
== facet.date.* Parameters Removed
The `facet.date` parameter (and associated `facet.date.*` parameters), deprecated in Solr 3.x, has been removed completely. If you have not yet switched to using the equivalent <<faceting.adoc#faceting,`facet.range`>> functionality, you must do so now before upgrading.

View File

@ -28,7 +28,7 @@ To merge indexes, they must meet these requirements:
Optimally, the two indexes should be built using the same schema.
[[MergingIndexes-UsingIndexMergeTool]]
== Using `IndexMergeTool`
== Using IndexMergeTool
To merge the indexes, do the following:

View File

@ -31,7 +31,7 @@ However, pay special attention to cache and autowarm settings as they can have a
A commit operation makes index changes visible to new search requests. A *hard commit* uses the transaction log to get the id of the latest document changes, and also calls `fsync` on the index files to ensure they have been flushed to stable storage and no data loss will result from a power failure. The current transaction log is closed and a new one is opened. See the "transaction log" discussion below for data loss issues.
A *soft commit* is much faster since it only makes index changes visible and does not `fsync` index files, or write a new index descriptor or start a new transaction log. Search collections that have NRT requirements (that want index changes to be quickly visible to searches) will want to soft commit often but hard commit less frequently. A softCommit may be "less expensive", but it is not free, since it can slow throughput. See the "transaction log" discussion below for data loss issues.
An *optimize* is like a *hard commit* except that it forces all of the index segments to be merged into a single segment first. Depending on the use, this operation should be performed infrequently (e.g., nightly), if at all, since it involves reading and re-writing the entire index. Segments are normally merged over time anyway (as determined by the merge policy), and optimize just forces these merges to occur immediately.
@ -52,7 +52,7 @@ Use `maxDocs` and `maxTime` judiciously to fine-tune your commit strategies.
=== Transaction Logs (tlogs)
Transaction logs are a "rolling window" of at least the last `N` (default 100) documents indexed. Tlogs are configured in `solrconfig.xml`, including the value of `N`. The current transaction log is closed and a new one opened each time any variety of hard commit occurs. Soft commits have no effect on the transaction log.
When tlogs are enabled, documents being added to the index are written to the tlog before the indexing call returns to the client. In the event of an un-graceful shutdown (power loss, JVM crash, `kill -9`, etc.), any documents written to the tlog that was open when Solr stopped are replayed on startup.
When Solr is shut down gracefully (i.e. using the `bin/solr stop` command and the like) Solr will close the tlog file and index segments so no replay will be necessary on startup.
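A minimal sketch of enabling the transaction log in `solrconfig.xml`, assuming the conventional `solr.ulog.dir` property for its location:
----
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>
----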
@ -76,7 +76,7 @@ For example:
It's better to use `maxTime` rather than `maxDocs` to modify an `autoSoftCommit`, especially when indexing a large number of documents through the commit operation. It's also better to turn off `autoSoftCommit` for bulk indexing.
[[NearRealTimeSearching-OptionalAttributesforcommitandoptimize]]
=== Optional Attributes for `commit` and `optimize`
=== Optional Attributes for commit and optimize
// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
@ -100,7 +100,7 @@ Example of `commit` and `optimize` with optional attributes:
----
[[NearRealTimeSearching-PassingcommitandcommitWithinparametersaspartoftheURL]]
=== Passing `commit` and `commitWithin` Parameters as Part of the URL
=== Passing commit and commitWithin Parameters as Part of the URL
Update handlers can also get `commit`-related parameters as part of the update URL. This example adds a small test document and causes an explicit commit to happen immediately afterwards:
@ -133,7 +133,7 @@ curl http://localhost:8983/solr/my_collection/update?commitWithin=10000
----
[[NearRealTimeSearching-ChangingdefaultcommitWithinBehavior]]
=== Changing default `commitWithin` Behavior
=== Changing default commitWithin Behavior
The `commitWithin` settings allow forcing document commits to happen in a defined time period. This is used most frequently with <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, and for that reason the default is to perform a soft commit. This does not, however, replicate new documents to slave servers in a master/slave environment. If that's a requirement for your implementation, you can force a hard commit by adding a parameter, as in this example:

View File

@ -180,7 +180,7 @@ SELECT fieldA as fa, fieldB as fb, fieldC as fc FROM tableA WHERE fieldC = 'term
We've covered many syntax options with this example, so let's walk through what's possible below.
=== `WHERE` Clause and Boolean Predicates
=== WHERE Clause and Boolean Predicates
[IMPORTANT]
====
@ -226,7 +226,7 @@ To specify NOT queries, you use the `AND NOT` syntax as follows:
WHERE (fieldA = 'term1') AND NOT (fieldB = 'term2')
----
==== Supported `WHERE` Operators
==== Supported WHERE Operators
The parallel SQL interface supports and pushes down most common SQL operators, specifically:
@ -247,7 +247,7 @@ Some operators that are not supported are BETWEEN, LIKE and IN. However, there a
* BETWEEN can be supported with a range query, such as `field = [50 TO 100]`.
* A simplistic LIKE can be used with a wildcard, such as `field = 'sam*'`.
=== `ORDER BY` Clause
=== ORDER BY Clause
The `ORDER BY` clause maps directly to Solr fields. Multiple `ORDER BY` fields and directions are supported.
@ -257,7 +257,7 @@ If the `ORDER BY` clause contains the exact fields in the `GROUP BY` clause, the
`ORDER BY` fields are case-sensitive.
=== `LIMIT` Clause
=== LIMIT Clause
Limits the result set to the specified size. In the example above, the clause `LIMIT 100` will limit the result set to 100 records.
@ -267,7 +267,7 @@ There are a few differences to note between limited and unlimited queries:
* Limited queries allow any stored field in the field list. Unlimited queries require the fields to be stored as a DocValues field.
* Limited queries allow any indexed field in the `ORDER BY` list. Unlimited queries require the fields to be stored as a DocValues field.
=== `SELECT DISTINCT` Queries
=== SELECT DISTINCT Queries
The SQL interface supports both MapReduce and Facet implementations for `SELECT DISTINCT` queries.
@ -293,13 +293,13 @@ Because these functions never require data to be shuffled, the aggregations are
SELECT count(*) as count, sum(fieldB) as sum FROM tableA WHERE fieldC = 'Hello'
----
=== `GROUP BY` Aggregations
=== GROUP BY Aggregations
The SQL interface also supports `GROUP BY` aggregate queries.
As with `SELECT DISTINCT` queries, the SQL interface supports both a MapReduce implementation and a Facet implementation. The MapReduce implementation can build aggregations over extremely high cardinality fields. The Facet implementation provides high-performance aggregation over fields with moderate levels of cardinality.
==== Basic `GROUP BY` with Aggregates
==== Basic GROUP BY with Aggregates
Here is a basic example of a GROUP BY query that requests aggregations:
@ -327,7 +327,7 @@ The non-function fields in the field list determine the fields to calculate the
The `GROUP BY` clause can contain up to 4 fields in the Solr index. These fields should correspond with the non-function fields in the field list.
=== `HAVING` Clause
=== HAVING Clause
The `HAVING` clause may contain any function listed in the field list. Complex `HAVING` clauses such as this are supported:

View File

@ -55,7 +55,7 @@ FastLRUCache and LFUCache support `showItems` attribute. This is the number of c
Details of each cache are described below.
[[QuerySettingsinSolrConfig-filterCache]]
=== `filterCache`
=== filterCache
This cache is used by `SolrIndexSearcher` for filters (DocSets), unordered sets of all documents that match a query. The numeric attributes control the number of entries in the cache.
@ -72,7 +72,7 @@ Solr also uses this cache for faceting when the configuration parameter `facet.m
----
[[QuerySettingsinSolrConfig-queryResultCache]]
=== `queryResultCache`
=== queryResultCache
This cache holds the results of previous searches: ordered lists of document IDs (DocList) based on a query, a sort, and the range of documents requested.
@ -88,7 +88,7 @@ The `queryResultCache` has an additional (optional) setting to limit the maximum
----
[[QuerySettingsinSolrConfig-documentCache]]
=== `documentCache`
=== documentCache
This cache holds Lucene Document objects (the stored fields for each document). Since Lucene internal document IDs are transient, this cache is not auto-warmed. The size for the `documentCache` should always be greater than `max_results` times the `max_concurrent_queries`, to ensure that Solr does not need to refetch a document during a request. The more fields you store in your documents, the higher the memory usage of this cache will be.
@ -120,7 +120,7 @@ If you want auto-warming of your cache, include a `regenerator` attribute with t
== Query Sizing and Warming
[[QuerySettingsinSolrConfig-maxBooleanClauses]]
=== `maxBooleanClauses`
=== maxBooleanClauses
This sets the maximum number of clauses allowed in a boolean query. This can affect range or prefix queries that expand to a query with a large number of boolean terms. If this limit is exceeded, an exception is thrown.
@ -135,7 +135,7 @@ This option modifies a global property that affects all Solr cores. If multiple
====
[[QuerySettingsinSolrConfig-enableLazyFieldLoading]]
=== `enableLazyFieldLoading`
=== enableLazyFieldLoading
If this parameter is set to true, then fields that are not directly requested will be loaded lazily as needed. This can boost performance if the most common queries only need a small subset of fields, especially if infrequently accessed fields are large in size.
@ -145,7 +145,7 @@ If this parameter is set to true, then fields that are not directly requested wi
----
[[QuerySettingsinSolrConfig-useFilterForSortedQuery]]
=== `useFilterForSortedQuery`
=== useFilterForSortedQuery
This parameter configures Solr to use a filter to satisfy a search. If the requested sort does not include "score", the `filterCache` will be checked for a filter matching the query. For most situations, this is only useful if the same search is requested often with different sort options and none of them ever use "score".
@ -155,7 +155,7 @@ This parameter configures Solr to use a filter to satisfy a search. If the reque
----
[[QuerySettingsinSolrConfig-queryResultWindowSize]]
=== `queryResultWindowSize`
=== queryResultWindowSize
Used with the `queryResultCache`, this will cache a superset of the requested number of document IDs. For example, if a search in response to a particular query requests documents 10 through 19, and `queryResultWindowSize` is 50, documents 0 through 49 will be cached.
@ -165,7 +165,7 @@ Used with the `queryResultCache`, this will cache a superset of the requested nu
----
[[QuerySettingsinSolrConfig-queryResultMaxDocsCached]]
=== `queryResultMaxDocsCached`
=== queryResultMaxDocsCached
This parameter sets the maximum number of documents to cache for any entry in the `queryResultCache`.
@ -175,7 +175,7 @@ This parameter sets the maximum number of documents to cache for any entry in th
----
[[QuerySettingsinSolrConfig-useColdSearcher]]
=== `useColdSearcher`
=== useColdSearcher
This setting controls whether search requests for which there is not a currently registered searcher should wait for a new searcher to warm up (false) or proceed immediately (true). When set to "false", requests will block until the searcher has warmed its caches.
@ -185,7 +185,7 @@ This setting controls whether search requests for which there is not a currently
----
[[QuerySettingsinSolrConfig-maxWarmingSearchers]]
=== `maxWarmingSearchers`
=== maxWarmingSearchers
This parameter sets the maximum number of searchers that may be warming up in the background at any given time. Exceeding this limit will raise an error. For read-only slaves, a value of two is reasonable. Masters should probably be set a little higher.
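A sketch of the setting in `solrconfig.xml` (the value is illustrative):
----
<maxWarmingSearchers>2</maxWarmingSearchers>
----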
@ -227,7 +227,7 @@ The (commented out) examples below can be found in the `solrconfig.xml` file of
====
The above code comes from a _sample_ `solrconfig.xml`.
A key best practice is to modify these defaults before taking your application to production, but please note: while the sample queries are commented out in the section for the "newSearcher", the sample query is not commented out for the "firstSearcher" event.
There is no point in auto-warming your Index Searcher with the query string "static firstSearcher warming in solrconfig.xml" if that is not relevant to your search application.
====

View File

@ -30,7 +30,7 @@ In a SolrCloud cluster each individual node load balances read requests across a
Even if some nodes in the cluster are offline or unreachable, a Solr node will be able to correctly respond to a search request as long as it can communicate with at least one replica of every shard, or one replica of every _relevant_ shard if the user limited the search via the `shards` or `\_route_` parameters. The more replicas there are of every shard, the more likely that the Solr cluster will be able to handle search results in the event of node failures.
[[ReadandWriteSideFaultTolerance-zkConnected]]
=== `zkConnected`
=== zkConnected
A Solr node will return the results of a search request as long as it can communicate with at least one replica of every shard that it knows about, even if it can _not_ communicate with ZooKeeper at the time it receives the request. This is normally the preferred behavior from a fault tolerance standpoint, but may result in stale or incorrect results if there have been major changes to the collection structure that the node has not been informed of via ZooKeeper (i.e., shards may have been added or removed, or split into sub-shards).
@ -57,7 +57,7 @@ A `zkConnected` header is included in every search response indicating if the no
----
[[ReadandWriteSideFaultTolerance-shards.tolerant]]
=== `shards.tolerant`
=== shards.tolerant
In the event that one or more shards queried are completely unavailable, then Solr's default behavior is to fail the request. However, there are many use-cases where partial results are acceptable and so Solr provides a boolean `shards.tolerant` parameter (default `false`).

View File

@ -148,7 +148,7 @@ curl http://localhost:8983/solr/techproducts/config/params/myQueries
----
[[RequestParametersAPI-TheuseParamsParameter]]
== The `useParams` Parameter
== The useParams Parameter
When making a request, the `useParams` parameter applies a stored parameter set to the request. This is translated at request time to the actual parameters.
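For example, building on the `myQueries` parameter set shown above, a request could apply it like this (a sketch):
`\http://localhost:8983/solr/techproducts/select?useParams=myQueries`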

View File

@ -23,7 +23,7 @@ The `requestDispatcher` element of `solrconfig.xml` controls the way the Solr HT
Included are parameters for defining if it should handle `/select` URLs (for Solr 1.1 compatibility), if it will support remote streaming, the maximum size of file uploads, and how it will respond to HTTP cache headers in requests.
[[RequestDispatcherinSolrConfig-handleSelectElement]]
== `handleSelect` Element
== handleSelect Element
[IMPORTANT]
====
@ -42,7 +42,7 @@ In recent versions of Solr, a `/select` requestHandler is defined by default, so
----
[[RequestDispatcherinSolrConfig-requestParsersElement]]
== `requestParsers` Element
== requestParsers Element
The `<requestParsers>` sub-element controls values related to parsing requests. This is an empty XML element that doesn't have any content, only attributes.
@ -65,7 +65,7 @@ The attribute `addHttpRequestToContext` can be used to indicate that the origina
----
[[RequestDispatcherinSolrConfig-httpCachingElement]]
== `httpCaching` Element
== httpCaching Element
The `<httpCaching>` element controls HTTP cache control headers. Do not confuse these settings with Solr's internal cache configuration. This element controls caching of HTTP responses as defined by the W3C HTTP specifications.
@ -91,7 +91,7 @@ This element allows for three attributes and one sub-element. The attributes of
----
[[RequestDispatcherinSolrConfig-cacheControlElement]]
=== `cacheControl` Element
=== cacheControl Element
In addition to these attributes, `<httpCaching>` accepts one child element: `<cacheControl>`. The content of this element will be sent as the value of the Cache-Control header on HTTP responses. This header is used to modify the default caching behavior of the requesting client. The possible values for the Cache-Control header are defined by the HTTP 1.1 specification in http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9[Section 14.9].
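For example, a sketch that advertises responses as cacheable for thirty seconds:
----
<httpCaching never304="false">
  <cacheControl>max-age=30, public</cacheControl>
</httpCaching>
----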

View File

@ -51,7 +51,7 @@ Note that the XSLT Response Writer can be used to convert the XML produced by th
The behavior of the XML Response Writer can be driven by the following query parameters.
[[ResponseWriters-TheversionParameter]]
=== The `version` Parameter
=== The version Parameter
The `version` parameter determines the XML protocol used in the response. Clients are strongly encouraged to _always_ specify the protocol version, so as to ensure that the format of the response they receive does not change unexpectedly if the Solr server is upgraded and a new default format is introduced.
@ -66,7 +66,7 @@ Currently supported version values are:
The default value is the latest supported.
[[ResponseWriters-ThestylesheetParameter]]
=== The `stylesheet` Parameter
=== The stylesheet Parameter
The `stylesheet` parameter can be used to direct Solr to include a `<?xml-stylesheet type="text/xsl" href="..."?>` declaration in the XML response it returns.
@ -78,7 +78,7 @@ Use of the `stylesheet` parameter is discouraged, as there is currently no way t
====
[[ResponseWriters-TheindentParameter]]
=== The `indent` Parameter
=== The indent Parameter
If the `indent` parameter is used, and has a non-blank value, then Solr will make some attempts at indenting its XML response to make it more readable by humans.
@ -90,7 +90,7 @@ The default behavior is not to indent.
The XSLT Response Writer applies an XML stylesheet to output. It can be used for tasks such as formatting results for an RSS feed.
[[ResponseWriters-trParameter]]
=== `tr` Parameter
=== tr Parameter
The XSLT Response Writer accepts one parameter: the `tr` parameter, which identifies the XML transformation to use. The transformation must be found in the Solr `conf/xslt` directory.
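For example, to apply a hypothetical `example.xsl` stylesheet from `conf/xslt`:
`q=*:*&wt=xslt&tr=example.xsl`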

View File

@ -55,7 +55,7 @@ If you wish to explicitly configure `ManagedIndexSchemaFactory` the following op
With the default configuration shown above, you can use the <<schema-api.adoc#schema-api,Schema API>> to modify the schema as much as you want, and then later change the value of `mutable` to *false* if you wish to "lock" the schema in place and prevent future changes.
[[SchemaFactoryDefinitioninSolrConfig-Classicschema.xml]]
== Classic `schema.xml`
== Classic schema.xml
An alternative to using a managed schema is to explicitly configure a `ClassicIndexSchemaFactory`. `ClassicIndexSchemaFactory` requires the use of a `schema.xml` configuration file, and disallows any programmatic changes to the Schema at run time. The `schema.xml` file must be edited manually and is only loaded when the collection is loaded.
@ -65,7 +65,7 @@ An alternative to using a managed schema is to explicitly configure a `ClassicIn
----
[[SchemaFactoryDefinitioninSolrConfig-Switchingfromschema.xmltoManagedSchema]]
=== Switching from `schema.xml` to Managed Schema
=== Switching from schema.xml to Managed Schema
If you have an existing Solr collection that uses `ClassicIndexSchemaFactory`, and you wish to convert to use a managed schema, you can simply modify the `solrconfig.xml` to specify the use of the `ManagedIndexSchemaFactory`.
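A minimal sketch of that change (the option values shown are the common defaults, taken here as assumptions):
----
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
----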
@ -79,7 +79,7 @@ Once Solr is restarted and it detects that a `schema.xml` file exists, but the `
You are now free to use the <<schema-api.adoc#schema-api,Schema API>> as much as you want to make changes, and remove the `schema.xml.bak`.
[[SchemaFactoryDefinitioninSolrConfig-SwitchingfromManagedSchematoManuallyEditedschema.xml]]
=== Switching from Managed Schema to Manually Edited `schema.xml`
=== Switching from Managed Schema to Manually Edited schema.xml
If you have started Solr with managed schema enabled and you would like to switch to manually editing a `schema.xml` file, you should take the following steps:

View File

@ -99,7 +99,7 @@ When used with `BBoxField`, additional options are supported:
|===
[[SpatialSearch-geofilt]]
=== `geofilt`
=== geofilt
The `geofilt` filter allows you to retrieve results based on the geospatial distance (AKA the "great circle distance") from a given point. Another way of looking at it is that it creates a circular shape filter. For example, to find all documents within five kilometers of a given lat/lon point, you could enter `&q=*:*&fq={!geofilt sfield=store}&pt=45.15,-93.85&d=5`. This filter returns all results within a circle of the given radius around the initial point:
@ -107,7 +107,7 @@ image::images/spatial-search/circle.png[image]
[[SpatialSearch-bbox]]
=== `bbox`
=== bbox
The `bbox` filter is very similar to `geofilt` except it uses the _bounding box_ of the calculated circle. See the blue box in the diagram below. It takes the same parameters as geofilt.
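For example, a bounding-box filter over the same illustrative `store` field would be:

[source,text]
----
&q=*:*&fq={!bbox sfield=store}&pt=45.15,-93.85&d=5
----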
@ -162,7 +162,7 @@ There are four distance function queries:
For more information about these function queries, see the section on <<function-queries.adoc#function-queries,Function Queries>>.
[[SpatialSearch-geodist]]
=== `geodist`
=== geodist
`geodist` is a distance function that takes three optional parameters: `(sfield,latitude,longitude)`. You can use the `geodist` function to sort results by distance or to return the distance as the relevancy score.
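For example, to return all results within fifty kilometers of a point, sorted by ascending distance (reusing the illustrative `store` field):

[source,text]
----
&q=*:*&fq={!geofilt}&sfield=store&pt=45.15,-93.85&d=50&sort=geodist() asc
----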

View File

@ -26,7 +26,7 @@ The basis for these suggestions can be terms in a field in Solr, externally crea
== Configuring the SpellCheckComponent
[[SpellChecking-DefineSpellCheckinsolrconfig.xml]]
=== Define Spell Check in `solrconfig.xml`
=== Define Spell Check in solrconfig.xml
The first step is to specify the source of terms in `solrconfig.xml`. There are three approaches to spell checking in Solr, discussed below.
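As a sketch, a spellchecker that draws its terms directly from the main index might be declared like this (the source field `name` is illustrative):

[source,xml]
----
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">name</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
  </lst>
</searchComponent>
----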
@ -205,12 +205,12 @@ The SpellCheck component accepts the parameters described in the table below.
|===
[[SpellChecking-ThespellcheckParameter]]
=== The `spellcheck` Parameter
=== The spellcheck Parameter
This parameter turns on SpellCheck suggestions for the request. If *true*, then spelling suggestions will be generated.
[[SpellChecking-Thespellcheck.qorqParameter]]
=== The `spellcheck.q` or `q` Parameter
=== The spellcheck.q or q Parameter
This parameter specifies the query to spellcheck. If `spellcheck.q` is defined, then it is used; otherwise the original input query is used. The `spellcheck.q` parameter is intended to be the original query, minus any extra markup like field names, boosts, and so on. If the `q` parameter is specified, then the `SpellingQueryConverter` class is used to parse it into tokens; otherwise the <<tokenizers.adoc#Tokenizers-WhiteSpaceTokenizer,`WhitespaceTokenizer`>> is used. The choice of which one to use is up to the application. Essentially, if you have a spelling "ready" version in your application, then it is probably better to use `spellcheck.q`. Otherwise, if you just want Solr to do the job, use the `q` parameter.
@ -220,44 +220,44 @@ The SpellingQueryConverter class does not deal properly with non-ASCII character
====
[[SpellChecking-Thespellcheck.buildParameter]]
=== The `spellcheck.build` Parameter
=== The spellcheck.build Parameter
If set to *true*, this parameter creates the dictionary that the SolrSpellChecker will use for spell-checking. In a typical search application, you will need to build the dictionary before using the SolrSpellChecker. However, it's not always necessary to build a dictionary first. For example, you can configure the spellchecker to use a dictionary that already exists.
The dictionary will take some time to build, so this parameter should not be sent with every request.
[[SpellChecking-Thespellcheck.reloadParameter]]
=== The `spellcheck.reload` Parameter
=== The spellcheck.reload Parameter
If set to true, this parameter reloads the spellchecker. The results depend on the implementation of `SolrSpellChecker.reload()`. In a typical implementation, reloading the spellchecker means reloading the dictionary.
[[SpellChecking-Thespellcheck.countParameter]]
=== The `spellcheck.count` Parameter
=== The spellcheck.count Parameter
This parameter specifies the maximum number of suggestions that the spellchecker should return for a term. If this parameter isn't set, the value defaults to 1. If the parameter is set but not assigned a number, the value defaults to 5. If the parameter is set to a positive integer, that number becomes the maximum number of suggestions returned by the spellchecker.
[[SpellChecking-Thespellcheck.onlyMorePopularParameter]]
=== The `spellcheck.onlyMorePopular` Parameter
=== The spellcheck.onlyMorePopular Parameter
If *true*, Solr will return suggestions that result in more hits for the query than the existing query. Note that this will return more popular suggestions even when the given query term is present in the index and considered "correct".
[[SpellChecking-Thespellcheck.maxResultsForSuggestParameter]]
=== The `spellcheck.maxResultsForSuggest` Parameter
=== The spellcheck.maxResultsForSuggest Parameter
This parameter specifies the maximum number of hits a query may return while still triggering spelling suggestions. For example, if this is set to 5 and the user's query returns 5 or fewer results, the spellchecker will report "correctlySpelled=false" and also offer suggestions (and collations if requested). Setting this greater than zero is useful for creating "did-you-mean?" suggestions for queries that return a low number of hits.
[[SpellChecking-Thespellcheck.alternativeTermCountParameter]]
=== The `spellcheck.alternativeTermCount` Parameter
=== The spellcheck.alternativeTermCount Parameter
Specify the number of suggestions to return for each query term existing in the index and/or dictionary. Presumably, users will want fewer suggestions for words with a docFrequency greater than 0. Setting this value also turns on context-sensitive spell suggestions.
[[SpellChecking-Thespellcheck.extendedResultsParameter]]
=== The `spellcheck.extendedResults` Parameter
=== The spellcheck.extendedResults Parameter
This parameter causes Solr to include additional information about the suggestion, such as the frequency in the index.
[[SpellChecking-Thespellcheck.collateParameter]]
=== The `spellcheck.collate` Parameter
=== The spellcheck.collate Parameter
If *true*, this parameter directs Solr to take the best suggestion for each token (if one exists) and construct a new query from the suggestions. For example, if the input query was "jawa class lording" and the best suggestion for "jawa" was "java" and "lording" was "loading", then the resulting collation would be "java class loading".
@ -266,27 +266,27 @@ The spellcheck.collate parameter only returns collations that are guaranteed to
NOTE: This only returns a query to be used. It does not actually run the suggested query.
[[SpellChecking-Thespellcheck.maxCollationsParameter]]
=== The `spellcheck.maxCollations` Parameter
=== The spellcheck.maxCollations Parameter
The maximum number of collations to return. The default is *1*. This parameter is ignored if `spellcheck.collate` is false.
[[SpellChecking-Thespellcheck.maxCollationTriesParameter]]
=== The `spellcheck.maxCollationTries` Parameter
=== The spellcheck.maxCollationTries Parameter
This parameter specifies the number of collation possibilities for Solr to try before giving up. Lower values ensure better performance. Higher values may be necessary to find a collation that can return results. The default value is `0`, which maintains backwards-compatible (Solr 1.4) behavior (do not check collations). This parameter is ignored if `spellcheck.collate` is false.
[[SpellChecking-Thespellcheck.maxCollationEvaluationsParameter]]
=== The `spellcheck.maxCollationEvaluations` Parameter
=== The spellcheck.maxCollationEvaluations Parameter
This parameter specifies the maximum number of word correction combinations to rank and evaluate prior to deciding which collation candidates to test against the index. This is a performance safety-net in case a user enters a query with many misspelled words. The default is *10,000* combinations, which should work well in most situations.
[[SpellChecking-Thespellcheck.collateExtendedResultsParameter]]
=== The `spellcheck.collateExtendedResults` Parameter
=== The spellcheck.collateExtendedResults Parameter
If *true*, this parameter returns an expanded response format detailing the collations Solr found. The default value is *false* and this is ignored if `spellcheck.collate` is false.
[[SpellChecking-Thespellcheck.collateMaxCollectDocsParameter]]
=== The `spellcheck.collateMaxCollectDocs` Parameter
=== The spellcheck.collateMaxCollectDocs Parameter
This parameter specifies the maximum number of documents that should be collected when testing potential collations against the index. A value of *0* indicates that all documents should be collected, resulting in exact hit-counts. Otherwise, an estimation is provided as a performance optimization in cases where exact hit-counts are unnecessary; the higher the value specified, the more precise the estimation.
@ -294,23 +294,23 @@ The default value for this parameter is *0*, but when `spellcheck.collateExtende
[[SpellChecking-Thespellcheck.collateParam._ParameterPrefix]]
=== The `spellcheck.collateParam.*` Parameter Prefix
=== The spellcheck.collateParam.* Parameter Prefix
This parameter prefix can be used to specify any additional parameters that you wish the spellchecker to use when internally validating collation queries. For example, even if your regular search results allow for loose matching of one or more query terms via parameters like `q.op=OR` and `mm=20%`, you can specify override params such as `spellcheck.collateParam.q.op=AND&spellcheck.collateParam.mm=100%` to require that only collations consisting of words that are all found in at least one document may be returned.
[[SpellChecking-Thespellcheck.dictionaryParameter]]
=== The `spellcheck.dictionary` Parameter
=== The spellcheck.dictionary Parameter
This parameter causes Solr to use the dictionary named in the parameter's argument. The default setting is "default". This parameter can be used to invoke a specific spellchecker on a per request basis.
[[SpellChecking-Thespellcheck.accuracyParameter]]
=== The `spellcheck.accuracy` Parameter
=== The spellcheck.accuracy Parameter
Specifies an accuracy value to be used by the spell checking implementation to decide whether a result is worthwhile or not. The value is a float between 0 and 1. Defaults to `Float.MIN_VALUE`.
[[spellcheck_DICT_NAME]]
=== The `spellcheck.<DICT_NAME>.key` Parameter
=== The spellcheck.<DICT_NAME>.key Parameter
Specifies a key/value pair for the implementation handling a given dictionary. The value that is passed through is just `key=value` (`spellcheck.<DICT_NAME>.` is stripped off).

View File

@ -58,7 +58,7 @@ In addition to the common request parameter, highlighting parameters, and simple
The sections below explain these parameters in detail.
[[TheDisMaxQueryParser-TheqParameter]]
=== The `q` Parameter
=== The q Parameter
The `q` parameter defines the main "query" constituting the essence of the search. The parameter supports raw input strings provided by users with no special escaping. The + and - characters are treated as "mandatory" and "prohibited" modifiers for terms. Text wrapped in balanced quote characters (for example, "San Jose") is treated as a phrase. Any query containing an odd number of quote characters is evaluated as if there were no quote characters at all.
@ -70,13 +70,13 @@ The `q` parameter does not support wildcard characters such as *.
====
[[TheDisMaxQueryParser-Theq.altParameter]]
=== The `q.alt` Parameter
=== The q.alt Parameter
If specified, the `q.alt` parameter defines a query (which by default will be parsed using standard query parsing syntax) when the main q parameter is not specified or is blank. The `q.alt` parameter comes in handy when you need something like a query to match all documents (don't forget `&rows=0` for that one!) in order to get collection-wide faceting counts.
[[TheDisMaxQueryParser-Theqf_QueryFields_Parameter]]
=== The `qf` (Query Fields) Parameter
=== The qf (Query Fields) Parameter
The `qf` parameter introduces a list of fields, each of which is assigned a boost factor to increase or decrease that particular field's importance in the query. For example, the query below:
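A `qf` value along these lines matches the boosts described just below (a sketch):

[source,text]
----
qf="fieldOne^2.3 fieldTwo fieldThree^0.4"
----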
@ -86,7 +86,7 @@ assigns `fieldOne` a boost of 2.3, leaves `fieldTwo` with the default boost (bec
[[TheDisMaxQueryParser-Themm_MinimumShouldMatch_Parameter]]
=== The `mm` (Minimum Should Match) Parameter
=== The mm (Minimum Should Match) Parameter
When processing queries, Lucene/Solr recognizes three types of clauses: mandatory, prohibited, and "optional" (also known as "should" clauses). By default, all words or phrases specified in the `q` parameter are treated as "optional" clauses unless they are preceded by a "+" or a "-". When dealing with these "optional" clauses, the `mm` parameter makes it possible to say that a certain minimum number of those clauses must match. The DisMax query parser offers great flexibility in how the minimum number can be specified.
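As a sketch, `mm` accepts positive and negative integers, percentages, and conditional combinations of these:

[source,text]
----
mm=3        at least three optional clauses must match
mm=-2       all but two of the optional clauses must match
mm=75%      at least 75% of the optional clauses must match
mm=2<75%    all clauses required when there are one or two; otherwise 75%
----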
@ -116,7 +116,7 @@ The default value of `mm` is 100% (meaning that all clauses must match).
[[TheDisMaxQueryParser-Thepf_PhraseFields_Parameter]]
=== The `pf` (Phrase Fields) Parameter
=== The pf (Phrase Fields) Parameter
Once the list of matching documents has been identified using the `fq` and `qf` parameters, the `pf` parameter can be used to "boost" the score of documents in cases where all of the terms in the q parameter appear in close proximity.
@ -124,19 +124,19 @@ The format is the same as that used by the `qf` parameter: a list of fields and
[[TheDisMaxQueryParser-Theps_PhraseSlop_Parameter]]
=== The `ps` (Phrase Slop) Parameter
=== The ps (Phrase Slop) Parameter
The `ps` parameter specifies the amount of "phrase slop" to apply to queries specified with the pf parameter. Phrase slop is the number of positions one token needs to be moved in relation to another token in order to match a phrase specified in a query.
[[TheDisMaxQueryParser-Theqs_QueryPhraseSlop_Parameter]]
=== The `qs` (Query Phrase Slop) Parameter
=== The qs (Query Phrase Slop) Parameter
The `qs` parameter specifies the amount of slop permitted on phrase queries explicitly included in the user's query string with the `qf` parameter. As explained above, slop refers to the number of positions one token needs to be moved in relation to another token in order to match a phrase specified in a query.
[[TheDisMaxQueryParser-Thetie_TieBreaker_Parameter]]
=== The `tie` (Tie Breaker) Parameter
=== The tie (Tie Breaker) Parameter
The `tie` parameter specifies a float value (which should be something much less than 1) to use as tiebreaker in DisMax queries.
@ -146,7 +146,7 @@ A value of "0.0" - the default - makes the query a pure "disjunction max query":
[[TheDisMaxQueryParser-Thebq_BoostQuery_Parameter]]
=== The `bq` (Boost Query) Parameter
=== The bq (Boost Query) Parameter
The `bq` parameter specifies an additional, optional, query clause that will be added to the user's main query to influence the score. For example, if you wanted to add a relevancy boost for recent documents:
@ -160,7 +160,7 @@ You can specify multiple `bq` parameters. If you want your query to be parsed as
[[TheDisMaxQueryParser-Thebf_BoostFunctions_Parameter]]
=== The `bf` (Boost Functions) Parameter
=== The bf (Boost Functions) Parameter
The `bf` parameter specifies functions (with optional boosts) that will be used to construct FunctionQueries which will be added to the user's main query as optional clauses that will influence the score. Any function supported natively by Solr can be used, along with a boost value. For example:

View File

@ -39,64 +39,64 @@ In addition to supporting all the DisMax query parser parameters, Extended Disma
In addition to all the <<the-dismax-query-parser.adoc#TheDisMaxQueryParser-DisMaxParameters,DisMax parameters>>, Extended DisMax includes these query parameters:
[[TheExtendedDisMaxQueryParser-ThesowParameter]]
=== The `sow` Parameter
=== The sow Parameter
Split on whitespace: if set to `false`, whitespace-separated term sequences will be provided to text analysis in one shot, enabling proper function of analysis filters that operate over term sequences, e.g. multi-word synonyms and shingles. Defaults to `true`: text analysis is invoked separately for each individual whitespace-separated term.
[[TheExtendedDisMaxQueryParser-Themm.autoRelaxParameter]]
=== The `mm.autoRelax` Parameter
=== The mm.autoRelax Parameter
If true, the number of clauses required (<<the-dismax-query-parser.adoc#TheDisMaxQueryParser-Themm_MinimumShouldMatch_Parameter,minimum should match>>) will automatically be relaxed if a clause is removed (by e.g., a stopwords filter) from some but not all <<the-dismax-query-parser.adoc#TheDisMaxQueryParser-Theqf_QueryFields_Parameter,`qf`>> fields. Use this parameter as a workaround if queries return zero hits due to uneven stopword removal between the `qf` fields.
Note that relaxing mm may cause undesired side effects, hurting the precision of the search, depending on the nature of your index content.
[[TheExtendedDisMaxQueryParser-TheboostParameter]]
=== The `boost` Parameter
=== The boost Parameter
A multivalued list of strings parsed as queries with scores multiplied by the score from the main query for all matching documents. This parameter is shorthand for wrapping the query produced by eDisMax using the `BoostQParserPlugin`.
[[TheExtendedDisMaxQueryParser-ThelowercaseOperatorsParameter]]
=== The `lowercaseOperators` Parameter
=== The lowercaseOperators Parameter
A Boolean parameter indicating if lowercase "and" and "or" should be treated the same as operators "AND" and "OR".
[[TheExtendedDisMaxQueryParser-ThepsParameter]]
=== The `ps` Parameter
=== The ps Parameter
Default amount of slop on phrase queries built with `pf`, `pf2` and/or `pf3` fields (affects boosting).
[[TheExtendedDisMaxQueryParser-Thepf2Parameter]]
=== The `pf2` Parameter
=== The pf2 Parameter
A multivalued list of fields with optional weights, based on pairs of word shingles.
[[TheExtendedDisMaxQueryParser-Theps2Parameter]]
=== The `ps2` Parameter
=== The ps2 Parameter
This is similar to `ps` but overrides the slop factor used for `pf2`. If not specified, `ps` is used.
[[TheExtendedDisMaxQueryParser-Thepf3Parameter]]
=== The `pf3` Parameter
=== The pf3 Parameter
A multivalued list of fields with optional weights, based on triplets of word shingles. Similar to `pf`, except that instead of building a phrase per field out of all the words in the input, it builds a set of phrases for each field out of each triplet of word shingles.
[[TheExtendedDisMaxQueryParser-Theps3Parameter]]
=== The `ps3` Parameter
=== The ps3 Parameter
This is similar to `ps` but overrides the slop factor used for `pf3`. If not specified, `ps` is used.
[[TheExtendedDisMaxQueryParser-ThestopwordsParameter]]
=== The `stopwords` Parameter
=== The stopwords Parameter
A Boolean parameter indicating if the `StopFilterFactory` configured in the query analyzer should be respected when parsing the query: if it is false, then the `StopFilterFactory` in the query analyzer is ignored.
[[TheExtendedDisMaxQueryParser-TheufParameter]]
=== The `uf` Parameter
=== The uf Parameter
Specifies which schema fields the end user is allowed to explicitly query. This parameter supports wildcards. The default is to allow all fields, equivalent to `uf=\*`. To allow only the title field, use `uf=title`. To allow the title field and all fields ending with '_s', use `uf=title,*_s`. To allow all fields except title, use `uf=*,-title`. To disallow all fielded searches, use `uf=-*`.
[[TheExtendedDisMaxQueryParser-Fieldaliasingusingper-fieldqfoverrides]]
=== Field aliasing using per-field `qf` overrides
=== Field aliasing using per-field qf overrides
Per-field overrides of the `qf` parameter may be specified to provide 1-to-many aliasing from field names specified in the query string, to field names used in the underlying query. By default, no aliasing is used and field names specified in the query string are treated as literal field names in the index.
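For example, a hypothetical per-field override that aliases the query-string field `name` to two index fields:

[source,text]
----
&f.name.qf=last_name first_name
----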
@ -223,7 +223,7 @@ Finally, in addition to the phrase fields (`pf`) parameter, `edismax` also suppo
[[TheExtendedDisMaxQueryParser-Usingthe_magicfields__val_and_query_]]
== Using the 'magic fields' `\_val_` and `\_query_`
== Using the "magic fields" \_val_ and \_query_
The Solr Query Parser's use of `\_val_` and `\_query_` differs from the Lucene Query Parser in the following ways:

View File

@ -74,7 +74,7 @@ The Query Elevation Search Component takes the following arguments:
|===
[[TheQueryElevationComponent-elevate.xml]]
=== `elevate.xml`
=== elevate.xml
Elevated query results are configured in an external XML file specified in the `config-file` argument. An `elevate.xml` file might look like this:
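A sketch of such a file, consistent with the query and document ids discussed below:

[source,xml]
----
<elevate>
  <query text="foo bar">
    <doc id="1"/>
    <doc id="2"/>
    <doc id="3"/>
  </query>
</elevate>
----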
@ -100,7 +100,7 @@ In this example, the query "foo bar" would first return documents 1, 2 and 3, th
== Using the Query Elevation Component
[[TheQueryElevationComponent-TheenableElevationParameter]]
=== The `enableElevation` Parameter
=== The enableElevation Parameter
For debugging it may be useful to see results with and without the elevated docs. To hide results, use `enableElevation=false`:
@ -109,21 +109,21 @@ For debugging it may be useful to see results with and without the elevated docs
`\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&debugQuery=true&enableElevation=false`
[[TheQueryElevationComponent-TheforceElevationParameter]]
=== The `forceElevation` Parameter
=== The forceElevation Parameter
You can force elevation during runtime by adding `forceElevation=true` to the query URL:
`\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&debugQuery=true&enableElevation=true&forceElevation=true`
[[TheQueryElevationComponent-TheexclusiveParameter]]
=== The `exclusive` Parameter
=== The exclusive Parameter
You can force Solr to return only the results specified in the elevation file by adding `exclusive=true` to the URL:
`\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&debugQuery=true&exclusive=true`
[[TheQueryElevationComponent-DocumentTransformersandthemarkExcludesParameter]]
=== Document Transformers and the `markExcludes` Parameter
=== Document Transformers and the markExcludes Parameter
The `[elevated]` <<transforming-result-documents.adoc#transforming-result-documents,Document Transformer>> can be used to annotate each document with information about whether or not it was elevated:
@ -134,7 +134,7 @@ Likewise, it can be helpful when troubleshooting to see all matching documents
`\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&markExcludes=true&fl=id,[elevated],[excluded]`
[[TheQueryElevationComponent-TheelevateIdsandexcludeIdsParameters]]
=== The `elevateIds` and `excludeIds` Parameters
=== The elevateIds and excludeIds Parameters
When the elevation component is in use, the pre-configured list of elevations for a query can be overridden at request time to use the unique keys specified in these request parameters.
@ -149,6 +149,6 @@ For example, in the request below documents IW-02 and F8V7067-APL-KIT will be el
`\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&elevateIds=IW-02,F8V7067-APL-KIT`
[[TheQueryElevationComponent-ThefqParameter]]
=== The `fq` Parameter
=== The fq Parameter
Query elevation respects the standard filter query (`fq`) parameter. That is, if the query contains the `fq` parameter, all results will be within that filter even if `elevate.xml` adds other documents to the result set.

View File

@ -187,7 +187,7 @@ The brackets around a query determine its inclusiveness.
[[TheStandardQueryParser-BoostingaTermwith_]]
=== Boosting a Term with `^`
=== Boosting a Term with "^"
Lucene/Solr provides the relevance level of matching documents based on the terms found. To boost a term use the caret symbol `^` with a boost factor (a number) at the end of the term you are searching. The higher the boost factor, the more relevant the term will be.
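For example, the following boosts matches on "jakarta" relative to "apache":

[source,text]
----
jakarta^4 apache
----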
@ -205,7 +205,7 @@ By default, the boost factor is 1. Although the boost factor must be positive, i
[[TheStandardQueryParser-ConstantScorewith_]]
=== Constant Score with `^=`
=== Constant Score with "^="
Constant score queries are created with `<query_clause>^=<score>`, which sets the entire clause to the specified score for any documents matching that clause. This is desirable when you only care about matches for a particular clause and don't want other relevancy factors such as term frequency (the number of times the term appears in the field) or inverse document frequency (a measure across the whole index for how rare a term is in a field).
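For example, the query below scores any match on either of the first two clauses as exactly 1.0, while matches on `text:shoes` are scored normally:

[source,text]
----
(description:blue OR color:blue)^=1.0 text:shoes
----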
@ -279,7 +279,7 @@ or
[[TheStandardQueryParser-TheBooleanOperator_]]
=== The Boolean Operator `+`
=== The Boolean Operator "+"
The `+` symbol (also known as the "required" operator) requires that the term after the `+` symbol exist somewhere in a field in at least one document in order for the query to return a match.
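For example, the following requires "jakarta" to be present, while "lucene" remains optional:

[source,text]
----
+jakarta lucene
----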
@ -296,7 +296,7 @@ This operator is supported by both the standard query parser and the DisMax quer
[[TheStandardQueryParser-TheBooleanOperatorAND_]]
=== The Boolean Operator AND (`&&`)
=== The Boolean Operator AND ("&&")
The AND operator matches documents where both terms exist anywhere in the text of a single document. This is equivalent to an intersection using sets. The symbol `&&` can be used in place of the word AND.
@ -308,7 +308,7 @@ To search for documents that contain "jakarta apache" and "Apache Lucene," use e
[[TheStandardQueryParser-TheBooleanOperatorNOT_]]
=== The Boolean Operator NOT (`!`)
=== The Boolean Operator NOT ("!")
The NOT operator excludes documents that contain the term after NOT. This is equivalent to a difference using sets. The symbol `!` can be used in place of the word NOT.
@ -319,7 +319,7 @@ The following queries search for documents that contain the phrase "jakarta apac
`"jakarta apache" ! "Apache Lucene"`
[[TheStandardQueryParser-TheBooleanOperator-]]
=== The Boolean Operator `-`
=== The Boolean Operator "-"
The `-` symbol or "prohibit" operator excludes documents that contain the term after the `-` symbol.
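For example, to find documents that contain "jakarta apache" but not "Apache Lucene":

[source,text]
----
"jakarta apache" -"Apache Lucene"
----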

View File

@ -51,7 +51,7 @@ The sections below discuss exactly what these various transformers do.
[[TransformingResultDocuments-_value_-ValueAugmenterFactory]]
=== `[value]` - ValueAugmenterFactory
=== [value] - ValueAugmenterFactory
Modifies every document to include the exact same value, as if it were a stored field in every document:
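A sketch, with an illustrative pseudo-field name and value:

[source,text]
----
fl=id,greeting:[value v='hello']
----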
@ -95,7 +95,7 @@ The "```value```" option forces an explicit value to always be used, while the "
[[TransformingResultDocuments-_explain_-ExplainAugmenterFactory]]
=== `[explain]` - ExplainAugmenterFactory
=== [explain] - ExplainAugmenterFactory
Augments each document with an inline explanation of its score exactly like the information available about each document in the debug section:
@ -116,7 +116,7 @@ Supported values for "```style```" are "```text```", and "```html```", and "nl"
"value":1.052226,
"description":"weight(features:cache in 2) [DefaultSimilarity], result of:",
"details":[{
...
}]}}]}}
----
A default style can be configured by specifying an "args" parameter in your configuration:
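A sketch of such a declaration in `solrconfig.xml`, defaulting the style to NamedList (`nl`):

[source,xml]
----
<transformer name="explain" class="org.apache.solr.response.transform.ExplainAugmenterFactory">
  <str name="args">nl</str>
</transformer>
----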
@ -130,7 +130,7 @@ A default style can be configured by specifying an "args" parameter in your conf
[[TransformingResultDocuments-_child_-ChildDocTransformerFactory]]
=== `[child]` - ChildDocTransformerFactory
=== [child] - ChildDocTransformerFactory
This transformer returns all <<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-NestedChildDocuments,descendant documents>> of each parent document matching your query in a flat list nested inside the matching parent document. This is useful when you have indexed nested child documents and want to retrieve the child documents for the relevant parent documents for any type of search query.
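For example, a request might ask for matching books along with their chapter child documents (the `doc_type` field and its values are illustrative):

[source,text]
----
fl=id,[child parentFilter=doc_type:book childFilter=doc_type:chapter limit=100]
----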
@ -148,7 +148,7 @@ When using this transformer, the `parentFilter` parameter must be specified, and
[[TransformingResultDocuments-_shard_-ShardAugmenterFactory]]
=== `[shard]` - ShardAugmenterFactory
=== [shard] - ShardAugmenterFactory
This transformer adds information about what shard each individual document came from in a distributed request.
@ -156,7 +156,7 @@ ShardAugmenterFactory does not support any request parameters, or configuration
[[TransformingResultDocuments-_docid_-DocIdAugmenterFactory]]
=== `[docid]` - DocIdAugmenterFactory
=== [docid] - DocIdAugmenterFactory
This transformer adds the internal Lucene document id to each document; this is primarily useful for debugging purposes.
@ -164,7 +164,7 @@ DocIdAugmenterFactory does not support any request parameters, or configuration
[[TransformingResultDocuments-_elevated_and_excluded_]]
=== `[elevated]` and `[excluded]`
=== [elevated] and [excluded]
These transformers are available only when using the <<the-query-elevation-component.adoc#the-query-elevation-component,Query Elevation Component>>.
@ -191,12 +191,12 @@ fl=id,[elevated],[excluded]&excludeIds=GB18030TEST&elevateIds=6H500F0&markExclud
"id":"SP2514N",
"[elevated]":false,
"[excluded]":false},
...
]}}
----
[[TransformingResultDocuments-_json_xml_]]
=== `[json]` / `[xml]`
=== [json] / [xml]
These transformers replace a field value containing a string representation of a valid XML or JSON structure with the actual raw XML or JSON structure, rather than just the string value. Each applies only to the specific writer, such that `[json]` only applies to `wt=json` and `[xml]` only applies to `wt=xml`.
@ -207,7 +207,7 @@ fl=id,source_s:[json]&wt=json
[[TransformingResultDocuments-_subquery_]]
=== `[subquery]`
=== [subquery]
This transformer executes a separate query for each document being transformed, passing document fields as input for the subquery parameters. It is usually used with the `{!join}` and `{!parent}` query parsers, and is intended to be an improvement over `[child]`.
@ -246,17 +246,14 @@ Here is how it looks like in various formats:
"id":1,
"subject":["parentDocument"],
"title":["xrxvomgu"],
"children":{
"children":{
"numFound":1, "start":0,
"docs":[
{ "id":2,
"cat":["childDocument"]
}
]
}},
{
"id":4,
...
}}]}}
----
[source,java]
@ -311,7 +308,7 @@ If subquery collection has a different unique key field name (let's say `foo_id`
[[TransformingResultDocuments-_geo_-Geospatialformatter]]
=== `[geo]` - Geospatial formatter
=== [geo] - Geospatial formatter
Formats spatial data from a spatial field using a designated format type name. Two inner parameters are required: `f` for the field name, and `w` for the format name. Example: `geojson:[geo f=mySpatialField w=GeoJSON]`.
@ -321,7 +318,7 @@ In addition, this feature is very useful with the `RptWithGeometrySpatialField`
[[TransformingResultDocuments-_features_-LTRFeatureLoggerTransformerFactory]]
=== `[features]` - LTRFeatureLoggerTransformerFactory
=== [features] - LTRFeatureLoggerTransformerFactory
The "LTR" prefix stands for <<learning-to-rank.adoc#learning-to-rank,Learning To Rank>>. This transformer returns the values of features and it can be used for feature extraction and feature logging.

View File

@ -33,7 +33,7 @@ The settings in this section are configured in the `<updateHandler>` element in
Data sent to Solr is not searchable until it has been _committed_ to the index. The reason for this is that in some cases commits can be slow and they should be done in isolation from other possible commit requests to avoid overwriting data. So, it's preferable to provide control over when data is committed. Several options are available to control the timing of commits.
[[UpdateHandlersinSolrConfig-commitandsoftCommit]]
=== `commit` and `softCommit`
=== commit and softCommit
In Solr, a `commit` is an action which asks Solr to "commit" those changes to the Lucene index files. By default commit actions result in a "hard commit" of all the Lucene index files to stable storage (disk). When a client includes a `commit=true` parameter with an update request, this ensures that all index segments affected by the adds & deletes on an update are written to disk as soon as index updates are completed.
@ -42,7 +42,7 @@ If an additional flag `softCommit=true` is specified, then Solr performs a 'soft
For more information about Near Real Time operations, see <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>.
[[UpdateHandlersinSolrConfig-autoCommit]]
=== `autoCommit`
=== autoCommit
These settings control how often pending updates will be automatically pushed to the index. An alternative to `autoCommit` is to use `commitWithin`, which can be defined when making the update request to Solr (i.e., when pushing documents), or in an update RequestHandler.
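A representative sketch, committing pending updates at most every 15 seconds without opening a new searcher:

[source,xml]
----
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
----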
@ -79,7 +79,7 @@ You can also specify 'soft' autoCommits in the same way that you can specify 'so
----
[[UpdateHandlersinSolrConfig-commitWithin]]
=== `commitWithin`
=== commitWithin
The `commitWithin` settings allow forcing document commits to happen in a defined time period. This is used most frequently with <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, and for that reason the default is to perform a soft commit. This does not, however, replicate new documents to slave servers in a master/slave environment. If that's a requirement for your implementation, you can force a hard commit by adding a parameter, as in this example:
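A minimal sketch forcing `commitWithin` to issue hard commits:

[source,xml]
----
<commitWithin>
  <softCommit>false</softCommit>
</commitWithin>
----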

View File

@ -148,7 +148,7 @@ When using the Join query parser in a Delete By Query, you should use the `score
The rollback command rolls back all add and deletes made to the index since the last commit. It neither calls any event listeners nor creates a new searcher. Its syntax is simple: `<rollback/>`.
[[UploadingDatawithIndexHandlers-UsingcurltoPerformUpdates]]
=== Using `curl` to Perform Updates
=== Using curl to Perform Updates
You can use the `curl` utility to perform any of the above commands, using its `--data-binary` option to append the XML message to the `curl` command, and generating an HTTP POST request. For example:
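A minimal sketch, assuming a collection named `my_collection`:

[source,bash]
----
curl http://localhost:8983/solr/my_collection/update -H "Content-Type: text/xml" --data-binary '<commit/>'
----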

View File

@ -48,7 +48,7 @@ While Apache Tika is quite powerful, it is not perfect and fails on some files.
====
[[UploadingDatawithSolrCellusingApacheTika-TryingoutTikawiththeSolrtechproductsExample]]
== Trying out Tika with the Solr `techproducts` Example
== Trying out Tika with the Solr techproducts Example
You can try out the Tika framework using the `techproducts` example included in Solr.
@ -138,7 +138,7 @@ Here is the order in which the Solr Cell framework, using the Extracting Request
4. If `uprefix` is specified, any unknown field names are prefixed with that value; otherwise, if `defaultField` is specified, any unknown fields are copied to the default field.
[[UploadingDatawithSolrCellusingApacheTika-ConfiguringtheSolrExtractingRequestHandler]]
== Configuring the Solr `ExtractingRequestHandler`
== Configuring the Solr ExtractingRequestHandler
If you are not working with the supplied `sample_techproducts_configs` or `data_driven_schema_configs` <<config-sets.adoc#config-sets,config set>>, you must configure your own `solrconfig.xml` to know about the JARs containing the `ExtractingRequestHandler` and its dependencies:

View File

@ -58,7 +58,7 @@ It's a good idea to keep these files under version control.
[[UsingZooKeepertoManageConfigurationFiles-UploadingConfigurationFilesusingbin_solrorSolrJ]]
== Uploading Configuration Files using `bin/solr` or SolrJ
== Uploading Configuration Files using bin/solr or SolrJ
In production situations, <<config-sets.adoc#config-sets,Config Sets>> can also be uploaded to ZooKeeper independent of collection creation using either Solr's <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script>> or the {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html[CloudSolrClient.uploadConfig] java method.

View File

@ -53,8 +53,8 @@ As you can see, the date format includes colon characters separating the hours,
This is normally an invalid query: `datefield:1972-05-20T17:33:18.772Z`
These are valid queries: +
`datefield:1972-05-20T17\:33\:18.772Z` +
`datefield:"1972-05-20T17:33:18.772Z"` +
`datefield:[1972-05-20T17:33:18.772Z TO *]`
@ -108,7 +108,7 @@ Note that while date math is most commonly used relative to `NOW` it can be appl
=== Request Parameters That Affect Date Math
[[WorkingwithDates-NOW]]
==== `NOW`
==== NOW
The `NOW` parameter is used internally by Solr to ensure consistent date math expression parsing across multiple nodes in a distributed request. But it can be specified to instruct Solr to use an arbitrary moment in time (past or future) in all situations where the special value of "```NOW```" would impact date math expressions.
@ -119,7 +119,7 @@ Example:
`q=solr&fq=start_date:[* TO NOW]&NOW=1384387200000`
[[WorkingwithDates-TZ]]
==== `TZ`
==== TZ
By default, all date math expressions are evaluated relative to the UTC TimeZone, but the `TZ` parameter can be specified to override this behavior, forcing all date-based addition and rounding to be relative to the specified http://docs.oracle.com/javase/8/docs/api/java/util/TimeZone.html[time zone].

View File

@ -21,7 +21,7 @@
The EnumField type allows defining a field whose values are a closed set, and whose sort order is pre-determined but is neither alphabetic nor numeric. Examples of this are severity lists, or risk definitions.
[[WorkingwithEnumFields-DefininganEnumFieldinschema.xml]]
== Defining an EnumField in `schema.xml`
== Defining an EnumField in schema.xml
The EnumField type definition is quite simple, as in this example defining field types for "priorityLevel" and "riskLevel" enumerations:
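A sketch of the two field type definitions (the `enumsConfig` file name is conventional):

[source,xml]
----
<fieldType name="priorityLevel" class="solr.EnumField" enumsConfig="enumsConfig.xml" enumName="priority"/>
<fieldType name="riskLevel" class="solr.EnumField" enumsConfig="enumsConfig.xml" enumName="risk"/>
----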
@ -52,7 +52,7 @@ In this example, there are two value lists defined. Each list is between `enum`
<value>Low</value>
<value>Medium</value>
<value>High</value>
<value>Urgent</value>
</enum>
<enum name="risk">
<value>Unknown</value>
@ -60,7 +60,7 @@ In this example, there are two value lists defined. Each list is between `enum`
<value>Low</value>
<value>Medium</value>
<value>High</value>
<value>Critical</value>
</enum>
</enumsConfig>
----

View File

@ -19,7 +19,7 @@
// under the License.
[[WorkingwithExternalFilesandProcesses-TheExternalFileFieldType]]
== The `ExternalFileField` Type
== The ExternalFileField Type
The `ExternalFileField` type makes it possible to specify the values for a field in a file outside the Solr index. For such a field, the file contains mappings from a key field to the field value. Another way to think of this is that, instead of specifying the field in documents as they are indexed, Solr finds values for this field in the external file.
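As a sketch, such a field might be declared like this, where `keyField` names the field used as the mapping key and `defVal` supplies a default for documents with no entry in the file (the names are illustrative):

[source,xml]
----
<fieldType name="entryRankFile" keyField="pkey" defVal="0" stored="false" indexed="false" class="solr.ExternalFileField"/>
----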
@ -74,7 +74,7 @@ It's possible to define an event listener to reload an external file when either
----
[[WorkingwithExternalFilesandProcesses-ThePreAnalyzedFieldType]]
== The `PreAnalyzedField` Type
== The PreAnalyzedField Type
The `PreAnalyzedField` type provides a way to send to Solr serialized token streams, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing applied in Solr. This is useful if a user wants to submit field content that was already processed by some existing external text processing pipeline (e.g., it has been tokenized, annotated, stemmed, synonyms inserted, etc.), while using all the rich attributes that Lucene's TokenStream provides (per-token attributes).