SOLR-15092: remove link anchors that are no longer necessary due to relaxed validation rules

commit generated using: perl -i -ple 's/<<(.*?)\.adoc#\1,/<<$1.adoc#,/g' src/*.adoc

...with manual cleanup of src/language-analysis.adoc due to adoc syntax ambiguity
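
The substitution can be reproduced outside of perl for illustration; a minimal Java sketch (not part of this commit; the sample line is hypothetical) applies the same regular expression:

[source,java]
----
public class AnchorRewrite {
  public static void main(String[] args) {
    // Drop an anchor that merely repeats the file name, keeping the file name ($1).
    String line = "See <<searching.adoc#searching,Searching>> for details.";
    String fixed = line.replaceAll("<<(.*?)\\.adoc#\\1,", "<<$1.adoc#,");
    System.out.println(fixed); // See <<searching.adoc#,Searching>> for details.
  }
}
----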
Chris Hostetter 2021-02-03 10:35:39 -07:00
parent 2544a2243b
commit d693a61185
174 changed files with 879 additions and 879 deletions


@@ -27,17 +27,17 @@ In the scenario above, Solr runs alongside other server applications. For exampl
Solr makes it easy to add the capability to search through the online store through the following steps:
. Define a _schema_. The schema tells Solr about the contents of documents it will be indexing. In the online store example, the schema would define fields for the product name, description, price, manufacturer, and so on. Solr's schema is powerful and flexible and allows you to tailor Solr's behavior to your application. See <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>> for all the details.
. Define a _schema_. The schema tells Solr about the contents of documents it will be indexing. In the online store example, the schema would define fields for the product name, description, price, manufacturer, and so on. Solr's schema is powerful and flexible and allows you to tailor Solr's behavior to your application. See <<documents-fields-and-schema-design.adoc#,Documents, Fields, and Schema Design>> for all the details.
. Feed Solr documents for which your users will search.
. Expose search functionality in your application.
Because Solr is based on open standards, it is highly extensible. Solr queries are simple HTTP request URLs and the response is a structured document: mainly JSON, but it could also be XML, CSV, or other formats. This means that a wide variety of clients will be able to use Solr, from other web applications to browser clients, rich client applications, and mobile devices. Any platform capable of HTTP can talk to Solr. See <<client-apis.adoc#client-apis,Client APIs>> for details on client APIs.
Because Solr is based on open standards, it is highly extensible. Solr queries are simple HTTP request URLs and the response is a structured document: mainly JSON, but it could also be XML, CSV, or other formats. This means that a wide variety of clients will be able to use Solr, from other web applications to browser clients, rich client applications, and mobile devices. Any platform capable of HTTP can talk to Solr. See <<client-apis.adoc#,Client APIs>> for details on client APIs.
Solr offers support for the simplest keyword searching through to complex queries on multiple fields and faceted search results. <<searching.adoc#searching,Searching>> has more information about searching and queries.
Solr offers support for the simplest keyword searching through to complex queries on multiple fields and faceted search results. <<searching.adoc#,Searching>> has more information about searching and queries.
If Solr's capabilities are not impressive enough, its ability to handle very high-volume applications should do the trick.
A relatively common scenario is that you have so much data, or so many queries, that a single Solr server is unable to handle your entire workload. In this case, you can scale up the capabilities of your application using <<solrcloud.adoc#solrcloud,SolrCloud>> to better distribute the data, and the processing of requests, across many servers. Multiple options can be mixed and matched depending on the scalability you need.
A relatively common scenario is that you have so much data, or so many queries, that a single Solr server is unable to handle your entire workload. In this case, you can scale up the capabilities of your application using <<solrcloud.adoc#,SolrCloud>> to better distribute the data, and the processing of requests, across many servers. Multiple options can be mixed and matched depending on the scalability you need.
For example: "Sharding" is a scaling technique in which a collection is split into multiple logical pieces called "shards" in order to scale up the number of documents in a collection beyond what could physically fit on a single server. Incoming queries are distributed to every shard in the collection, which respond with merged results. Another technique available is to increase the "Replication Factor" of your collection, which allows you to add servers with additional copies of your collection to handle higher concurrent query load by spreading the requests around to multiple machines. Sharding and replication are not mutually exclusive, and together make Solr an extremely powerful and scalable platform.
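
Since the paragraphs above stress that any HTTP-capable platform can talk to Solr, a minimal SolrJ sketch may help; the collection and field names are assumptions for illustration, not part of this commit:

[source,java]
----
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;

public class StoreSearch {
  public static void main(String[] args) throws Exception {
    // SolrJ is just a convenience layer over the plain HTTP request URLs described above.
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery query = new SolrQuery("name:ipod"); // becomes the q parameter
      query.addField("id");
      query.addField("price");
      for (SolrDocument doc : client.query(query).getResults()) {
        System.out.println(doc.getFieldValue("id") + " -> " + doc.getFieldValue("price"));
      }
    }
  }
}
----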


@@ -16,7 +16,7 @@
// specific language governing permissions and limitations
// under the License.
Like <<tokenizers.adoc#tokenizers,tokenizers>>, <<filter-descriptions.adoc#filter-descriptions,filters>> consume input and produce a stream of tokens. Filters also derive from `org.apache.lucene.analysis.TokenStream`. Unlike tokenizers, a filter's input is another TokenStream. The job of a filter is usually easier than that of a tokenizer since in most cases a filter looks at each token in the stream sequentially and decides whether to pass it along, replace it or discard it.
Like <<tokenizers.adoc#,tokenizers>>, <<filter-descriptions.adoc#,filters>> consume input and produce a stream of tokens. Filters also derive from `org.apache.lucene.analysis.TokenStream`. Unlike tokenizers, a filter's input is another TokenStream. The job of a filter is usually easier than that of a tokenizer since in most cases a filter looks at each token in the stream sequentially and decides whether to pass it along, replace it or discard it.
A filter may also do more complex analysis by looking ahead to consider multiple tokens at once, although this is less common. One hypothetical use for such a filter might be to normalize state names that would be tokenized as two words. For example, the single token "california" would be replaced with "CA", while the token pair "rhode" followed by "island" would become the single token "RI".
@@ -60,4 +60,4 @@ The last filter in the above example is a stemmer filter that uses the Porter st
Conversely, applying a stemmer to your query terms will allow queries containing non-stem terms, like "hugging", to match documents with different variations of the same stem word, such as "hugged". This works because both the indexer and the query will map to the same stem ("hug").
Word stemming is, obviously, very language specific. Solr includes several language-specific stemmers created by the http://snowball.tartarus.org/[Snowball] generator that are based on the Porter stemming algorithm. The generic Snowball Porter Stemmer Filter can be used to configure any of these language stemmers. Solr also includes a convenience wrapper for the English Snowball stemmer. There are also several purpose-built stemmers for non-English languages. These stemmers are described in <<language-analysis.adoc#language-analysis,Language Analysis>>.
Word stemming is, obviously, very language specific. Solr includes several language-specific stemmers created by the http://snowball.tartarus.org/[Snowball] generator that are based on the Porter stemming algorithm. The generic Snowball Porter Stemmer Filter can be used to configure any of these language stemmers. Solr also includes a convenience wrapper for the English Snowball stemmer. There are also several purpose-built stemmers for non-English languages. These stemmers are described in <<language-analysis.adoc#,Language Analysis>>.
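
To make the stemming discussion concrete, here is a small Lucene sketch (an illustration, not part of this commit) that chains a Porter stemmer behind a standard tokenizer, mirroring the index/query symmetry described above:

[source,java]
----
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.LowerCaseFilterFactory;
import org.apache.lucene.analysis.custom.CustomAnalyzer;
import org.apache.lucene.analysis.en.PorterStemFilterFactory;
import org.apache.lucene.analysis.standard.StandardTokenizerFactory;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class StemDemo {
  public static void main(String[] args) throws Exception {
    // Tokenizer -> lowercase filter -> Porter stemmer; each filter consumes a TokenStream.
    Analyzer analyzer = CustomAnalyzer.builder()
        .withTokenizer(StandardTokenizerFactory.class)
        .addTokenFilter(LowerCaseFilterFactory.class)
        .addTokenFilter(PorterStemFilterFactory.class)
        .build();
    try (TokenStream ts = analyzer.tokenStream("body", "hugging hugged")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        System.out.println(term); // both tokens print as the shared stem "hug"
      }
      ts.end();
    }
  }
}
----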


@@ -28,7 +28,7 @@ The material as presented assumes that you are familiar with some basic search c
The default port when running Solr is 8983. The samples, URLs and screenshots in this guide may show different ports, because the port number that Solr uses is configurable.
If you have not customized your installation of Solr, please make sure that you use port 8983 when following the examples, or configure your own installation to use the port numbers shown in the examples. For information about configuring port numbers, see the section <<monitoring-solr.adoc#monitoring-solr,Monitoring Solr>>.
If you have not customized your installation of Solr, please make sure that you use port 8983 when following the examples, or configure your own installation to use the port numbers shown in the examples. For information about configuring port numbers, see the section <<monitoring-solr.adoc#,Monitoring Solr>>.
Similarly, URL examples use `localhost` throughout; if you are accessing Solr from a location remote to the server hosting Solr, replace `localhost` with the proper domain or IP where Solr is running.
@@ -58,7 +58,7 @@ In many cases, but not all, the parameters and outputs of API calls are the same
Throughout this Guide, we have added examples of both styles with sections labeled "V1 API" and "V2 API". As of the 7.2 version of this Guide, these examples are not yet complete - more coverage will be added as future versions of the Guide are released.
The section <<v2-api.adoc#v2-api,V2 API>> provides more information about how to work with the new API structure, including how to disable it if you choose to do so.
The section <<v2-api.adoc#,V2 API>> provides more information about how to work with the new API structure, including how to disable it if you choose to do so.
All APIs return a response header that includes the status of the request and the time to process it. Some APIs will also include the parameters used for the request. Many of the examples in this Guide omit this header information, which you can do locally by adding the parameter `omitHeader=true` to any request.


@@ -16,7 +16,7 @@
// specific language governing permissions and limitations
// under the License.
The job of a <<tokenizers.adoc#tokenizers,tokenizer>> is to break up a stream of text into tokens, where each token is (usually) a sub-sequence of the characters in the text. An analyzer is aware of the field it is configured for, but a tokenizer is not. Tokenizers read from a character stream (a Reader) and produce a sequence of Token objects (a TokenStream).
The job of a <<tokenizers.adoc#,tokenizer>> is to break up a stream of text into tokens, where each token is (usually) a sub-sequence of the characters in the text. An analyzer is aware of the field it is configured for, but a tokenizer is not. Tokenizers read from a character stream (a Reader) and produce a sequence of Token objects (a TokenStream).
Characters in the input stream may be discarded, such as whitespace or other delimiters. They may also be added to or replaced, such as mapping aliases or abbreviations to normalized forms. A token contains various metadata in addition to its text value, such as the location at which the token occurs in the field. Because a tokenizer may produce tokens that diverge from the input text, you should not assume that the text of the token is the same text that occurs in the field, or that its length is the same as the original text. It's also possible for more than one token to have the same position or refer to the same offset in the original text. Keep this in mind if you use token metadata for things like highlighting search results in the field text.
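
The Reader-in, TokenStream-out contract and the per-token metadata mentioned above can be exercised directly; a minimal sketch (illustrative only, not part of this commit):

[source,java]
----
import java.io.StringReader;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class TokenMetadata {
  public static void main(String[] args) throws Exception {
    try (WhitespaceTokenizer tokenizer = new WhitespaceTokenizer()) {
      tokenizer.setReader(new StringReader("Running is a sport."));
      CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
      OffsetAttribute offset = tokenizer.addAttribute(OffsetAttribute.class);
      tokenizer.reset();
      while (tokenizer.incrementToken()) {
        // Each token carries metadata such as its character offsets in the original text.
        System.out.println(term + " [" + offset.startOffset() + "-" + offset.endOffset() + "]");
      }
      tokenizer.end();
    }
  }
}
----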
@@ -52,7 +52,7 @@ The class named in the tokenizer element is not the actual tokenizer, but rather
A `TypeTokenFilterFactory` is available that creates a `TypeTokenFilter` that filters tokens based on their TypeAttribute, which is set in `factory.getStopTypes`.
For a complete list of the available TokenFilters, see the section <<tokenizers.adoc#tokenizers,Tokenizers>>.
For a complete list of the available TokenFilters, see the section <<tokenizers.adoc#,Tokenizers>>.
== When to Use a CharFilter vs. a TokenFilter


@@ -67,7 +67,7 @@ There are presently two types of routed alias: time routed and category routed.
but share some common behavior.
When processing an update for a routed alias, Solr initializes its
<<update-request-processors.adoc#update-request-processors,UpdateRequestProcessor>> chain as usual, but
<<update-request-processors.adoc#,UpdateRequestProcessor>> chain as usual, but
when `DistributedUpdateProcessor` (DUP) initializes, it detects that the update targets a routed alias and injects
`RoutedAliasUpdateProcessor` (RAUP) in front of itself.
RAUP, in coordination with the Overseer, is the main part of a routed alias, and must immediately precede DUP. It is not
@@ -83,7 +83,7 @@ WARNING: It's extremely important with all routed aliases that the route values
with a different route value for the same ID produces two distinct documents with the same ID accessible via the alias.
All query time behavior of the routed alias is *_undefined_* and not easily predictable once duplicate IDs exist.
CAUTION: It is a bad idea to use "data driven" mode (aka <<schemaless-mode.adoc#schemaless-mode,schemaless-mode>>) with
CAUTION: It is a bad idea to use "data driven" mode (aka <<schemaless-mode.adoc#,schemaless-mode>>) with
routed aliases, as duplicate schema mutations might happen concurrently leading to errors.
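
For orientation, a processor participating in the UpdateRequestProcessor chain described above has roughly the following shape (a sketch with hypothetical names, not Solr's actual RAUP implementation):

[source,java]
----
import java.io.IOException;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Like RAUP, a processor placed in front of DUP sees every add first
// and may inspect the document before handing it on.
public class RouteInspectingProcessor extends UpdateRequestProcessor {

  public RouteInspectingProcessor(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    Object routeValue = cmd.getSolrInputDocument().getFieldValue("route_field"); // hypothetical field
    if (routeValue == null) {
      // ... a routed-alias processor would reject or reroute the document here ...
    }
    super.processAdd(cmd); // hand off to the next processor in the chain (e.g., DUP)
  }
}
----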


@@ -26,6 +26,6 @@ If you click the *Verbose Output* check box, you see more information, including
image::images/analysis-screen/analysis_verbose.png[image,height=400]
In the example screenshot above, several transformations are applied to the input "Running is a sport." The words "is" and "a" have been removed and the word "running" has been changed to its basic form, "run". This is because we are using the field type `text_en` in this scenario, which is configured to remove stop words (small words that usually do not provide a great deal of context) and "stem" terms when possible to find more possible matches (this is particularly helpful with plural forms of words). If you click the question mark next to the *Analyze Fieldname/Field Type* pull-down menu, the <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser window>> will open, showing you the settings for the field specified.
In the example screenshot above, several transformations are applied to the input "Running is a sport." The words "is" and "a" have been removed and the word "running" has been changed to its basic form, "run". This is because we are using the field type `text_en` in this scenario, which is configured to remove stop words (small words that usually do not provide a great deal of context) and "stem" terms when possible to find more possible matches (this is particularly helpful with plural forms of words). If you click the question mark next to the *Analyze Fieldname/Field Type* pull-down menu, the <<schema-browser-screen.adoc#,Schema Browser window>> will open, showing you the settings for the field specified.
The section <<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>> describes in detail what each option is and how it may transform your data and the section <<running-your-analyzer.adoc#running-your-analyzer,Running Your Analyzer>> has specific examples for using the Analysis screen.
The section <<understanding-analyzers-tokenizers-and-filters.adoc#,Understanding Analyzers, Tokenizers, and Filters>> describes in detail what each option is and how it may transform your data and the section <<running-your-analyzer.adoc#,Running Your Analyzer>> has specific examples for using the Analysis screen.


@@ -22,10 +22,10 @@ These sources can be either Solr fields indexed with docValues, or constants.
== Supported Field Types
The following <<field-types-included-with-solr.adoc#field-types-included-with-solr, Solr field types>> are supported.
The following <<field-types-included-with-solr.adoc#, Solr field types>> are supported.
Fields of these types can be either multi-valued or single-valued.
All fields used in analytics expressions *must* have <<docvalues.adoc#docvalues,docValues>> enabled.
All fields used in analytics expressions *must* have <<docvalues.adoc#,docValues>> enabled.
// Since Trie* fields are deprecated as of 7.0, we should consider removing Trie* fields from this list...
@@ -77,7 +77,7 @@ There are two possible ways of specifying constant strings, as shown below.
=== Dates
Dates can be specified in the same way as they are in Solr queries. Just use ISO-8601 format.
For more information, refer to the <<working-with-dates.adoc#working-with-dates,Working with Dates>> section.
For more information, refer to the <<working-with-dates.adoc#,Working with Dates>> section.
* `2017-07-17T19:35:08Z`
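
As an aside (not part of this commit), `java.time` produces exactly this ISO-8601 form:

[source,java]
----
import java.time.Instant;

public class SolrDate {
  public static void main(String[] args) {
    // Instant.toString() yields ISO-8601 UTC instants such as 2017-07-17T19:35:08Z.
    Instant date = Instant.parse("2017-07-17T19:35:08Z");
    System.out.println(date); // prints the same instant back in Solr's date format
  }
}
----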


@@ -17,8 +17,8 @@
// specific language governing permissions and limitations
// under the License.
Reduction functions reduce the values of <<analytics-expression-sources.adoc#analytics-expression-sources,sources>>
and/or unreduced <<analytics-mapping-functions.adoc#analytics-mapping-functions,mapping functions>>
Reduction functions reduce the values of <<analytics-expression-sources.adoc#,sources>>
and/or unreduced <<analytics-mapping-functions.adoc#,mapping functions>>
for every Solr Document to a single value.
Below is a list of all reduction functions provided by the Analytics Component.


@@ -161,7 +161,7 @@ The supported fields are listed in the <<analytics-expression-sources.adoc#suppo
Mapping Functions::
Mapping functions map values for each Solr Document or Reduction.
The provided mapping functions are detailed in the <<analytics-mapping-functions.adoc#analytics-mapping-functions,Analytics Mapping Function Reference>>.
The provided mapping functions are detailed in the <<analytics-mapping-functions.adoc#,Analytics Mapping Function Reference>>.
* Unreduced Mapping: Mapping a Field with another Field or Constant returns a value for every Solr Document.
Unreduced mapping functions can take fields, constants as well as other unreduced mapping functions as input.
@@ -170,7 +170,7 @@ Unreduced mapping functions can take fields, constants as well as other unreduce
Reduction Functions::
Functions that reduce the values of sources and/or unreduced mapping functions for every Solr Document to a single value.
The provided reduction functions are detailed in the <<analytics-reduction-functions.adoc#analytics-reduction-functions,Analytics Reduction Function Reference>>.
The provided reduction functions are detailed in the <<analytics-reduction-functions.adoc#,Analytics Reduction Function Reference>>.
==== Component Ordering


@@ -84,7 +84,7 @@ This example also defines `security.json` on the command line, but you can also
[WARNING]
====
Whenever you use any security plugins and store `security.json` in ZooKeeper, we highly recommend that you implement access control in your ZooKeeper nodes. Information about how to enable this is available in the section <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>.
Whenever you use any security plugins and store `security.json` in ZooKeeper, we highly recommend that you implement access control in your ZooKeeper nodes. Information about how to enable this is available in the section <<zookeeper-access-control.adoc#,ZooKeeper Access Control>>.
====
Once `security.json` has been uploaded to ZooKeeper, you should use the appropriate APIs for the plugins you're using to update it. You can edit it manually, but you must take care to remove any version data so it will be properly updated across all ZooKeeper nodes. The version data is found at the end of the `security.json` file, and will appear as the letter "v" followed by a number, such as `{"v":138}`.
@@ -93,7 +93,7 @@ Once `security.json` has been uploaded to ZooKeeper, you should use the appropri
When running Solr in standalone mode, you need to create the `security.json` file and put it in the `$SOLR_HOME` directory for your installation (this is the same place you have located `solr.xml` and is usually `server/solr`).
If you are using <<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>, you will need to place `security.json` on each node of the cluster.
If you are using <<legacy-scaling-and-distribution.adoc#,Legacy Scaling and Distribution>>, you will need to place `security.json` on each node of the cluster.
You can use the authentication and authorization APIs, but if you are using the legacy scaling model, you will need to make the same API requests on each node separately. You can also edit `security.json` by hand if you prefer.
@@ -159,7 +159,7 @@ include::securing-solr.adoc[tag=list-of-authorization-plugins]
[#configuring-audit-logging]
== Audit Logging
<<audit-logging.adoc#audit-logging,Audit logging>> plugins help you keep an audit trail of events happening in your Solr cluster.
<<audit-logging.adoc#,Audit logging>> plugins help you keep an audit trail of events happening in your Solr cluster.
Audit logging may, for example, ship data to an external audit service.
A custom plugin can be implemented by extending the `AuditLoggerPlugin` class.
@@ -169,8 +169,8 @@ Whenever an authentication plugin is enabled, authentication is also required fo
When authentication is required, the Admin UI will present you with a login dialogue. The authentication plugins currently supported by the Admin UI are:
* <<basic-authentication-plugin.adoc#basic-authentication-plugin,Basic Authentication Plugin>>
* <<jwt-authentication-plugin.adoc#jwt-authentication-plugin,JWT Authentication Plugin>>
* <<basic-authentication-plugin.adoc#,Basic Authentication Plugin>>
* <<jwt-authentication-plugin.adoc#,JWT Authentication Plugin>>
If your plugin of choice is not supported, the Admin UI will still let you perform unrestricted operations, while for restricted operations you will need to interact with Solr by sending HTTP requests instead of through the graphical user interface of the Admin UI. All operations supported by the Admin UI can be performed through Solr's RESTful APIs.


@@ -19,7 +19,7 @@
This guide is a tutorial on how to set up a multi-node SolrCloud cluster on https://aws.amazon.com/ec2[Amazon Web Services (AWS) EC2] instances for early development and design.
This tutorial is not meant for production systems. For one, it uses Solr's embedded ZooKeeper instance, and for production you should have at least 3 ZooKeeper nodes in an ensemble. There are additional steps you should take for a production installation; refer to <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>> for how to deploy Solr in production.
This tutorial is not meant for production systems. For one, it uses Solr's embedded ZooKeeper instance, and for production you should have at least 3 ZooKeeper nodes in an ensemble. There are additional steps you should take for a production installation; refer to <<taking-solr-to-production.adoc#,Taking Solr to Production>> for how to deploy Solr in production.
In this guide we are going to:
@@ -39,7 +39,7 @@ In this guide we are going to:
To use this guide, you must have the following:
* An https://aws.amazon.com[AWS] account.
* Familiarity with setting up a single-node SolrCloud on a local machine. Refer to the <<solr-tutorial.adoc#solr-tutorial,Solr Tutorial>> if you have never used Solr before.
* Familiarity with setting up a single-node SolrCloud on a local machine. Refer to the <<solr-tutorial.adoc#,Solr Tutorial>> if you have never used Solr before.
== Launch EC2 instances
@@ -192,7 +192,7 @@ $ sudo vim /etc/hosts
=== Install ZooKeeper
These steps will help you install and configure a single instance of ZooKeeper on AWS. This is not sufficient for production use, however, where a ZooKeeper ensemble of at least three nodes is recommended. See the section <<setting-up-an-external-zookeeper-ensemble.adoc#setting-up-an-external-zookeeper-ensemble,Setting Up an External ZooKeeper Ensemble>> for information about how to change this single instance into an ensemble.
These steps will help you install and configure a single instance of ZooKeeper on AWS. This is not sufficient for production use, however, where a ZooKeeper ensemble of at least three nodes is recommended. See the section <<setting-up-an-external-zookeeper-ensemble.adoc#,Setting Up an External ZooKeeper Ensemble>> for information about how to change this single instance into an ensemble.
. Download a stable version of ZooKeeper. In this example we're using ZooKeeper v{ivy-zookeeper-version}. On the node you're using to host ZooKeeper (`zookeeper-node`), download the package and untar it:
+
@@ -265,6 +265,6 @@ $ bin/solr start -c -p 8983 -h solr-node-1 -z zookeeper-node:2181
====
As noted earlier, a single ZooKeeper node is not sufficient for a production installation. See these additional resources for more information about deploying Solr in production, which can be used once you have the EC2 instances up and running:
* <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>
* <<setting-up-an-external-zookeeper-ensemble.adoc#setting-up-an-external-zookeeper-ensemble,Setting Up an External ZooKeeper Ensemble>>
* <<taking-solr-to-production.adoc#,Taking Solr to Production>>
* <<setting-up-an-external-zookeeper-ensemble.adoc#,Setting Up an External ZooKeeper Ensemble>>
====


@@ -18,7 +18,7 @@
Solr can support Basic authentication for users with the use of the BasicAuthPlugin.
An authorization plugin is also available to configure Solr with permissions to perform various activities in the system. The authorization plugin is described in the section <<rule-based-authorization-plugin.adoc#rule-based-authorization-plugin,Rule-Based Authorization Plugin>>.
An authorization plugin is also available to configure Solr with permissions to perform various activities in the system. The authorization plugin is described in the section <<rule-based-authorization-plugin.adoc#,Rule-Based Authorization Plugin>>.
== Enable Basic Authentication
@@ -26,7 +26,7 @@ To use Basic authentication, you must first create a `security.json` file. This
For Basic authentication, the `security.json` file must have an `authentication` part which defines the class being used for authentication. Usernames and passwords (as a sha256(password+salt) hash) could be added when the file is created, or can be added later with the Basic authentication API, described below.
The `authorization` part is not related to Basic authentication, but is a separate authorization plugin designed to support fine-grained user access control. For more information, see the section <<rule-based-authorization-plugin.adoc#rule-based-authorization-plugin,Rule-Based Authorization Plugin>>.
The `authorization` part is not related to Basic authentication, but is a separate authorization plugin designed to support fine-grained user access control. For more information, see the section <<rule-based-authorization-plugin.adoc#,Rule-Based Authorization Plugin>>.
An example `security.json` showing both sections is shown below to show how these plugins can work together:
@@ -77,7 +77,7 @@ NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<setting
There are a few things to keep in mind when using the Basic authentication plugin.
* Credentials are sent in plain text by default. It's recommended to use SSL for communication when Basic authentication is enabled, as described in the section <<enabling-ssl.adoc#enabling-ssl,Enabling SSL>>.
* Credentials are sent in plain text by default. It's recommended to use SSL for communication when Basic authentication is enabled, as described in the section <<enabling-ssl.adoc#,Enabling SSL>>.
* A user who has write permissions to `security.json` will be able to modify all the permissions and how users have been assigned permissions. Special care should be taken to only grant access to editing security to appropriate users.
* Your network should, of course, be secure. Even with Basic authentication enabled, you should not unnecessarily expose Solr to the outside world.
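
A minimal SolrJ sketch of sending per-request credentials follows (illustrative only; `solr`/`SolrRocks` are the guide's example credentials and the collection name is an assumption):

[source,java]
----
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class BasicAuthQuery {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
      req.setBasicAuthCredentials("solr", "SolrRocks"); // plain text on the wire; use SSL
      QueryResponse rsp = req.process(client, "techproducts");
      System.out.println("Found " + rsp.getResults().getNumFound() + " documents");
    }
  }
}
----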


@@ -36,7 +36,7 @@ An example `security.json` is shown below:
=== Certificate Validation
Parts of certificate validation, including verifying the trust chain and peer hostname/IP address, will be done by the web servlet container before the request ever reaches the authentication plugin.
These checks are described in the <<enabling-ssl.adoc#enabling-ssl,Enabling SSL>> section.
These checks are described in the <<enabling-ssl.adoc#,Enabling SSL>> section.
This plugin provides no additional checking beyond what has been configured via SSL properties.


@@ -18,6 +18,6 @@
Many programming environments are able to send HTTP requests and retrieve responses. Parsing the responses is a slightly more thorny problem. Fortunately, Solr makes it easy to choose an output format that will be easy to handle on the client side.
Specify a response format using the `wt` parameter in a query. The available response formats are documented in <<response-writers.adoc#response-writers,Response Writers>>.
Specify a response format using the `wt` parameter in a query. The available response formats are documented in <<response-writers.adoc#,Response Writers>>.
Most client APIs hide this detail for you, so for many types of client applications, you won't ever have to specify a `wt` parameter. In JavaScript, however, the interface to Solr is a little closer to the metal, so you will need to add this parameter yourself.
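
Because `wt` is just another query parameter, even a bare HTTP client can select the format; a minimal Java 11 sketch (illustrative; the collection name is an assumption):

[source,java]
----
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WtParamDemo {
  public static void main(String[] args) throws Exception {
    HttpClient http = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder(
        URI.create("http://localhost:8983/solr/techproducts/select?q=*:*&wt=json")).build();
    HttpResponse<String> rsp = http.send(req, HttpResponse.BodyHandlers.ofString());
    System.out.println(rsp.body()); // a JSON response, because wt=json
  }
}
----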


@@ -19,16 +19,16 @@
This section discusses the available client APIs for Solr. It covers the following topics:
<<introduction-to-client-apis.adoc#introduction-to-client-apis,Introduction to Client APIs>>: A conceptual overview of Solr client APIs.
<<introduction-to-client-apis.adoc#,Introduction to Client APIs>>: A conceptual overview of Solr client APIs.
<<choosing-an-output-format.adoc#choosing-an-output-format,Choosing an Output Format>>: Information about choosing a response format in Solr.
<<choosing-an-output-format.adoc#,Choosing an Output Format>>: Information about choosing a response format in Solr.
<<using-solrj.adoc#using-solrj,Using SolrJ>>: Detailed information about SolrJ, an API for working with Java applications.
<<using-solrj.adoc#,Using SolrJ>>: Detailed information about SolrJ, an API for working with Java applications.
<<using-javascript.adoc#using-javascript,Using JavaScript>>: Explains why a client API is not needed for JavaScript responses.
<<using-javascript.adoc#,Using JavaScript>>: Explains why a client API is not needed for JavaScript responses.
<<using-python.adoc#using-python,Using Python>>: Information about Python and JSON responses.
<<using-python.adoc#,Using Python>>: Information about Python and JSON responses.
<<using-solr-from-ruby.adoc#using-solr-from-ruby,Using Solr From Ruby>>: Detailed information about using Solr with Ruby applications.
<<using-solr-from-ruby.adoc#,Using Solr From Ruby>>: Detailed information about using Solr with Ruby applications.
<<client-api-lineup.adoc#client-api-lineup,Other Clients>>: How to find links to 3rd-party client libraries.
<<client-api-lineup.adoc#,Other Clients>>: How to find links to 3rd-party client libraries.


@@ -16,14 +16,14 @@
// specific language governing permissions and limitations
// under the License.
When running in <<solrcloud.adoc#solrcloud,SolrCloud>> mode, a "Cloud" option will appear in the Admin UI between <<logging.adoc#logging,Logging>> and <<collections-core-admin.adoc#collections-core-admin,Collections>>.
When running in <<solrcloud.adoc#,SolrCloud>> mode, a "Cloud" option will appear in the Admin UI between <<logging.adoc#,Logging>> and <<collections-core-admin.adoc#,Collections>>.
This screen provides status information about each collection & node in your cluster, as well as access to the low level data being stored in <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,ZooKeeper>>.
This screen provides status information about each collection & node in your cluster, as well as access to the low level data being stored in <<using-zookeeper-to-manage-configuration-files.adoc#,ZooKeeper>>.
.Only Visible When using SolrCloud
[NOTE]
====
The "Cloud" menu option is only available on Solr instances running in <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,SolrCloud mode>>. Single node or leader/follower replication instances of Solr will not display this option.
The "Cloud" menu option is only available on Solr instances running in <<getting-started-with-solrcloud.adoc#,SolrCloud mode>>. Single node or leader/follower replication instances of Solr will not display this option.
====
Click on the "Cloud" option in the left-hand navigation, and a small sub-menu appears with options called "Nodes", "Tree", "ZK Status" and "Graph". The sub-view selected by default is "Nodes".


@@ -131,7 +131,7 @@ Add, edit or delete a cluster-wide property.
`name`::
The name of the property. Supported property names are `location`, `maxCoresPerNode`, `urlScheme`, and `defaultShardPreferences`.
If the <<solr-tracing.adoc#solr-tracing,Jaeger tracing contrib>> has been enabled, the property `samplePercentage` is also available.
If the <<solr-tracing.adoc#,Jaeger tracing contrib>> has been enabled, the property `samplePercentage` is also available.
+
Other properties can be set (for example, if you need them for custom plugins) but they must begin with the prefix `ext.`.
Unknown properties that don't begin with `ext.` will be rejected.
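
From SolrJ, setting one of these properties might look like the sketch below (illustrative; it assumes the `CollectionAdminRequest.setClusterProperty` helper and a local ZooKeeper):

[source,java]
----
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class SetClusterProp {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      // Equivalent to /admin/collections?action=CLUSTERPROP&name=urlScheme&val=https
      CollectionAdminRequest.setClusterProperty("urlScheme", "https").process(client);
    }
  }
}
----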


@@ -18,7 +18,7 @@
The Collapsing query parser and the Expand component combine to form an approach to grouping documents for field collapsing in search results.
The Collapsing query parser groups documents (collapsing the result set) according to your parameters, while the Expand component provides access to documents in the collapsed group for use in results display or other processing by a client application. Collapse & Expand can together do what the older <<result-grouping.adoc#result-grouping,Result Grouping>> (`group=true`) does for _most_ use-cases but not all. Collapse and Expand are not supported when Result Grouping is enabled. Generally, you should prefer Collapse & Expand.
The Collapsing query parser groups documents (collapsing the result set) according to your parameters, while the Expand component provides access to documents in the collapsed group for use in results display or other processing by a client application. Collapse & Expand can together do what the older <<result-grouping.adoc#,Result Grouping>> (`group=true`) does for _most_ use-cases but not all. Collapse and Expand are not supported when Result Grouping is enabled. Generally, you should prefer Collapse & Expand.
[IMPORTANT]
====
@@ -39,7 +39,7 @@ The CollapsingQParser accepts the following local parameters:
The field that is being collapsed on. The field must be a single valued String, Int or Float-type of field.
`min` or `max`::
Selects the group head document for each group based on which document has the min or max value of the specified numeric field or <<function-queries.adoc#function-queries,function query>>.
Selects the group head document for each group based on which document has the min or max value of the specified numeric field or <<function-queries.adoc#,function query>>.
+
At most only one of the `min`, `max`, or `sort` (see below) parameters may be specified.
+
@@ -134,7 +134,7 @@ fq={!collapse cost=1000 field=group_field}
=== Block Collapsing
When collapsing on the `\_root_` field, using `nullPolicy=expand` or `nullPolicy=ignore`, the Collapsing Query Parser can take advantage of the fact that all docs with identical field values are adjacent to each other in the index in a single <<indexing-nested-documents.adoc#indexing-nested-documents,"block" of nested documents>>. This allows the collapsing logic to be much faster and more memory efficient.
When collapsing on the `\_root_` field, using `nullPolicy=expand` or `nullPolicy=ignore`, the Collapsing Query Parser can take advantage of the fact that all docs with identical field values are adjacent to each other in the index in a single <<indexing-nested-documents.adoc#,"block" of nested documents>>. This allows the collapsing logic to be much faster and more memory efficient.
The default collapsing logic must keep track of all group head documents -- for all groups encountered so far -- until it has evaluated all documents, because each document it considers may become the new group head of any group.
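
An illustrative SolrJ sketch of a collapse-plus-expand request, reusing the `group_field` name from the example above (not part of this commit):

[source,java]
----
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class CollapseExpandQuery {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.addFilterQuery("{!collapse field=group_field}"); // one group head per group_field value
      q.set("expand", "true");                           // Expand component returns the group members
      System.out.println(client.query(q).getResults().getNumFound() + " group heads");
    }
  }
}
----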


@@ -85,13 +85,13 @@ When such a collection is deleted, its autocreated configset will be deleted by
`router.field`::
If this parameter is specified, the router will look at the value of the field in an input document to compute the hash and identify a shard instead of looking at the `uniqueKey` field. If the field specified is null in the document, the document will be rejected.
+
Please note that <<realtime-get.adoc#realtime-get,RealTime Get>> or retrieval by document ID would also require the parameter `\_route_` (or `shard.keys`) to avoid a distributed search.
Please note that <<realtime-get.adoc#,RealTime Get>> or retrieval by document ID would also require the parameter `\_route_` (or `shard.keys`) to avoid a distributed search.
`perReplicaState`::
If `true`, the states of individual replicas will be maintained as individual children of `state.json`. The default is `false`.
`property._name_=_value_`::
Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#,Defining core.properties>> for details on supported properties and values.
[WARNING]
====
@@ -474,7 +474,7 @@ The routing key prefix. For example, if the uniqueKey of a document is "a!123",
The timeout, in seconds, until which write requests made to the source collection for the given `split.key` will be forwarded to the target shard. The default is 60 seconds.
`property._name_=_value_`::
Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#,Defining core.properties>> for details on supported properties and values.
`async`::
Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
@@ -1209,7 +1209,7 @@ Backs up Solr collections and associated configurations to a shared filesystem -
`/admin/collections?action=BACKUP&name=myBackupName&collection=myCollectionName&location=/path/to/my/shared/drive`
The BACKUP command will backup Solr indexes and configurations for a specified collection. The BACKUP command <<making-and-restoring-backups.adoc#making-and-restoring-backups,takes one copy from each shard for the indexes>>. For configurations, it backs up the configset that was associated with the collection and metadata.
The BACKUP command will backup Solr indexes and configurations for a specified collection. The BACKUP command <<making-and-restoring-backups.adoc#,takes one copy from each shard for the indexes>>. For configurations, it backs up the configset that was associated with the collection and metadata.
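
The same BACKUP call can be issued from SolrJ, roughly as in this sketch (the names and path are the placeholders from the URL above):

[source,java]
----
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class BackupCollection {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Equivalent to /admin/collections?action=BACKUP&name=myBackupName&collection=myCollectionName&location=...
      CollectionAdminRequest.backupCollection("myCollectionName", "myBackupName")
          .setLocation("/path/to/my/shared/drive")
          .process(client);
    }
  }
}
----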
=== BACKUP Parameters


@@ -22,9 +22,9 @@ In the left-hand navigation bar, you will see a pull-down menu titled "Collectio
.Only Visible When Using SolrCloud
[NOTE]
====
The "Collection Selector" pull-down menu is only available on Solr instances running in <<solrcloud.adoc#solrcloud,SolrCloud mode>>.
The "Collection Selector" pull-down menu is only available on Solr instances running in <<solrcloud.adoc#,SolrCloud mode>>.
Single node or leader/follower replication instances of Solr will not display this menu, instead the Collection specific UI pages described in this section will be available in the <<core-specific-tools.adoc#core-specific-tools,Core Selector pull-down menu>>.
Single node or leader/follower replication instances of Solr will not display this menu, instead the Collection specific UI pages described in this section will be available in the <<core-specific-tools.adoc#,Core Selector pull-down menu>>.
====
Clicking on the Collection Selector pull-down menu will show a list of the collections in your Solr cluster, with a search box that can be used to find a specific collection by name. When you select a collection from the pull-down, the main display of the page will display some basic metadata about the collection, and a secondary menu will appear in the left nav with links to additional collection specific administration screens.
@@ -34,10 +34,10 @@ image::images/collection-specific-tools/collection_dashboard.png[image,width=482
The collection-specific UI screens are listed below, with a link to the section of this guide to find out more:
// TODO: SOLR-10655 BEGIN: refactor this into a 'collection-screens-list.include.adoc' file for reuse
* <<analysis-screen.adoc#analysis-screen,Analysis>> - lets you analyze the data found in specific fields.
* <<documents-screen.adoc#documents-screen,Documents>> - provides a simple form allowing you to execute various Solr indexing commands directly from the browser.
* <<files-screen.adoc#files-screen,Files>> - shows the current core configuration files such as `solrconfig.xml`.
* <<query-screen.adoc#query-screen,Query>> - lets you submit a structured query about various elements of a core.
* <<stream-screen.adoc#stream-screen,Stream>> - allows you to submit streaming expressions and see results and parsing explanations.
* <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser>> - displays schema data in a browser window.
* <<analysis-screen.adoc#,Analysis>> - lets you analyze the data found in specific fields.
* <<documents-screen.adoc#,Documents>> - provides a simple form allowing you to execute various Solr indexing commands directly from the browser.
* <<files-screen.adoc#,Files>> - shows the current core configuration files such as `solrconfig.xml`.
* <<query-screen.adoc#,Query>> - lets you submit a structured query about various elements of a core.
* <<stream-screen.adoc#,Stream>> - allows you to submit streaming expressions and see results and parsing explanations.
* <<schema-browser-screen.adoc#,Schema Browser>> - displays schema data in a browser window.
// TODO: SOLR-10655 END


@@ -22,15 +22,15 @@ A SolrCloud cluster includes a number of components. The Collections API is prov
Because this API has a large number of commands and options, we've grouped the commands into the following sub-sections:
*<<cluster-node-management.adoc#cluster-node-management,Cluster and Node Management>>*: Define properties for the entire cluster; check the status of a cluster; remove replicas from a node; utilize a newly added node; add or remove roles for a node.
*<<cluster-node-management.adoc#,Cluster and Node Management>>*: Define properties for the entire cluster; check the status of a cluster; remove replicas from a node; utilize a newly added node; add or remove roles for a node.
*<<collection-management.adoc#collection-management,Collection Management>>*: Create, list, reload and delete collections; set collection properties; migrate documents to another collection; rebalance leaders; backup and restore collections.
*<<collection-management.adoc#,Collection Management>>*: Create, list, reload and delete collections; set collection properties; migrate documents to another collection; rebalance leaders; backup and restore collections.
*<<collection-aliasing.adoc#collection-aliasing,Collection Aliasing>>*: Create, list or delete collection aliases; set alias properties.
*<<collection-aliasing.adoc#,Collection Aliasing>>*: Create, list or delete collection aliases; set alias properties.
*<<shard-management.adoc#shard-management,Shard Management>>*: Create and delete a shard; split a shard into two or more additional shards; force a shard leader.
*<<shard-management.adoc#,Shard Management>>*: Create and delete a shard; split a shard into two or more additional shards; force a shard leader.
*<<replica-management.adoc#replica-management,Replica Management>>*: Add or delete a replica; set replica properties; move a replica to a different node.
*<<replica-management.adoc#,Replica Management>>*: Add or delete a replica; set replica properties; move a replica to a different node.
== Asynchronous Calls


@@ -16,13 +16,13 @@
// specific language governing permissions and limitations
// under the License.
The Collections screen provides some basic functionality for managing your Collections, powered by the <<collections-api.adoc#collections-api,Collections API>>.
The Collections screen provides some basic functionality for managing your Collections, powered by the <<collections-api.adoc#,Collections API>>.
[NOTE]
====
If you are running a single node Solr instance, you will not see a Collections option in the left nav menu of the Admin UI.
You will instead see a "Core Admin" screen that supports some comparable Core level information & manipulation via the <<coreadmin-api.adoc#coreadmin-api,CoreAdmin API>> instead.
You will instead see a "Core Admin" screen that supports some comparable Core level information & manipulation via the <<coreadmin-api.adoc#,CoreAdmin API>> instead.
====
The main display of this page provides a list of collections that exist in your cluster. Clicking on a collection name provides some basic metadata about how the collection is defined, and its current shards & replicas, with options for adding and deleting individual replicas.


@@ -20,11 +20,11 @@ A ZooKeeper Command Line Interface (CLI) script is available to allow you to int
While Solr's Administration UI includes pages dedicated to the state of your SolrCloud cluster, it does not allow you to download or modify related configuration files.
TIP: See the section <<cloud-screens.adoc#cloud-screens,Cloud Screens>> for more information about using the Admin UI screens.
TIP: See the section <<cloud-screens.adoc#,Cloud Screens>> for more information about using the Admin UI screens.
The ZooKeeper CLI scripts found in `server/scripts/cloud-scripts` let you upload configuration information to ZooKeeper, in the same ways shown in the examples in <<parameter-reference.adoc#parameter-reference,Parameter Reference>>. It also provides a few other commands that let you link collection sets to collections, make ZooKeeper paths or clear them, and download configurations from ZooKeeper to the local filesystem.
The ZooKeeper CLI scripts found in `server/scripts/cloud-scripts` let you upload configuration information to ZooKeeper, in the same ways shown in the examples in <<parameter-reference.adoc#,Parameter Reference>>. It also provides a few other commands that let you link collection sets to collections, make ZooKeeper paths or clear them, and download configurations from ZooKeeper to the local filesystem.
Many of the functions provided by the zkCli.sh script are also provided by the <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script>>, which may be more familiar, as the start script's ZooKeeper maintenance commands are very similar to Unix commands.
Many of the functions provided by the zkCli.sh script are also provided by the <<solr-control-script-reference.adoc#,Solr Control Script>>, which may be more familiar, as the start script's ZooKeeper maintenance commands are very similar to Unix commands.
.Solr's zkcli.sh vs ZooKeeper's zkCli.sh
[IMPORTANT]


@@ -26,7 +26,7 @@ The defType parameter selects the query parser that Solr should use to process t
`defType=dismax`
If no `defType` parameter is specified, then by default, the <<the-standard-query-parser.adoc#the-standard-query-parser,The Standard Query Parser>> is used. (e.g., `defType=lucene`)
If no `defType` parameter is specified, then by default, the <<the-standard-query-parser.adoc#,The Standard Query Parser>> is used. (e.g., `defType=lucene`)
== sort Parameter
@@ -39,7 +39,7 @@ Solr can sort query responses according to:
* The value of any primitive field (numerics, string, boolean, dates, etc.) which has `docValues="true"` (or `multiValued="false"` and `indexed="true"`, in which case the indexed terms will be used to build DocValue-like structures on the fly at runtime)
* A SortableTextField which implicitly uses `docValues="true"` by default to allow sorting on the original input string regardless of the analyzers used for Searching.
* A single-valued TextField that uses an analyzer (such as the KeywordTokenizer) that produces only a single term per document. TextField does not support `docValues="true"`, but a DocValue-like structure will be built on the fly at runtime.
** *NOTE:* If you want to be able to sort on a field whose contents you want to tokenize to facilitate searching, <<copying-fields.adoc#copying-fields,use a `copyField` directive>> in the Schema to clone the field. Then search on the field and sort on its clone.
** *NOTE:* If you want to be able to sort on a field whose contents you want to tokenize to facilitate searching, <<copying-fields.adoc#,use a `copyField` directive>> in the Schema to clone the field. Then search on the field and sort on its clone.
In the case of primitive fields, or SortableTextFields, that are `multiValued="true"` the representative value used for each doc when sorting depends on the sort direction: The minimum value in each document is used for ascending (`asc`) sorting, while the maximal value in each document is used for descending (`desc`) sorting. This default behavior is equivalent to explicitly sorting using the 2 argument `<<function-queries.adoc#field-function,field()>>` function: `sort=field(name,min) asc` and `sort=field(name,max) desc`
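
Expressed through SolrJ, the plain and explicit `field()` sorts above look roughly like this sketch (field names are illustrative):

[source,java]
----
import org.apache.solr.client.solrj.SolrQuery;

public class SortExamples {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("*:*");
    q.addSort("price", SolrQuery.ORDER.desc);          // plain single-valued field sort
    q.addSort("field(name,min)", SolrQuery.ORDER.asc); // explicit multi-valued selector
    System.out.println(q); // prints the encoded q and sort parameters
  }
}
----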
@@ -104,7 +104,7 @@ fq=popularity:[10 TO *]&fq=section:0
fq=+popularity:[10 TO *] +section:0
----
* The document sets from each filter query are cached independently. Thus, concerning the previous examples: use a single `fq` containing two mandatory clauses if those clauses appear together often, and use two separate `fq` parameters if they are relatively independent. (To learn about tuning cache sizes and making sure a filter cache actually exists, see <<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,The Well-Configured Solr Instance>>.)
* The document sets from each filter query are cached independently. Thus, concerning the previous examples: use a single `fq` containing two mandatory clauses if those clauses appear together often, and use two separate `fq` parameters if they are relatively independent. (To learn about tuning cache sizes and making sure a filter cache actually exists, see <<the-well-configured-solr-instance.adoc#,The Well-Configured Solr Instance>>.)
* It is also possible to use <<the-standard-query-parser.adoc#differences-between-lucenes-classic-query-parser-and-solrs-standard-query-parser,filter(condition) syntax>> inside the `fq` to cache clauses individually and - among other things - to achieve union of cached filter queries.
* As with all parameters: special characters in a URL need to be properly escaped and encoded as hex values. Online tools are available to help you with URL-encoding. For example: http://meyerweb.com/eric/tools/dencoder/.
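
A short SolrJ sketch of the two `fq` styles discussed above (illustrative only; SolrJ also takes care of the URL-encoding concern from the last bullet):

[source,java]
----
import org.apache.solr.client.solrj.SolrQuery;

public class FilterQueryExamples {
  public static void main(String[] args) {
    // Two independent fq clauses: each document set is cached separately.
    SolrQuery independent = new SolrQuery("*:*");
    independent.addFilterQuery("popularity:[10 TO *]", "section:0");

    // One fq with two mandatory clauses: cached as a single filter entry.
    SolrQuery combined = new SolrQuery("*:*");
    combined.addFilterQuery("+popularity:[10 TO *] +section:0");
    System.out.println(combined); // prints the encoded parameters
  }
}
----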
@@ -133,7 +133,7 @@ This table shows some basic examples of how to use `fl`:
=== Functions with fl
<<function-queries.adoc#function-queries,Functions>> can be computed for each document in the result and returned as a pseudo-field:
<<function-queries.adoc#,Functions>> can be computed for each document in the result and returned as a pseudo-field:
[source,text]
----
@@ -142,7 +142,7 @@ fl=id,title,product(price,popularity)
=== Document Transformers with fl
<<transforming-result-documents.adoc#transforming-result-documents,Document Transformers>> can be used to modify the information returned about each document in the results of a query:
<<transforming-result-documents.adoc#,Document Transformers>> can be used to modify the information returned about each document in the results of a query:
[source,text]
----
@@ -206,7 +206,7 @@ The default value of this parameter is blank, which causes no extra "explain inf
== timeAllowed Parameter
This parameter specifies the amount of time, in milliseconds, allowed for a search to complete. If this time expires before the search is complete, any partial results will be returned, but values such as `numFound`, <<faceting.adoc#faceting,facet>> counts, and result <<the-stats-component.adoc#the-stats-component,stats>> may not be accurate for the entire result set. In case of expiration, if `omitHeader` isn't set to `true` the response header contains a special flag called `partialResults`. When using `timeAllowed` in combination with <<pagination-of-results.adoc#using-cursors,`cursorMark`>>, and the `partialResults` flag is present, some matching documents may have been skipped in the result set. Additionally, if the `partialResults` flag is present, `cursorMark` can match `nextCursorMark` even if there may be more results
This parameter specifies the amount of time, in milliseconds, allowed for a search to complete. If this time expires before the search is complete, any partial results will be returned, but values such as `numFound`, <<faceting.adoc#,facet>> counts, and result <<the-stats-component.adoc#,stats>> may not be accurate for the entire result set. In case of expiration, if `omitHeader` isn't set to `true` the response header contains a special flag called `partialResults`. When using `timeAllowed` in combination with <<pagination-of-results.adoc#using-cursors,`cursorMark`>>, and the `partialResults` flag is present, some matching documents may have been skipped in the result set. Additionally, if the `partialResults` flag is present, `cursorMark` can match `nextCursorMark` even if there may be more results
[source,json]
----
@@ -244,7 +244,7 @@ If set to `true`, and if <<indexconfig-in-solrconfig.adoc#mergepolicyfactory,the
If early termination is used, a `segmentTerminatedEarly` header will be included in the `responseHeader`.
Similar to using <<timeAllowed Parameter,the `timeAllowed` Parameter>>, when early segment termination happens values such as `numFound`, <<faceting.adoc#faceting,Facet>> counts, and result <<the-stats-component.adoc#the-stats-component,Stats>> may not be accurate for the entire result set.
Similar to using <<timeAllowed Parameter,the `timeAllowed` Parameter>>, when early segment termination happens values such as `numFound`, <<faceting.adoc#,Facet>> counts, and result <<the-stats-component.adoc#,Stats>> may not be accurate for the entire result set.
The default value of this parameter is `false`.
@@ -252,11 +252,11 @@ The default value of this parameter is `false`.
This parameter may be set to either `true` or `false`.
If set to `true`, this parameter excludes the header from the returned results. The header contains information about the request, such as the time it took to complete. The default value for this parameter is `false`. When using parameters such as <<common-query-parameters.adoc#timeallowed-parameter,`timeAllowed`>>, and <<solrcloud-query-routing-and-read-tolerance.adoc#shards-tolerant-parameter,`shards.tolerant`>>, which can lead to partial results, it is advisable to keep the header, so that the `partialResults` flag can be checked, and values such as `numFound`, `nextCursorMark`, <<faceting.adoc#faceting,Facet>> counts, and result <<the-stats-component.adoc#the-stats-component,Stats>> can be interpreted in the context of partial results.
If set to `true`, this parameter excludes the header from the returned results. The header contains information about the request, such as the time it took to complete. The default value for this parameter is `false`. When using parameters such as <<common-query-parameters.adoc#timeallowed-parameter,`timeAllowed`>>, and <<solrcloud-query-routing-and-read-tolerance.adoc#shards-tolerant-parameter,`shards.tolerant`>>, which can lead to partial results, it is advisable to keep the header, so that the `partialResults` flag can be checked, and values such as `numFound`, `nextCursorMark`, <<faceting.adoc#,Facet>> counts, and result <<the-stats-component.adoc#,Stats>> can be interpreted in the context of partial results.
== wt Parameter
The `wt` parameter selects the Response Writer that Solr should use to format the query's response. For detailed descriptions of Response Writers, see <<response-writers.adoc#response-writers,Response Writers>>.
The `wt` parameter selects the Response Writer that Solr should use to format the query's response. For detailed descriptions of Response Writers, see <<response-writers.adoc#,Response Writers>>.
If you do not define the `wt` parameter in your queries, JSON will be returned as the format of the response.
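As a quick sketch (the `techproducts` collection name is illustrative), the same query can be rendered in different formats purely by changing `wt`:

[source,bash]
----
# JSON is returned by default
curl "http://localhost:8983/solr/techproducts/select?q=solr"
# Request the same results as XML instead
curl "http://localhost:8983/solr/techproducts/select?q=solr&wt=xml"
----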

@@ -28,7 +28,7 @@ All Config API endpoints are collection-specific, meaning this API can inspect o
* `_collection_/config`: retrieve the full effective config, or modify the config. Use GET to retrieve and POST for executing commands.
* `_collection_/config/overlay`: retrieve the details in the `configoverlay.json` only, removing any options defined in `solrconfig.xml` directly or implicitly through defaults.
* `_collection_/config/params`: create parameter sets that can override or take the place of parameters defined in `solrconfig.xml`. See <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>> for more information about this endpoint.
* `_collection_/config/params`: create parameter sets that can override or take the place of parameters defined in `solrconfig.xml`. See <<request-parameters-api.adoc#,Request Parameters API>> for more information about this endpoint.
== Retrieving the Config
@@ -85,7 +85,7 @@ http://localhost:8983/api/collections/techproducts/config/requestHandler
====
--
The output will be details of each request handler defined in `solrconfig.xml`, all <<implicit-requesthandlers.adoc#implicit-requesthandlers,defined implicitly>> by Solr, and all defined with this Config API stored in `configoverlay.json`. To see the configuration for implicit request handlers, add `expandParams=true` to the request. See the documentation for the implicit request handlers for examples using this command.
The output will be details of each request handler defined in `solrconfig.xml`, all <<implicit-requesthandlers.adoc#,defined implicitly>> by Solr, and all defined with this Config API stored in `configoverlay.json`. To see the configuration for implicit request handlers, add `expandParams=true` to the request. See the documentation for the implicit request handlers for examples using this command.
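For instance, assuming a collection named `techproducts`, the effective request handler configuration could be inspected like this:

[source,bash]
----
# List all request handlers; expandParams also resolves paramset references
curl "http://localhost:8983/solr/techproducts/config/requestHandler?expandParams=true"
----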
The available top-level sections that can be added as path parameters are: `query`, `requestHandler`, `searchComponent`, `updateHandler`, `queryResponseWriter`, `initParams`, `znodeVersion`, `listener`, `directoryFactory`, `indexConfig`, and `codecFactory`.
@@ -154,7 +154,7 @@ The properties that can be configured with `set-property` and `unset-property` a
*Update Handler Settings*
See <<updatehandlers-in-solrconfig.adoc#updatehandlers-in-solrconfig,UpdateHandlers in SolrConfig>> for defaults and acceptable values for these settings.
See <<updatehandlers-in-solrconfig.adoc#,UpdateHandlers in SolrConfig>> for defaults and acceptable values for these settings.
* `updateHandler.autoCommit.maxDocs`
* `updateHandler.autoCommit.maxTime`
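As a sketch of changing one of these values with the Config API (the collection name `techproducts` and the 15-second value are illustrative), the change is persisted to `configoverlay.json`:

[source,bash]
----
# Set autoCommit maxTime via the set-property command
curl -X POST -H 'Content-type:application/json' \
  -d '{"set-property": {"updateHandler.autoCommit.maxTime": 15000}}' \
  http://localhost:8983/solr/techproducts/config
----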
@@ -166,7 +166,7 @@ See <<updatehandlers-in-solrconfig.adoc#updatehandlers-in-solrconfig,UpdateHandl
*Query Settings*
See <<query-settings-in-solrconfig.adoc#query-settings-in-solrconfig,Query Settings in SolrConfig>> for defaults and acceptable values for these settings.
See <<query-settings-in-solrconfig.adoc#,Query Settings in SolrConfig>> for defaults and acceptable values for these settings.
_Caches and Cache Sizes_
@@ -203,14 +203,14 @@ _Query Sizing and Warming_
_Query Circuit Breakers_
See <<circuit-breakers.adoc#circuit-breakers,Circuit Breakers in Solr>> for more details.
See <<circuit-breakers.adoc#,Circuit Breakers in Solr>> for more details.
* `query.useCircuitBreakers`
* `query.memoryCircuitBreakerThresholdPct`
*RequestDispatcher Settings*
See <<requestdispatcher-in-solrconfig.adoc#requestdispatcher-in-solrconfig,RequestDispatcher in SolrConfig>> for defaults and acceptable values for these settings.
See <<requestdispatcher-in-solrconfig.adoc#,RequestDispatcher in SolrConfig>> for defaults and acceptable values for these settings.
* `requestDispatcher.handleSelect`
* `requestDispatcher.requestParsers.enableRemoteStreaming`
@@ -859,7 +859,7 @@ Every core watches the ZooKeeper directory for the configset being used with tha
For instance, if the configset 'myconf' is used by a core, the node would watch `/configs/myconf`. Every write operation performed through the API 'touches' the directory, and all watchers are notified. Every core then checks whether the schema file, `solrconfig.xml`, or `configoverlay.json` has been modified by comparing the `znode` versions. If any have been modified, the core is reloaded.
If `params.json` is modified, the params object is just updated without a core reload (see <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>> for more information about `params.json`).
If `params.json` is modified, the params object is just updated without a core reload (see <<request-parameters-api.adoc#,Request Parameters API>> for more information about `params.json`).
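As an illustration, updating a parameter set only touches `params.json`, so no reload is triggered (the paramset name and values here are hypothetical):

[source,bash]
----
# Create or update a parameter set named "myParams" without a core reload
curl -X POST -H 'Content-type:application/json' \
  -d '{"set": {"myParams": {"defType": "edismax", "rows": "10"}}}' \
  http://localhost:8983/solr/techproducts/config/params
----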
=== Empty Command

@@ -16,7 +16,7 @@
// specific language governing permissions and limitations
// under the License.
Configsets are a set of configuration files used in a Solr installation: `solrconfig.xml`, the schema, and then <<resource-loading.adoc#resource-loading,resources>> like language files, `synonyms.txt`, and others.
Configsets are a set of configuration files used in a Solr installation: `solrconfig.xml`, the schema, and then <<resource-loading.adoc#,resources>> like language files, `synonyms.txt`, and others.
Such configuration, _configsets_, can be named and then referenced by collections or cores, possibly with the intent to share them to avoid duplication.
@@ -48,7 +48,7 @@ The structure should look something like this:
/solrconfig.xml
----
The default base directory is `$SOLR_HOME/configsets`. This path can be configured in `solr.xml` (see <<format-of-solr-xml.adoc#format-of-solr-xml,Format of solr.xml>> for details).
The default base directory is `$SOLR_HOME/configsets`. This path can be configured in `solr.xml` (see <<format-of-solr-xml.adoc#,Format of solr.xml>> for details).
To create a new core using a configset, pass `configSet` as one of the core properties. For example, if you do this via the CoreAdmin API:
@@ -90,7 +90,7 @@ This and some demonstration ones remain on the file system but Solr does not use
When you create a collection in SolrCloud, you can specify a named configset -- possibly shared.
If you don't, then the `_default` will be copied and given a unique name for use by this collection.
A configset can be uploaded to ZooKeeper either via the <<configsets-api.adoc#configsets-api,Configsets API>> or more directly via <<solr-control-script-reference.adoc#upload-a-configuration-set,`bin/solr zk upconfig`>>.
A configset can be uploaded to ZooKeeper either via the <<configsets-api.adoc#,Configsets API>> or more directly via <<solr-control-script-reference.adoc#upload-a-configuration-set,`bin/solr zk upconfig`>>.
The Configsets API has some other operations as well, and likewise, so does the CLI.
To upload a file to a configset already stored on ZooKeeper, you can use <<solr-control-script-reference.adoc#copy-between-local-files-and-zookeeper-znodes,`bin/solr zk cp`>>.
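For example, uploading a local configset and then copying a single file into it might look like this (the configset name, paths, and ZooKeeper address are illustrative):

[source,bash]
----
# Upload a local configset directory to ZooKeeper under the name "myconf"
bin/solr zk upconfig -n myconf -d /path/to/configset -z localhost:9983
# Copy one file into the uploaded configset
bin/solr zk cp file:/path/to/synonyms.txt zk:/configs/myconf/synonyms.txt -z localhost:9983
----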

@@ -23,11 +23,11 @@ Configsets are a collection of configuration files such as `solrconfig.xml`, `sy
This API provides a way to upload configuration files to ZooKeeper and share the same set of configuration files between two or more collections.
Once a configset has been uploaded to ZooKeeper, use the configset name when creating the collection with the <<collections-api.adoc#collections-api,Collections API>> and the collection will use your configuration files.
Once a configset has been uploaded to ZooKeeper, use the configset name when creating the collection with the <<collections-api.adoc#,Collections API>> and the collection will use your configuration files.
Configsets do not have to be shared between collections if they are uploaded with this API, but this API makes it easier to do so if you wish. An alternative to uploading your configsets in advance would be to put the configuration files into a directory under `server/solr/configsets` and use the directory name as the `-d` parameter when using `bin/solr create` to create a collection.
NOTE: This API can only be used with Solr running in SolrCloud mode. If you are not running Solr in SolrCloud mode but would still like to use shared configurations, please see the section <<config-sets.adoc#config-sets,Configsets>>.
NOTE: This API can only be used with Solr running in SolrCloud mode. If you are not running Solr in SolrCloud mode but would still like to use shared configurations, please see the section <<config-sets.adoc#,Configsets>>.
The API works by passing commands to the `configs` endpoint. The path to the endpoint varies depending on the API being used: the v1 API uses `solr/admin/configs`, while the v2 API uses `api/cluster/configs`. Examples of both types are provided below.
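For instance, the configsets currently stored in ZooKeeper can be listed through either API style (host and port are illustrative):

[source,bash]
----
# v1 API
curl "http://localhost:8983/solr/admin/configs?action=LIST"
# v2 API
curl "http://localhost:8983/api/cluster/configs"
----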

@@ -19,6 +19,6 @@
Solr includes several APIs that can be used to modify settings in `solrconfig.xml`.
* <<config-api.adoc#config-api,Config API>>
* <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>>
* <<managed-resources.adoc#managed-resources,Managed Resources>>
* <<config-api.adoc#,Config API>>
* <<request-parameters-api.adoc#,Request Parameters API>>
* <<managed-resources.adoc#,Managed Resources>>

@@ -25,7 +25,7 @@ In addition to the logging options described below, there is a way to configure
== Temporary Logging Settings
You can control the amount of logging output in Solr by using the Admin Web interface. Select the *LOGGING* link. Note that this page only lets you change settings in the running system and is not saved for the next run. (For more information about the Admin Web interface, see <<using-the-solr-administration-user-interface.adoc#using-the-solr-administration-user-interface,Using the Solr Administration User Interface>>.)
You can control the amount of logging output in Solr by using the Admin Web interface. Select the *LOGGING* link. Note that this page only lets you change settings in the running system and is not saved for the next run. (For more information about the Admin Web interface, see <<using-the-solr-administration-user-interface.adoc#,Using the Solr Administration User Interface>>.)
.The Logging Screen
image::images/logging/logging.png[image]
@@ -73,7 +73,7 @@ You can temporarily choose a different logging level as you start Solr. There ar
The first way is to set the `SOLR_LOG_LEVEL` environment variable before you start Solr, or place the same variable in `bin/solr.in.sh` or `bin/solr.in.cmd`. The variable must contain an uppercase string with a supported log level (see above).
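For example (the chosen level is illustrative):

[source,bash]
----
# Raise the log threshold to WARN for this run only
export SOLR_LOG_LEVEL=WARN
bin/solr start
----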
The second way is to start Solr with the `-v` or `-q` options; see <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> for details. Examples:
The second way is to start Solr with the `-v` or `-q` options; see <<solr-control-script-reference.adoc#,Solr Control Script Reference>> for details. Examples:
[source,bash]
----
@@ -87,7 +87,7 @@ bin/solr start -f -q
Solr uses http://logging.apache.org/log4j/log4j-{ivy-log4j-version}/[Log4J version {ivy-log4j-version}] for logging which is configured using `server/resources/log4j2.xml`. Take a moment to inspect the contents of the `log4j2.xml` file so that you are familiar with its structure. By default, Solr log messages will be written to `SOLR_LOGS_DIR/solr.log`.
When you're ready to deploy Solr in production, set the variable `SOLR_LOGS_DIR` to the location where you want Solr to write log files, such as `/var/solr/logs`. You may also want to tweak `log4j2.xml`. Note that if you installed Solr as a service using the instructions provided in <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>, then see `/var/solr/log4j2.xml` instead of the default `server/resources` version.
When you're ready to deploy Solr in production, set the variable `SOLR_LOGS_DIR` to the location where you want Solr to write log files, such as `/var/solr/logs`. You may also want to tweak `log4j2.xml`. Note that if you installed Solr as a service using the instructions provided in <<taking-solr-to-production.adoc#,Taking Solr to Production>>, then see `/var/solr/log4j2.xml` instead of the default `server/resources` version.
When starting Solr in the foreground (`-f` option), all logs will be sent to the console, in addition to `solr.log`. When starting Solr in the background, it will write all `stdout` and `stderr` output to a log file in `solr-<port>-console.log`, and automatically disable the CONSOLE logger configured in `log4j2.xml`, having the same effect as if you removed the CONSOLE appender from the rootLogger manually.

@@ -30,7 +30,7 @@
The `solrconfig.xml` file is the configuration file with the most parameters affecting Solr itself.
While configuring Solr, you'll work with `solrconfig.xml` often, either directly or via the <<config-api.adoc#config-api,Config API>> to create "configuration overlays" (`configoverlay.json`) to override the values in `solrconfig.xml`.
While configuring Solr, you'll work with `solrconfig.xml` often, either directly or via the <<config-api.adoc#,Config API>> to create "configuration overlays" (`configoverlay.json`) to override the values in `solrconfig.xml`.
In `solrconfig.xml`, you configure important features such as:
@@ -42,22 +42,22 @@ In `solrconfig.xml`, you configure important features such as:
* the Admin Web interface
* parameters related to replication and duplication (these parameters are covered in detail in <<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>)
* parameters related to replication and duplication (these parameters are covered in detail in <<legacy-scaling-and-distribution.adoc#,Legacy Scaling and Distribution>>)
The `solrconfig.xml` file is located in the `conf/` directory for each collection. Several well-commented example files can be found in the `server/solr/configsets/` directories demonstrating best practices for many different types of installations.
We've covered the options in the following sections:
* <<datadir-and-directoryfactory-in-solrconfig.adoc#datadir-and-directoryfactory-in-solrconfig,DataDir and DirectoryFactory in SolrConfig>>
* <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>
* <<indexconfig-in-solrconfig.adoc#indexconfig-in-solrconfig,IndexConfig in SolrConfig>>
* <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,RequestHandlers and SearchComponents in SolrConfig>>
* <<initparams-in-solrconfig.adoc#initparams-in-solrconfig,InitParams in SolrConfig>>
* <<updatehandlers-in-solrconfig.adoc#updatehandlers-in-solrconfig,UpdateHandlers in SolrConfig>>
* <<query-settings-in-solrconfig.adoc#query-settings-in-solrconfig,Query Settings in SolrConfig>>
* <<requestdispatcher-in-solrconfig.adoc#requestdispatcher-in-solrconfig,RequestDispatcher in SolrConfig>>
* <<update-request-processors.adoc#update-request-processors,Update Request Processors>>
* <<codec-factory.adoc#codec-factory,Codec Factory>>
* <<datadir-and-directoryfactory-in-solrconfig.adoc#,DataDir and DirectoryFactory in SolrConfig>>
* <<schema-factory-definition-in-solrconfig.adoc#,Schema Factory Definition in SolrConfig>>
* <<indexconfig-in-solrconfig.adoc#,IndexConfig in SolrConfig>>
* <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#,RequestHandlers and SearchComponents in SolrConfig>>
* <<initparams-in-solrconfig.adoc#,InitParams in SolrConfig>>
* <<updatehandlers-in-solrconfig.adoc#,UpdateHandlers in SolrConfig>>
* <<query-settings-in-solrconfig.adoc#,Query Settings in SolrConfig>>
* <<requestdispatcher-in-solrconfig.adoc#,RequestDispatcher in SolrConfig>>
* <<update-request-processors.adoc#,Update Request Processors>>
* <<codec-factory.adoc#,Codec Factory>>
Some SolrConfig aspects are covered in other sections.
See <<libs.adoc#lib-directives-in-solrconfig,lib directives in SolrConfig>>, which can be used for both Plugins and Resources.
@@ -86,11 +86,11 @@ Which means the lock type defaults to "native" but when starting Solr, you could
bin/solr start -Dsolr.lock.type=none
----
In general, any Java system property that you want to set can be passed through the `bin/solr` script using the standard `-Dproperty=value` syntax. Alternatively, you can add common system properties to the `SOLR_OPTS` environment variable defined in the Solr include file (`bin/solr.in.sh` or `bin/solr.in.cmd`). For more information about how the Solr include file works, refer to: <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>.
In general, any Java system property that you want to set can be passed through the `bin/solr` script using the standard `-Dproperty=value` syntax. Alternatively, you can add common system properties to the `SOLR_OPTS` environment variable defined in the Solr include file (`bin/solr.in.sh` or `bin/solr.in.cmd`). For more information about how the Solr include file works, refer to: <<taking-solr-to-production.adoc#,Taking Solr to Production>>.
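As a sketch, both approaches applied to the `solr.lock.type` property discussed above:

[source,bash]
----
# Pass a system property directly at startup
bin/solr start -Dsolr.lock.type=none

# Or set it once in the include file (bin/solr.in.sh)
SOLR_OPTS="$SOLR_OPTS -Dsolr.lock.type=none"
----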
=== Config API to Override solrconfig.xml
The <<config-api.adoc#config-api,Config API>> allows you to use an API to modify Solr's configuration, specifically user-defined properties. Changes made with this API are stored in a file named `configoverlay.json`. This file should only be edited with the API, but will look like this example:
The <<config-api.adoc#,Config API>> allows you to use an API to modify Solr's configuration, specifically user-defined properties. Changes made with this API are stored in a file named `configoverlay.json`. This file should only be edited with the API, but will look like this example:
[source,json]
----
@@ -105,7 +105,7 @@ The <<config-api.adoc#config-api,Config API>> allows you to use an API to modify
"components":["terms"]}}}
----
For more details, see the section <<config-api.adoc#config-api,Config API>>.
For more details, see the section <<config-api.adoc#,Config API>>.
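A minimal sketch of setting a user-defined property with this API (the collection name, property, and value are hypothetical):

[source,bash]
----
# Store a user-defined property in configoverlay.json
curl -X POST -H 'Content-type:application/json' \
  -d '{"set-user-property": {"variable_name": "some_value"}}' \
  http://localhost:8983/solr/techproducts/config
----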
=== solrcore.properties
@@ -128,7 +128,7 @@ solr.lock.type=none
[IMPORTANT]
====
The path and name of the `solrcore.properties` file can be overridden using the `properties` property in <<defining-core-properties.adoc#defining-core-properties,`core.properties`>>.
The path and name of the `solrcore.properties` file can be overridden using the `properties` property in <<defining-core-properties.adoc#,`core.properties`>>.
====
@@ -196,7 +196,7 @@ For example, regardless of whether the name for a particular Solr core is explic
</requestHandler>
----
All implicit properties use the `solr.core.` name prefix, and reflect the runtime value of the equivalent <<defining-core-properties.adoc#defining-core-properties,`core.properties` property>>:
All implicit properties use the `solr.core.` name prefix, and reflect the runtime value of the equivalent <<defining-core-properties.adoc#,`core.properties` property>>:
* `solr.core.name`
* `solr.core.config`

@@ -50,7 +50,7 @@ In `solrconfig.xml`, you can enable it by changing the following `enableRemoteSt
When `enableRemoteStreaming` is not specified in `solrconfig.xml`, the default behavior is to _not_ allow remote streaming (i.e., `enableRemoteStreaming="false"`).
Remote streaming can also be enabled through the <<config-api.adoc#config-api,Config API>> as follows:
Remote streaming can also be enabled through the <<config-api.adoc#,Config API>> as follows:
[.dynamic-tabs]
--
@@ -84,4 +84,4 @@ Gzip doesn't apply to `stream.body`.
== Debugging Requests
The implicit "dump" RequestHandler (see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>>) simply outputs the contents of the Solr QueryRequest using the specified writer type `wt`. This is a useful tool to help understand what streams are available to the RequestHandlers.
The implicit "dump" RequestHandler (see <<implicit-requesthandlers.adoc#,Implicit RequestHandlers>>) simply outputs the contents of the Solr QueryRequest using the specified writer type `wt`. This is a useful tool to help understand what streams are available to the RequestHandlers.

@@ -25,7 +25,7 @@ The name of the field you want to copy is the _source_, and the name of the copy
<copyField source="cat" dest="text" maxChars="30000" />
----
In this example, we want Solr to copy the `cat` field to a field named `text`. Fields are copied before <<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,analysis>> is done, meaning you can have two fields with identical original content, but which use different analysis chains and are stored in the index differently.
In this example, we want Solr to copy the `cat` field to a field named `text`. Fields are copied before <<understanding-analyzers-tokenizers-and-filters.adoc#,analysis>> is done, meaning you can have two fields with identical original content, but which use different analysis chains and are stored in the index differently.
In the example above, if the `text` destination field has data of its own in the input documents, the contents of the `cat` field will be added as additional values just as if all of the values had originally been specified by the client. Remember to configure your fields as `multivalued="true"` if they will ultimately get multiple values (either from a multivalued source or from multiple `copyField` directives).
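The same rule can also be added at runtime through the Schema API; a sketch, assuming a collection named `techproducts`:

[source,bash]
----
# Add a copyField rule equivalent to the XML declaration above
curl -X POST -H 'Content-type:application/json' \
  -d '{"add-copy-field": {"source": "cat", "dest": "text", "maxChars": 30000}}' \
  http://localhost:8983/solr/techproducts/schema
----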

@@ -29,19 +29,19 @@ image::images/core-specific-tools/core_dashboard.png[image,width=515,height=250]
The core-specific UI screens are listed below, with a link to the section of this guide to find out more:
// TODO: SOLR-10655 BEGIN: refactor this into a 'core-screens-list.include.adoc' file for reuse
* <<ping.adoc#ping,Ping>> - lets you ping a named core and determine whether the core is active.
* <<plugins-stats-screen.adoc#plugins-stats-screen,Plugins/Stats>> - shows statistics for plugins and other installed components.
* <<replication-screen.adoc#replication-screen,Replication>> - shows you the current replication status for the core, and lets you enable/disable replication.
* <<segments-info.adoc#segments-info,Segments Info>> - provides a visualization of the underlying Lucene index segments.
* <<ping.adoc#,Ping>> - lets you ping a named core and determine whether the core is active.
* <<plugins-stats-screen.adoc#,Plugins/Stats>> - shows statistics for plugins and other installed components.
* <<replication-screen.adoc#,Replication>> - shows you the current replication status for the core, and lets you enable/disable replication.
* <<segments-info.adoc#,Segments Info>> - provides a visualization of the underlying Lucene index segments.
// TODO: SOLR-10655 END
If you are running a single-node instance of Solr, additional UI screens normally displayed on a per-collection basis will also be listed:
// TODO: SOLR-10655 BEGIN: refactor this into a 'collection-screens-list.include.adoc' file for reuse
* <<analysis-screen.adoc#analysis-screen,Analysis>> - lets you analyze the data found in specific fields.
* <<documents-screen.adoc#documents-screen,Documents>> - provides a simple form allowing you to execute various Solr indexing commands directly from the browser.
* <<files-screen.adoc#files-screen,Files>> - shows the current core configuration files such as `solrconfig.xml`.
* <<query-screen.adoc#query-screen,Query>> - lets you submit a structured query about various elements of a core.
* <<stream-screen.adoc#stream-screen,Stream>> - allows you to submit streaming expressions and see results and parsing explanations.
* <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser>> - displays schema data in a browser window.
* <<analysis-screen.adoc#,Analysis>> - lets you analyze the data found in specific fields.
* <<documents-screen.adoc#,Documents>> - provides a simple form allowing you to execute various Solr indexing commands directly from the browser.
* <<files-screen.adoc#,Files>> - shows the current core configuration files such as `solrconfig.xml`.
* <<query-screen.adoc#,Query>> - lets you submit a structured query about various elements of a core.
* <<stream-screen.adoc#,Stream>> - allows you to submit streaming expressions and see results and parsing explanations.
* <<schema-browser-screen.adoc#,Schema Browser>> - displays schema data in a browser window.
// TODO: SOLR-10655 END

@@ -17,11 +17,11 @@
// specific language governing permissions and limitations
// under the License.
The Core Admin API is primarily used under the covers by the <<collections-api.adoc#collections-api,Collections API>> when running a <<solrcloud.adoc#solrcloud,SolrCloud>> cluster.
The Core Admin API is primarily used under the covers by the <<collections-api.adoc#,Collections API>> when running a <<solrcloud.adoc#,SolrCloud>> cluster.
SolrCloud users should not typically use the CoreAdmin API directly, but the API may be useful for users of single-node or leader/follower Solr installations for core maintenance operations.
The CoreAdmin API is implemented by the CoreAdminHandler, which is a special purpose <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,request handler>> that is used to manage Solr cores. Unlike other request handlers, the CoreAdminHandler is not attached to a single core. Instead, there is a single instance of the CoreAdminHandler in each Solr node that manages all the cores running in that node and is accessible at the `/solr/admin/cores` path.
The CoreAdmin API is implemented by the CoreAdminHandler, which is a special purpose <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#,request handler>> that is used to manage Solr cores. Unlike other request handlers, the CoreAdminHandler is not attached to a single core. Instead, there is a single instance of the CoreAdminHandler in each Solr node that manages all the cores running in that node and is accessible at the `/solr/admin/cores` path.
CoreAdmin actions can be executed via HTTP requests that specify an `action` request parameter, with additional action-specific arguments provided as additional parameters.
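For example, asking a node to report the status of all of its cores (host and port are illustrative):

[source,bash]
----
# STATUS is a CoreAdmin action; add &core=<name> to limit it to one core
curl "http://localhost:8983/solr/admin/cores?action=STATUS"
----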
@@ -58,9 +58,9 @@ Note that this command is the only one of the Core Admin API commands that *does
====
Your CREATE call must be able to find a configuration, or it will not succeed.
When you are running SolrCloud and create a new core for a collection, the configuration will be inherited from the collection. Each collection is linked to a configName, which is stored in ZooKeeper. This satisfies the configuration requirement. There is something to note, though: if you're running SolrCloud, you should *NOT* use the CoreAdmin API at all. Use the <<collections-api.adoc#collections-api,Collections API>>.
When you are running SolrCloud and create a new core for a collection, the configuration will be inherited from the collection. Each collection is linked to a configName, which is stored in ZooKeeper. This satisfies the configuration requirement. There is something to note, though: if you're running SolrCloud, you should *NOT* use the CoreAdmin API at all. Use the <<collections-api.adoc#,Collections API>>.
When you are not running SolrCloud, if you have <<config-sets.adoc#config-sets,Configsets>> defined, you can use the `configSet` parameter as documented below. If there are no configsets, then the `instanceDir` specified in the CREATE call must already exist, and it must contain a `conf` directory which in turn must contain `solrconfig.xml`, your schema (usually named either `managed-schema` or `schema.xml`), and any files referenced by those configs.
When you are not running SolrCloud, if you have <<config-sets.adoc#,Configsets>> defined, you can use the `configSet` parameter as documented below. If there are no configsets, then the `instanceDir` specified in the CREATE call must already exist, and it must contain a `conf` directory which in turn must contain `solrconfig.xml`, your schema (usually named either `managed-schema` or `schema.xml`), and any files referenced by those configs.
The config and schema filenames can be specified with the `config` and `schema` parameters, but these are expert options. One thing you could do to avoid creating the `conf` directory is use `config` and `schema` parameters that point at absolute paths, but this can lead to confusing configurations unless you fully understand what you are doing.
====
@@ -83,18 +83,18 @@ The directory where files for this core should be stored. Same as `instanceDir`
Name of the config file (i.e., `solrconfig.xml`) relative to `instanceDir`.
`schema`::
Name of the schema file to use for the core. Please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use. See <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for details.
Name of the schema file to use for the core. Please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use. See <<schema-factory-definition-in-solrconfig.adoc#,Schema Factory Definition in SolrConfig>> for details.
`dataDir`::
Name of the data directory relative to `instanceDir`. If an absolute path is used, it must be inside `SOLR_HOME`, `SOLR_DATA_HOME`, or one of the paths specified by the system property `solr.allowPaths`.
`configSet`::
Name of the configset to use for this core. For more information, see the section <<config-sets.adoc#config-sets,Configsets>>.
Name of the configset to use for this core. For more information, see the section <<config-sets.adoc#,Configsets>>.
`collection`::
The name of the collection to which this core belongs. The default is the name of the core. `collection._param_=_value_` causes a property of `_param_=_value_` to be set if a new collection is being created. Use `collection.configName=_config-name_` to point to the configuration for a new collection.
+
WARNING: While it's possible to create a core for a non-existent collection, this approach is not supported and not recommended. Always create a collection using the <<collections-api.adoc#collections-api,Collections API>> before creating a core directly for it.
WARNING: While it's possible to create a core for a non-existent collection, this approach is not supported and not recommended. Always create a collection using the <<collections-api.adoc#,Collections API>> before creating a core directly for it.
`shard`::
The shard id this core represents. Normally you want to be auto-assigned a shard id.
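Putting several of these parameters together, a sketch of a CREATE call on a standalone (non-SolrCloud) node, with hypothetical core and configset names:

[source,bash]
----
# Create a core named "mycore" from the "_default" configset
curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&configSet=_default"
----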

@@ -29,7 +29,7 @@ By default, Solr stores its index data in a directory called `/data` under the c
The `${solr.core.name}` substitution will cause the name of the current core to be substituted, which results in each core's data being kept in a separate subdirectory.
If you are using replication to replicate the Solr index (as described in <<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>), then the `<dataDir>` directory should correspond to the index directory used in the replication configuration.
If you are using replication to replicate the Solr index (as described in <<legacy-scaling-and-distribution.adoc#,Legacy Scaling and Distribution>>), then the `<dataDir>` directory should correspond to the index directory used in the replication configuration.
NOTE: If the environment variable `SOLR_DATA_HOME` is defined, or if `solr.data.home` is configured for your DirectoryFactory, or if `solr.xml` contains an element `<solrDataHome>`, then the location of the data directory will be `<SOLR_DATA_HOME>/<instance_name>/data`.
@@ -55,5 +55,5 @@ The {solr-javadocs}/core/org/apache/solr/core/RAMDirectoryFactory.html[`solr.RAM
[NOTE]
====
If you are using Hadoop and would like to store your indexes in HDFS, you should use the {solr-javadocs}/core/org/apache/solr/core/HdfsDirectoryFactory.html[`solr.HdfsDirectoryFactory`] instead of either of the above implementations. For more details, see the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>>.
If you are using Hadoop and would like to store your indexes in HDFS, you should use the {solr-javadocs}/core/org/apache/solr/core/HdfsDirectoryFactory.html[`solr.HdfsDirectoryFactory`] instead of either of the above implementations. For more details, see the section <<running-solr-on-hdfs.adoc#,Running Solr on HDFS>>.
====

@@ -39,7 +39,7 @@ There are two places in Solr to configure de-duplication: in `solrconfig.xml` an
=== In solrconfig.xml
The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` as part of an <<update-request-processors.adoc#update-request-processors,Update Request Processor Chain>>, as in this example:
The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` as part of an <<update-request-processors.adoc#,Update Request Processor Chain>>, as in this example:
[source,xml]
----

@@ -70,11 +70,11 @@ The following properties are available:
`config`:: The configuration file name for a given core. The default is `solrconfig.xml`.
`schema`:: The schema file name for a given core. The default is `schema.xml`, but please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use. See <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for more details.
`schema`:: The schema file name for a given core. The default is `schema.xml`, but please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use. See <<schema-factory-definition-in-solrconfig.adoc#,Schema Factory Definition in SolrConfig>> for more details.
`dataDir`:: The core's data directory (where indexes are stored) as either an absolute pathname, or a path relative to the value of `instanceDir`. This is `data` by default.
`configSet`:: The name of a defined configset, if desired, to use to configure the core (see the section <<config-sets.adoc#config-sets,Configsets>> for more details).
`configSet`:: The name of a defined configset, if desired, to use to configure the core (see the section <<config-sets.adoc#,Configsets>> for more details).
`properties`:: The name of the properties file for this core. The value can be an absolute pathname or a path relative to the value of `instanceDir`.

@@ -42,7 +42,7 @@ A default value that will be added automatically to any document that does not h
== Optional Field Type Override Properties
Fields can have many of the same properties as field types. Properties from the table below which are specified on an individual field will override any explicit value for that property specified on the `fieldType` of the field, or any implicit default property value provided by the underlying `fieldType` implementation. The table below is reproduced from <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,Field Type Definitions and Properties>>, which has more details:
Fields can have many of the same properties as field types. Properties from the table below which are specified on an individual field will override any explicit value for that property specified on the `fieldType` of the field, or any implicit default property value provided by the underlying `fieldType` implementation. The table below is reproduced from <<field-type-definitions-and-properties.adoc#,Field Type Definitions and Properties>>, which has more details:
// TODO: SOLR-10655 BEGIN: refactor this into a 'field-default-properties.include.adoc' file for reuse
@@ -53,16 +53,16 @@ Fields can have many of the same properties as field types. Properties from the
|Property |Description |Values |Implicit Default
|indexed |If true, the value of the field can be used in queries to retrieve matching documents. |true or false |true
|stored |If true, the actual value of the field can be retrieved by queries. |true or false |true
|docValues |If true, the value of the field will be put in a column-oriented <<docvalues.adoc#docvalues,DocValues>> structure. |true or false |false
|docValues |If true, the value of the field will be put in a column-oriented <<docvalues.adoc#,DocValues>> structure. |true or false |false
|sortMissingFirst sortMissingLast |Control the placement of documents when a sort field is not present. |true or false |false
|multiValued |If true, indicates that a single document might contain multiple values for this field type. |true or false |false
|uninvertible|If true, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up a large in-memory data structure to serve in place of <<docvalues.adoc#docvalues,DocValues>>. *Defaults to true for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.*|true or false |true
|uninvertible|If true, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up a large in-memory data structure to serve in place of <<docvalues.adoc#,DocValues>>. *Defaults to true for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.*|true or false |true
|omitNorms |If true, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, date, bool, and string.* Only full-text fields or fields that need an index-time boost need norms. |true or false |*
|omitTermFreqAndPositions |If true, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don't require that information. It also reduces the storage space required for the index. Queries that rely on position, if issued on a field with this option, will silently fail to find documents. *This property defaults to true for all field types that are not text fields.* |true or false |*
|omitPositions |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |true or false |*
|termVectors termPositions termOffsets termPayloads |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |true or false |false
|required |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |true or false |false
|useDocValuesAsStored |If the field has `<<docvalues.adoc#docvalues,docValues>>` enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |true or false |true
|useDocValuesAsStored |If the field has `<<docvalues.adoc#,docValues>>` enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |true or false |true
|large |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |true or false |false
|===

@@ -21,18 +21,18 @@ An important aspect of Solr is that all operations and deployment can be done on
Common administrative tasks include:
<<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>>: This section provides information about all of the options available to the `bin/solr` / `bin\solr.cmd` scripts, which can start and stop Solr, configure authentication, and create or remove collections and cores.
<<solr-control-script-reference.adoc#,Solr Control Script Reference>>: This section provides information about all of the options available to the `bin/solr` / `bin\solr.cmd` scripts, which can start and stop Solr, configure authentication, and create or remove collections and cores.
<<solr-configuration-files.adoc#solr-configuration-files,Solr Configuration Files>>: Overview of the installation layout and major configuration files.
<<solr-configuration-files.adoc#,Solr Configuration Files>>: Overview of the installation layout and major configuration files.
<<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>: Detailed steps to help you install Solr as a service and take your application to production.
<<taking-solr-to-production.adoc#,Taking Solr to Production>>: Detailed steps to help you install Solr as a service and take your application to production.
<<making-and-restoring-backups.adoc#making-and-restoring-backups,Making and Restoring Backups>>: Describes backup strategies for your Solr indexes.
<<making-and-restoring-backups.adoc#,Making and Restoring Backups>>: Describes backup strategies for your Solr indexes.
<<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>>: How to use HDFS to store your Solr indexes and transaction logs.
<<running-solr-on-hdfs.adoc#,Running Solr on HDFS>>: How to use HDFS to store your Solr indexes and transaction logs.
<<aws-solrcloud-tutorial.adoc#aws-solrcloud-tutorial,SolrCloud on AWS EC2>>: A tutorial on deploying Solr in Amazon Web Services (AWS) using EC2 instances.
<<aws-solrcloud-tutorial.adoc#,SolrCloud on AWS EC2>>: A tutorial on deploying Solr in Amazon Web Services (AWS) using EC2 instances.
<<upgrading-a-solr-cluster.adoc#upgrading-a-solr-cluster,Upgrading a Solr Cluster>>: Information for upgrading a production SolrCloud cluster.
<<upgrading-a-solr-cluster.adoc#,Upgrading a Solr Cluster>>: Information for upgrading a production SolrCloud cluster.
<<solr-upgrade-notes.adoc#solr-upgrade-notes,Solr Upgrade Notes>>: Information about changes made in Solr releases.
<<solr-upgrade-notes.adoc#,Solr Upgrade Notes>>: Information about changes made in Solr releases.

@@ -28,7 +28,7 @@ You can see a comparison between the Tika and LangDetect implementations here: h
For specific information on each of these language identification implementations, including a list of supported languages for each, see the relevant project websites.
For more information about language analysis in Solr, see <<language-analysis.adoc#language-analysis,Language Analysis>>.
For more information about language analysis in Solr, see <<language-analysis.adoc#,Language Analysis>>.
== Configuring Language Detection
@@ -80,7 +80,7 @@ Here is an example of a minimal OpenNLP `langid` configuration in `solrconfig.xm
==== OpenNLP-specific Parameters
`langid.model`::
An OpenNLP language detection model. The OpenNLP project provides a pre-trained 103-language model on the http://opennlp.apache.org/models.html[OpenNLP site's model download page]. Model training instructions are provided on the http://opennlp.apache.org/docs/{ivy-opennlp-version}/manual/opennlp.html#tools.langdetect[OpenNLP website]. This parameter is required. See <<resource-loading.adoc#resource-loading,Resource Loading>> for information on where to put the model.
An OpenNLP language detection model. The OpenNLP project provides a pre-trained 103-language model on the http://opennlp.apache.org/models.html[OpenNLP site's model download page]. Model training instructions are provided on the http://opennlp.apache.org/docs/{ivy-opennlp-version}/manual/opennlp.html#tools.langdetect[OpenNLP website]. This parameter is required. See <<resource-loading.adoc#,Resource Loading>> for information on where to put the model.
==== OpenNLP Language Codes

@@ -18,7 +18,7 @@
When using traditional index sharding, you will need to consider how to query your documents.
It is highly recommended that you use <<solrcloud.adoc#solrcloud,SolrCloud>> when needing to scale up or scale out. The setup described below is legacy and was used prior to the existence of SolrCloud. SolrCloud provides for a truly distributed set of features with support for things like automatic routing, leader election, optimistic concurrency and other sanity checks that are expected out of a distributed system.
It is highly recommended that you use <<solrcloud.adoc#,SolrCloud>> when needing to scale up or scale out. The setup described below is legacy and was used prior to the existence of SolrCloud. SolrCloud provides for a truly distributed set of features with support for things like automatic routing, leader election, optimistic concurrency and other sanity checks that are expected out of a distributed system.
Everything on this page is specific to legacy setup of distributed search. Users trying out SolrCloud should not follow any of the steps or information below.
@@ -79,7 +79,7 @@ Formerly a limitation was that TF/IDF relevancy computations only used shard-loc
== Avoiding Distributed Deadlock with Distributed Search
Like in SolrCloud mode, inter-shard requests could lead to a distributed deadlock. It can be avoided by following the instructions in the section <<distributed-requests.adoc#distributed-requests,Distributed Requests>>.
Like in SolrCloud mode, inter-shard requests could lead to a distributed deadlock. It can be avoided by following the instructions in the section <<distributed-requests.adoc#,Distributed Requests>>.
== Testing Index Sharding on Two Local Servers

@@ -21,24 +21,24 @@ This section discusses how Solr organizes its data into documents and fields, as
This section includes the following topics:
<<overview-of-documents-fields-and-schema-design.adoc#overview-of-documents-fields-and-schema-design,Overview of Documents, Fields, and Schema Design>>: An introduction to the concepts covered in this section.
<<overview-of-documents-fields-and-schema-design.adoc#,Overview of Documents, Fields, and Schema Design>>: An introduction to the concepts covered in this section.
<<solr-field-types.adoc#solr-field-types,Solr Field Types>>: Detailed information about field types in Solr, including the field types in the default Solr schema.
<<solr-field-types.adoc#,Solr Field Types>>: Detailed information about field types in Solr, including the field types in the default Solr schema.
<<defining-fields.adoc#defining-fields,Defining Fields>>: Describes how to define fields in Solr.
<<defining-fields.adoc#,Defining Fields>>: Describes how to define fields in Solr.
<<copying-fields.adoc#copying-fields,Copying Fields>>: Describes how to populate fields with data copied from another field.
<<copying-fields.adoc#,Copying Fields>>: Describes how to populate fields with data copied from another field.
<<dynamic-fields.adoc#dynamic-fields,Dynamic Fields>>: Information about using dynamic fields in order to catch and index fields that do not exactly conform to other field definitions in your schema.
<<dynamic-fields.adoc#,Dynamic Fields>>: Information about using dynamic fields in order to catch and index fields that do not exactly conform to other field definitions in your schema.
<<schema-api.adoc#schema-api,Schema API>>: Use curl commands to read various parts of a schema or create new fields and copyField rules.
<<schema-api.adoc#,Schema API>>: Use curl commands to read various parts of a schema or create new fields and copyField rules.
<<other-schema-elements.adoc#other-schema-elements,Other Schema Elements>>: Describes other important elements in the Solr schema.
<<other-schema-elements.adoc#,Other Schema Elements>>: Describes other important elements in the Solr schema.
<<putting-the-pieces-together.adoc#putting-the-pieces-together,Putting the Pieces Together>>: A higher-level view of the Solr schema and how its elements work together.
<<putting-the-pieces-together.adoc#,Putting the Pieces Together>>: A higher-level view of the Solr schema and how its elements work together.
<<docvalues.adoc#docvalues,DocValues>>: Describes how to create a docValues index for faster lookups.
<<docvalues.adoc#,DocValues>>: Describes how to create a docValues index for faster lookups.
<<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>>: Automatically add previously unknown schema fields using value-based field type guessing.
<<schemaless-mode.adoc#,Schemaless Mode>>: Automatically add previously unknown schema fields using value-based field type guessing.
<<luke-request-handler.adoc#luke-request-handler,Luke Request Handler>>: The request handler which provides access to information about fields in the index. This request handler powers the <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser>> page of Solr's Admin UI.
<<luke-request-handler.adoc#,Luke Request Handler>>: The request handler which provides access to information about fields in the index. This request handler powers the <<schema-browser-screen.adoc#,Schema Browser>> page of Solr's Admin UI.

@@ -31,14 +31,14 @@ The screen allows you to:
====
There are other ways to load data; see also these sections:
* <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>
* <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Uploading Data with Solr Cell using Apache Tika>>
* <<uploading-data-with-index-handlers.adoc#,Uploading Data with Index Handlers>>
* <<uploading-data-with-solr-cell-using-apache-tika.adoc#,Uploading Data with Solr Cell using Apache Tika>>
====
== Common Fields
* Request-Handler: The first step is to define the RequestHandler. By default `/update` will be defined. Change the request handler to `/update/extract` to use Solr Cell.
* Document Type: Select the Document Type to define the format of document to load. The remaining parameters may change depending on the document type selected.
* Document(s): Enter a properly-formatted Solr document corresponding to the `Document Type` selected. XML and JSON documents must be formatted in a Solr-specific format; a small illustrative document will be shown. CSV files should have headers corresponding to fields defined in the schema. More details can be found at: <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>.
* Document(s): Enter a properly-formatted Solr document corresponding to the `Document Type` selected. XML and JSON documents must be formatted in a Solr-specific format; a small illustrative document will be shown. CSV files should have headers corresponding to fields defined in the schema. More details can be found at: <<uploading-data-with-index-handlers.adoc#,Uploading Data with Index Handlers>>.
* Commit Within: Specify the number of milliseconds between the time the document is submitted and when it is available for searching.
* Overwrite: If `true`, the new document will replace an existing document with the same value in the `id` field. If `false`, multiple documents with the same id can be added.
@@ -62,7 +62,7 @@ The Document Builder provides a wizard-like interface to enter fields of a docum
The File Upload option allows choosing a prepared file and uploading it. If using `/update` for the Request-Handler option, you will be limited to XML, CSV, and JSON.
Other document types (e.g., Word, PDF, etc.) can be indexed using the ExtractingRequestHandler (aka, Solr Cell). You must modify the RequestHandler to `/update/extract`, which must be defined in your `solrconfig.xml` file with your desired defaults. You should also add `&literal.id` shown in the "Extracting Request Handler Params" field so the file chosen is given a unique id.
More information can be found at: <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Uploading Data with Solr Cell using Apache Tika>>.
More information can be found at: <<uploading-data-with-solr-cell-using-apache-tika.adoc#,Uploading Data with Solr Cell using Apache Tika>>.
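A sketch of the equivalent command-line request, with a hypothetical file and id:

[source,bash]
----
# Index a binary file through Solr Cell, assigning it a unique id
curl "http://localhost:8983/solr/techproducts/update/extract?literal.id=doc1&commit=true" \
  -F "myfile=@solr-word.pdf"
----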
== Solr Command

@@ -30,7 +30,7 @@ In Lucene 4.0, a new approach was introduced. DocValue fields are now column-ori
To use docValues, you only need to enable it for a field that you will use it with. As with all schema design, you need to define a field type and then define fields of that type with docValues enabled. All of these actions are done in `schema.xml`.
Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from the `schema.xml` of Solr's `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>>:
Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from the `schema.xml` of Solr's `sample_techproducts_configs` <<config-sets.adoc#,configset>>:
[source,xml]
----
@@ -73,17 +73,17 @@ Lucene index back-compatibility is only supported for the default codec. If you
=== Sorting, Faceting & Functions
If `docValues="true"` for a field, then DocValues will automatically be used any time the field is used for <<common-query-parameters.adoc#sort-parameter,sorting>>, <<faceting.adoc#faceting,faceting>> or <<function-queries.adoc#function-queries,function queries>>.
If `docValues="true"` for a field, then DocValues will automatically be used any time the field is used for <<common-query-parameters.adoc#sort-parameter,sorting>>, <<faceting.adoc#,faceting>> or <<function-queries.adoc#,function queries>>.
=== Retrieving DocValues During Search
Field values retrieved during search queries are typically returned from stored values. However, non-stored docValues fields will also be returned along with other stored fields when all fields (or pattern-matching globs) are specified to be returned (e.g., "`fl=*`") for search queries depending on the effective value of the `useDocValuesAsStored` parameter for each field. For schema versions >= 1.6, the implicit default is `useDocValuesAsStored="true"`. See <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,Field Type Definitions and Properties>> & <<defining-fields.adoc#defining-fields,Defining Fields>> for more details.
Field values retrieved during search queries are typically returned from stored values. However, non-stored docValues fields will also be returned along with other stored fields when all fields (or pattern-matching globs) are specified to be returned (e.g., "`fl=*`") for search queries depending on the effective value of the `useDocValuesAsStored` parameter for each field. For schema versions >= 1.6, the implicit default is `useDocValuesAsStored="true"`. See <<field-type-definitions-and-properties.adoc#,Field Type Definitions and Properties>> & <<defining-fields.adoc#,Defining Fields>> for more details.
When `useDocValuesAsStored="false"`, non-stored DocValues fields can still be explicitly requested by name in the <<common-query-parameters.adoc#fl-field-list-parameter,fl param>>, but will not match glob patterns (`"*"`). Note that returning DocValues along with "regular" stored fields at query time has performance implications that stored fields may not, because DocValues are column-oriented and may therefore incur additional cost to retrieve for each returned document. Also note that while returning non-stored fields from DocValues, the values of a multi-valued field are returned in sorted order rather than insertion order and may have duplicates removed; see above. If you require the multi-valued fields to be returned in the original insertion order, then make your multi-valued field stored (such a change requires reindexing).
In cases where the query is returning _only_ docValues fields performance may improve since returning stored fields requires disk reads and decompression whereas returning docValues fields in the fl list only requires memory access.
When retrieving fields from their docValues form (such as when using the <<exporting-result-sets.adoc#exporting-result-sets,/export handler>>, <<streaming-expressions.adoc#streaming-expressions,streaming expressions>> or if the field is requested in the `fl` parameter), two important differences between regular stored fields and docValues fields must be understood:
When retrieving fields from their docValues form (such as when using the <<exporting-result-sets.adoc#,/export handler>>, <<streaming-expressions.adoc#,streaming expressions>> or if the field is requested in the `fl` parameter), two important differences between regular stored fields and docValues fields must be understood:
1. Order is _not_ preserved. When retrieving stored fields, the insertion order is the return order. For docValues, it is the _sorted_ order.
2. For field types using `SORTED_SET` (see above), multiple identical entries are collapsed into a single value. Thus if values 4, 5, 2, 4, 1 are inserted, the values returned will be 1, 2, 4, 5.
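To illustrate the retrieval behavior described above, a request of roughly this form returns a non-stored docValues field by naming it explicitly in `fl` (the collection and field names here are assumptions):

[source,bash]
----
curl "http://localhost:8983/solr/techproducts/select?q=*:*&fl=id,manu_exact"
----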
@@ -71,7 +71,7 @@ To activate the SSL settings, uncomment and update the set of properties beginni
====
[.tab-label]*\*nix (solr.in.sh)*
NOTE: If you setup Solr as a service on Linux using the steps outlined in <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>, then make these changes in `/var/solr/solr.in.sh`.
NOTE: If you setup Solr as a service on Linux using the steps outlined in <<taking-solr-to-production.adoc#,Taking Solr to Production>>, then make these changes in `/var/solr/solr.in.sh`.
[source,bash]
----
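# Illustrative sketch only: the original listing is truncated in this diff.
# The SSL-related properties in solr.in.sh take this general form; the paths
# and passwords below are placeholders, not values from the guide:
SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.p12
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.p12
SOLR_SSL_TRUST_STORE_PASSWORD=secret
----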
@@ -195,7 +195,7 @@ NOTE: ZooKeeper does not support encrypted communication with clients like Solr.
After creating the keystore described above and before you start any SolrCloud nodes, you must configure your Solr cluster properties in ZooKeeper so that Solr nodes know to communicate via SSL.
This section assumes you have created and started an external ZooKeeper.
See <<setting-up-an-external-zookeeper-ensemble.adoc#setting-up-an-external-zookeeper-ensemble,Setting Up an External ZooKeeper Ensemble>> for more information.
See <<setting-up-an-external-zookeeper-ensemble.adoc#,Setting Up an External ZooKeeper Ensemble>> for more information.
The `urlScheme` cluster-wide property needs to be set to `https` before any Solr node starts up.
The examples below use the `zkcli` tool that comes with Solr to do this.
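As a sketch (assuming a local ZooKeeper listening on port 2181), setting the property looks roughly like:

[source,bash]
----
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd clusterprop -name urlScheme -val https
----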
@@ -17,7 +17,7 @@
// under the License.
It's possible to export fully sorted result sets using a special <<query-re-ranking.adoc#query-re-ranking,rank query parser>> and <<response-writers.adoc#response-writers,response writer>> specifically designed to work together to handle scenarios that involve sorting and exporting millions of records.
It's possible to export fully sorted result sets using a special <<query-re-ranking.adoc#,rank query parser>> and <<response-writers.adoc#,response writer>> specifically designed to work together to handle scenarios that involve sorting and exporting millions of records.
This feature uses a stream sorting technique that begins to send records within milliseconds and continues to stream results until the entire result set has been sorted and exported.
@@ -25,11 +25,11 @@ The cases where this functionality may be useful include: session analysis, dist
== Field Requirements
All the fields being sorted and exported must have docValues set to true. For more information, see the section on <<docvalues.adoc#docvalues,DocValues>>.
All the fields being sorted and exported must have docValues set to true. For more information, see the section on <<docvalues.adoc#,DocValues>>.
== The /export RequestHandler
The `/export` request handler with the appropriate configuration is one of Solr's out-of-the-box request handlers - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> for more information.
The `/export` request handler with the appropriate configuration is one of Solr's out-of-the-box request handlers - see <<implicit-requesthandlers.adoc#,Implicit RequestHandlers>> for more information.
Note that this request handler's properties are defined as "invariants", which means they cannot be overridden by other properties passed at another time (such as at query time).
@@ -63,11 +63,11 @@ The `fl` property defines the fields that will be exported with the result set.
=== Specifying the Local Streaming Expression
The optional `expr` property defines a <<streaming-expressions.adoc#streaming-expressions,stream expression>> that allows documents to be processed locally before they are exported in the result set.
The optional `expr` property defines a <<streaming-expressions.adoc#,stream expression>> that allows documents to be processed locally before they are exported in the result set.
Expressions have to use a special `input()` stream that represents original results from the `/export` handler. Output from the stream expression then becomes the output from the `/export` handler. The `&streamLocalOnly=true` flag is always set for this streaming expression.
Only stream <<stream-decorator-reference.adoc#stream-decorator-reference,decorators>> and <<stream-evaluator-reference.adoc#stream-evaluator-reference,evaluators>> are supported in these expressions - using any of the <<stream-source-reference.adoc#stream-source-reference,source>> expressions except for the pre-defined `input()` will result in an error.
Only stream <<stream-decorator-reference.adoc#,decorators>> and <<stream-evaluator-reference.adoc#,evaluators>> are supported in these expressions - using any of the <<stream-source-reference.adoc#,source>> expressions except for the pre-defined `input()` will result in an error.
Using stream expressions with the `/export` handler may result in dramatic performance improvements due to the local in-memory reduction of the number of documents to be returned.
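As an illustrative sketch (the collection, field, and sort here are assumptions), an `expr` using the `unique` decorator over the special `input()` stream might look like:

[source,bash]
----
curl --data-urlencode 'expr=unique(input(), over="reporter")' "http://localhost:8983/solr/core_name/export?q=*:*&sort=reporter+asc&fl=reporter"
----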
@@ -91,4 +91,4 @@ http://localhost:8983/solr/core_name/export?q=my-query&sort=reporter+desc,&fl=re
== Distributed Support
See the section <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>> for distributed support.
See the section <<streaming-expressions.adoc#,Streaming Expressions>> for distributed support.
@@ -20,7 +20,7 @@ Faceting is the arrangement of search results into categories based on indexed t
Searchers are presented with the indexed terms, along with numerical counts of how many matching documents were found for each term. Faceting makes it easy for users to explore search results, narrowing in on exactly the results they are looking for.
See also <<json-facet-api.adoc#json-facet-api, JSON Facet API>> for an alternative approach to this.
See also <<json-facet-api.adoc#, JSON Facet API>> for an alternative approach to this.
== General Facet Parameters
@@ -47,7 +47,7 @@ Several parameters can be used to trigger faceting based on the indexed terms in
When using these parameters, it is important to remember that "term" is a very specific concept in Lucene: it relates to the literal field/value pairs that are indexed after any analysis occurs. For text fields that include stemming, lowercasing, or word splitting, the resulting terms may not be what you expect.
If you want Solr to perform both analysis (for searching) and faceting on the full literal strings, use the `copyField` directive in your Schema to create two versions of the field: one Text and one String. The Text field should have `indexed="true" docValues="false"` if used for searching but not faceting, and the String field should have `indexed="false" docValues="true"` if used for faceting but not searching.
(For more information about the `copyField` directive, see <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>.)
(For more information about the `copyField` directive, see <<documents-fields-and-schema-design.adoc#,Documents, Fields, and Schema Design>>.)
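A minimal sketch of that two-field pattern (field and type names are assumptions):

[source,xml]
----
<field name="title" type="text_general" indexed="true" stored="true" docValues="false"/>
<field name="title_str" type="string" indexed="false" stored="false" docValues="true"/>
<copyField source="title" dest="title_str"/>
----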
Unless otherwise specified, all of the parameters below can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.<parameter>`
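For example, an illustrative request that narrows the limit for one field only (field names are assumptions):

[source,bash]
----
http://localhost:8983/solr/techproducts/select?q=*:*&facet=true&facet.field=cat&facet.field=manu&facet.limit=10&f.cat.facet.limit=5
----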
@@ -225,7 +225,7 @@ The `facet.range.method` parameter selects the type of algorithm or method Solr
--
filter::: This method generates the ranges based on other facet.range parameters, and for each of them executes a filter that later intersects with the main query resultset to get the count. It will make use of the filterCache, so it will benefit from a cache large enough to contain all ranges.
+
dv::: This method iterates the documents that match the main query, and for each of them finds the correct range for the value. This method will make use of <<docvalues.adoc#docvalues,docValues>> (if enabled for the field) or fieldCache. The `dv` method is not supported for field type DateRangeField or when using <<result-grouping.adoc#result-grouping,group.facets>>.
dv::: This method iterates the documents that match the main query, and for each of them finds the correct range for the value. This method will make use of <<docvalues.adoc#,docValues>> (if enabled for the field) or fieldCache. The `dv` method is not supported for field type DateRangeField or when using <<result-grouping.adoc#,group.facets>>.
--
+
The default value for this parameter is `filter`.
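An illustrative range-facet request that opts into the `dv` method (the field name and bounds are assumptions):

[source,bash]
----
http://localhost:8983/solr/techproducts/select?q=*:*&facet=true&facet.range=price&facet.range.start=0&facet.range.end=100&facet.range.gap=10&facet.range.method=dv
----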
@@ -237,7 +237,7 @@ The default value for this parameter is `filter`.
====
Range faceting on date fields is a common situation where the <<working-with-dates.adoc#tz,`TZ`>> parameter can be useful to ensure that the "facet counts per day" or "facet counts per month" are based on a meaningful definition of when a given day/month "starts" relative to a particular TimeZone.
For more information, see the examples in the <<working-with-dates.adoc#working-with-dates,Working with Dates>> section.
For more information, see the examples in the <<working-with-dates.adoc#,Working with Dates>> section.
====
=== facet.mincount in Range Faceting
@@ -295,7 +295,7 @@ http://localhost:8983/solr/techproducts/select?q=*:*&facet.pivot=cat,popularity,
=== Combining Stats Component With Pivots
In addition to some of the <<Local Parameters for Faceting,general local parameters>> supported by other types of faceting, a `stats` local parameter can be used with `facet.pivot` to refer to <<the-stats-component.adoc#the-stats-component,`stats.field`>> instances (by tag) that you would like to have computed for each Pivot Constraint.
In addition to some of the <<Local Parameters for Faceting,general local parameters>> supported by other types of faceting, a `stats` local parameter can be used with `facet.pivot` to refer to <<the-stats-component.adoc#,`stats.field`>> instances (by tag) that you would like to have computed for each Pivot Constraint.
In the example below, two different (overlapping) sets of statistics are computed for each of the facet.pivot result hierarchies:
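The example itself is truncated in this diff; such a request takes roughly this shape (the tags and field names are illustrative):

[source,bash]
----
http://localhost:8983/solr/techproducts/select?q=*:*&stats=true&stats.field={!tag=t1}price&stats.field={!tag=t1,t2}popularity&facet=true&facet.pivot={!stats=t1}cat,inStock&facet.pivot={!stats=t2}manu
----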
@@ -526,7 +526,7 @@ Even though the same functionality can be achieved by using a facet query with r
If you are concerned about the performance of your searches, you should test with both options. Interval faceting tends to be better with multiple intervals for the same fields, while facet queries tend to be better in environments where the filter cache is more effective (static indexes, for example).
This method will use <<docvalues.adoc#docvalues,docValues>> if they are enabled for the field, and will use the fieldCache otherwise.
This method will use <<docvalues.adoc#,docValues>> if they are enabled for the field, and will use the fieldCache otherwise.
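An illustrative interval-facet request (the field name and interval bounds are assumptions):

[source,bash]
----
http://localhost:8983/solr/techproducts/select?q=*:*&facet=true&facet.interval=price&f.price.facet.interval.set=[0,10]&f.price.facet.interval.set=(10,100]
----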
Use these parameters for interval faceting:
@@ -575,7 +575,7 @@ Interval faceting supports output key replacement described below. Output keys c
== Local Parameters for Faceting
The <<local-parameters-in-queries.adoc#local-parameters-in-queries,LocalParams syntax>> allows overriding global settings. It can also provide a method of adding metadata to other parameter values, much like XML attributes.
The <<local-parameters-in-queries.adoc#,LocalParams syntax>> allows overriding global settings. It can also provide a method of adding metadata to other parameter values, much like XML attributes.
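As a sketch, a local parameter renaming a facet's output key looks like this (the field name is an assumption):

[source,bash]
----
http://localhost:8983/solr/techproducts/select?q=*:*&facet=true&facet.field={!key=Manufacturer}manu
----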
=== Tagging and Excluding Filters
@@ -622,4 +622,4 @@ This local parameter overrides default logic for `facet.sort`. if `facet.sort` i
== Related Topics
See also <<spatial-search.adoc#spatial-search,Heatmap Faceting (Spatial)>>.
See also <<spatial-search.adoc#,Heatmap Faceting (Spatial)>>.
@@ -42,8 +42,8 @@ Notes:
2. [[fpbuc_2,2]] Will be used if present, but not necessary.
3. [[fpbuc_3,3]] (if termVectors=true)
4. [[fpbuc_4,4]] A tokenizer must be defined for the field, but it doesn't need to be indexed.
5. [[fpbuc_5,5]] Described in <<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>>.
5. [[fpbuc_5,5]] Described in <<understanding-analyzers-tokenizers-and-filters.adoc#,Understanding Analyzers, Tokenizers, and Filters>>.
6. [[fpbuc_6,6]] Term vectors are not mandatory here. If not true, then a stored field is analyzed. So term vectors are recommended, but only required if `stored=false`.
7. [[fpbuc_7,7]] For most field types, either `indexed` or `docValues` must be true, but both are not required. <<docvalues.adoc#docvalues,DocValues>> can be more efficient in many cases. For `[Int/Long/Float/Double/Date]PointFields`, `docValues=true` is required.
8. [[fpbuc_8,8]] Stored content will be used by default, but docValues can alternatively be used. See <<docvalues.adoc#docvalues,DocValues>>.
7. [[fpbuc_7,7]] For most field types, either `indexed` or `docValues` must be true, but both are not required. <<docvalues.adoc#,DocValues>> can be more efficient in many cases. For `[Int/Long/Float/Double/Date]PointFields`, `docValues=true` is required.
8. [[fpbuc_8,8]] Stored content will be used by default, but docValues can alternatively be used. See <<docvalues.adoc#,DocValues>>.
9. [[fpbuc_9,9]] Multi-valued sorting may be performed on docValues-enabled fields using the two-argument `field()` function, e.g., `field(myfield,min)`; see the <<function-queries.adoc#field-function,field() function in Function Queries>>.
@@ -50,7 +50,7 @@ Field types are defined in `schema.xml`. Each field type is defined between `fie
----
<1> The first line in the example above contains the field type name, `text_general`, and the name of the implementing class, `solr.TextField`.
<2> The rest of the definition is about field analysis, described in <<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>>.
<2> The rest of the definition is about field analysis, described in <<understanding-analyzers-tokenizers-and-filters.adoc#,Understanding Analyzers, Tokenizers, and Filters>>.
The implementing class is responsible for making sure the field is handled correctly. In the class names in `schema.xml`, the string `solr` is shorthand for `org.apache.solr.schema` or `org.apache.solr.analysis`. Therefore, `solr.TextField` is really `org.apache.solr.schema.TextField`.
@@ -127,16 +127,16 @@ The default values for each property depend on the underlying `FieldType` class,
|Property |Description |Values |Implicit Default
|indexed |If true, the value of the field can be used in queries to retrieve matching documents. |true or false |true
|stored |If true, the actual value of the field can be retrieved by queries. |true or false |true
|docValues |If true, the value of the field will be put in a column-oriented <<docvalues.adoc#docvalues,DocValues>> structure. |true or false |false
|docValues |If true, the value of the field will be put in a column-oriented <<docvalues.adoc#,DocValues>> structure. |true or false |false
|sortMissingFirst sortMissingLast |Control the placement of documents when a sort field is not present. |true or false |false
|multiValued |If true, indicates that a single document might contain multiple values for this field type. |true or false |false
|uninvertible|If true, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up a large in-memory data structure to serve in place of <<docvalues.adoc#docvalues,DocValues>>. *Defaults to true for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.*|true or false |true
|uninvertible|If true, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up a large in-memory data structure to serve in place of <<docvalues.adoc#,DocValues>>. *Defaults to true for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.*|true or false |true
|omitNorms |If true, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, date, bool, and string.* Only full-text fields or fields that need an index-time boost need norms. |true or false |*
|omitTermFreqAndPositions |If true, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don't require that information. It also reduces the storage space required for the index. Queries that rely on position, if issued on a field with this option, will silently fail to find documents. *This property defaults to true for all field types that are not text fields.* |true or false |*
|omitPositions |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |true or false |*
|termVectors termPositions termOffsets termPayloads |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |true or false |false
|required |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |true or false |false
|useDocValuesAsStored |If the field has <<docvalues.adoc#docvalues,docValues>> enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |true or false |true
|useDocValuesAsStored |If the field has <<docvalues.adoc#,docValues>> enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |true or false |true
|large |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |true or false |false
|===
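As an illustrative sketch combining several of the properties above (the names are assumptions, not from the guide):

[source,xml]
----
<fieldType name="string_dv" class="solr.StrField" docValues="true"/>
<field name="manu_exact" type="string_dv" indexed="true" stored="false" multiValued="false" uninvertible="false"/>
----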
@@ -25,7 +25,7 @@ The following table lists the field types that are available in Solr and are rec
[cols="25,75",options="header"]
|===
|Class |Description
|BBoxField | Indexes a single rectangle (bounding box) per document field and supports searching via a bounding box. See the section <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
|BBoxField | Indexes a single rectangle (bounding box) per document field and supports searching via a bounding box. See the section <<spatial-search.adoc#,Spatial Search>> for more information.
|BinaryField |Binary data.
@@ -33,17 +33,17 @@ The following table lists the field types that are available in Solr and are rec
|CollationField |Supports Unicode collation for sorting and range queries. The ICUCollationField is a better choice if you can use ICU4J. See the section <<language-analysis.adoc#unicode-collation,Unicode Collation>> for more information.
|CurrencyFieldType |Supports currencies and exchange rates. See the section <<working-with-currencies-and-exchange-rates.adoc#working-with-currencies-and-exchange-rates,Working with Currencies and Exchange Rates>> for more information.
|CurrencyFieldType |Supports currencies and exchange rates. See the section <<working-with-currencies-and-exchange-rates.adoc#,Working with Currencies and Exchange Rates>> for more information.
|DateRangeField |Supports indexing date ranges, to include point in time date instances as well (single-millisecond durations). See the section <<working-with-dates.adoc#working-with-dates,Working with Dates>> for more detail on using this field type. Consider using this field type even if it's just for date instances, particularly when the queries typically fall on UTC year/month/day/hour, etc., boundaries.
|DateRangeField |Supports indexing date ranges, to include point in time date instances as well (single-millisecond durations). See the section <<working-with-dates.adoc#,Working with Dates>> for more detail on using this field type. Consider using this field type even if it's just for date instances, particularly when the queries typically fall on UTC year/month/day/hour, etc., boundaries.
|DatePointField |Date field. Represents a point in time with millisecond precision, encoded using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. See the section <<working-with-dates.adoc#working-with-dates,Working with Dates>> for more details on the supported syntax. For single valued fields, `docValues="true"` must be used to enable sorting.
|DatePointField |Date field. Represents a point in time with millisecond precision, encoded using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. See the section <<working-with-dates.adoc#,Working with Dates>> for more details on the supported syntax. For single valued fields, `docValues="true"` must be used to enable sorting.
|DoublePointField |Double field (64-bit IEEE floating point). This class encodes double values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
|ExternalFileField |Pulls values from a file on disk. See the section <<working-with-external-files-and-processes.adoc#working-with-external-files-and-processes,Working with External Files and Processes>> for more information.
|ExternalFileField |Pulls values from a file on disk. See the section <<working-with-external-files-and-processes.adoc#,Working with External Files and Processes>> for more information.
|EnumFieldType |Allows defining an enumerated set of values which may not be easily sorted by either alphabetic or numeric order (such as a list of severities, for example). This field type takes a configuration file, which lists the proper order of the field values. See the section <<working-with-enum-fields.adoc#working-with-enum-fields,Working with Enum Fields>> for more information.
|EnumFieldType |Allows defining an enumerated set of values which may not be easily sorted by either alphabetic or numeric order (such as a list of severities, for example). This field type takes a configuration file, which lists the proper order of the field values. See the section <<working-with-enum-fields.adoc#,Working with Enum Fields>> for more information.
|FloatPointField |Floating point field (32-bit IEEE floating point). This class encodes float values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
@@ -51,13 +51,13 @@ The following table lists the field types that are available in Solr and are rec
|IntPointField |Integer field (32-bit signed integer). This class encodes int values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
|LatLonPointSpatialField |A latitude/longitude coordinate pair; possibly multi-valued for multiple points. Usually it's specified as "lat,lon" order with a comma. See the section <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
|LatLonPointSpatialField |A latitude/longitude coordinate pair; possibly multi-valued for multiple points. Usually it's specified as "lat,lon" order with a comma. See the section <<spatial-search.adoc#,Spatial Search>> for more information.
|LongPointField |Long field (64-bit signed integer). This class encodes long values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
|NestPathField | Specialized field type storing enhanced information when <<indexing-nested-documents.adoc#schema-configuration,working with nested documents>>.
|PointType |A single-valued n-dimensional point. It's both for sorting spatial data that is _not_ lat-lon, and for some more rare use-cases. (NOTE: this is _not_ related to the "Point" based numeric fields). See <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
|PointType |A single-valued n-dimensional point. It's both for sorting spatial data that is _not_ lat-lon, and for some more rare use-cases. (NOTE: this is _not_ related to the "Point" based numeric fields). See <<spatial-search.adoc#,Spatial Search>> for more information.
|PreAnalyzedField |Provides a way to send to Solr serialized token streams, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing.
@@ -67,19 +67,19 @@ Configuration and usage of PreAnalyzedField is documented in the section <<work
|RankField |Can be used to store scoring factors to improve document ranking. To be used in combination with <<other-parsers.adoc#ranking-query-parser,RankQParserPlugin>>
|RptWithGeometrySpatialField |A derivative of `SpatialRecursivePrefixTreeFieldType` that also stores the original geometry. See <<spatial-search.adoc#spatial-search,Spatial Search>> for more information and usage with geospatial results transformer.
|RptWithGeometrySpatialField |A derivative of `SpatialRecursivePrefixTreeFieldType` that also stores the original geometry. See <<spatial-search.adoc#,Spatial Search>> for more information and usage with geospatial results transformer.
|SortableTextField |A specialized version of TextField that allows (and defaults to) `docValues="true"` for sorting on the first 1024 characters of the original string prior to analysis. The number of characters used for sorting can be overridden with the `maxCharsForDocValues` attribute. See <<common-query-parameters.adoc#sort-parameter,sort parameter discussion>> for details.
|SpatialRecursivePrefixTreeFieldType |(RPT for short) Accepts latitude comma longitude strings or other shapes in WKT format. See <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
|SpatialRecursivePrefixTreeFieldType |(RPT for short) Accepts latitude comma longitude strings or other shapes in WKT format. See <<spatial-search.adoc#,Spatial Search>> for more information.
|StrField |String (UTF-8 encoded string or Unicode). Strings are intended for small fields and are _not_ tokenized or analyzed in any way. They have a hard limit of slightly less than 32K.
|TextField |Text, usually multiple words or tokens. In normal usage, only fields of type TextField or SortableTextField will specify an <<analyzers.adoc#analyzers,analyzer>>.
|TextField |Text, usually multiple words or tokens. In normal usage, only fields of type TextField or SortableTextField will specify an <<analyzers.adoc#,analyzer>>.
|UUIDField |Universally Unique Identifier (UUID). Pass in a value of `NEW` and Solr will create a new UUID.
*Note*: configuring a UUIDField instance with a default value of `NEW` is not advisable for most users when using SolrCloud (and not possible if the UUID value is configured as the unique key field) since the result will be that each replica of each document will get a unique UUID value. Using <<update-request-processors.adoc#update-request-processors,UUIDUpdateProcessorFactory>> to generate UUID values when documents are added is recommended instead.
*Note*: configuring a UUIDField instance with a default value of `NEW` is not advisable for most users when using SolrCloud (and not possible if the UUID value is configured as the unique key field) since the result will be that each replica of each document will get a unique UUID value. Using <<update-request-processors.adoc#,UUIDUpdateProcessorFactory>> to generate UUID values when documents are added is recommended instead.
|===
== Deprecated Field Types
@@ -97,7 +97,7 @@ NOTE: All Trie* numeric and date field types have been deprecated in favor of *P
|EnumField |Use EnumFieldType instead.
|LatLonType |Consider using the LatLonPointSpatialField instead. A single-valued latitude/longitude coordinate pair. Usually it's specified as "lat,lon" order with a comma. See the section <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
|LatLonType |Consider using the LatLonPointSpatialField instead. A single-valued latitude/longitude coordinate pair. Usually it's specified as "lat,lon" order with a comma. See the section <<spatial-search.adoc#,Spatial Search>> for more information.
|TrieDateField |Use DatePointField instead.
@@ -21,17 +21,17 @@ The Files screen lets you browse & view the various configuration files (such `s
.The Files Screen
image::images/files-screen/files-screen.png[image,height=400]
If you are using <<solrcloud.adoc#solrcloud,SolrCloud>>, the files displayed are the configuration files for this collection stored in ZooKeeper. In a standalone Solr installation, all files in the `conf` directory are displayed.
If you are using <<solrcloud.adoc#,SolrCloud>>, the files displayed are the configuration files for this collection stored in ZooKeeper. In a standalone Solr installation, all files in the `conf` directory are displayed.
While `solrconfig.xml` defines the behavior of Solr as it indexes content and responds to queries, the Schema allows you to define the types of data in your content (field types), the fields your documents will be broken into, and any dynamic fields that should be generated based on patterns of field names in the incoming documents. Any other configuration files are used depending on how they are referenced in either `solrconfig.xml` or your schema.
Configuration files cannot be edited with this screen, so a text editor of some kind must be used.
This screen is related to the <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser Screen>>, in that they both can display information from the schema, but the Schema Browser provides a way to drill into the analysis chain and displays linkages between field types, fields, and dynamic field rules.
This screen is related to the <<schema-browser-screen.adoc#,Schema Browser Screen>>, in that they both can display information from the schema, but the Schema Browser provides a way to drill into the analysis chain and displays linkages between field types, fields, and dynamic field rules.
Many of the options defined in these configuration files are described throughout the rest of this Guide. In particular, you will want to review these sections:
* <<indexing-and-basic-data-operations.adoc#indexing-and-basic-data-operations,Indexing and Basic Data Operations>>
* <<searching.adoc#searching,Searching>>
* <<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,The Well-Configured Solr Instance>>
* <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>
* <<indexing-and-basic-data-operations.adoc#,Indexing and Basic Data Operations>>
* <<searching.adoc#,Searching>>
* <<the-well-configured-solr-instance.adoc#,The Well-Configured Solr Instance>>
* <<documents-fields-and-schema-design.adoc#,Documents, Fields, and Schema Design>>
@@ -292,7 +292,7 @@ Collation allows sorting of text in a language-sensitive way. It is usually used
== Daitch-Mokotoff Soundex Filter
Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of similar names, even if they are spelled differently. More information about how this works is available in the section on <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>>.
Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of similar names, even if they are spelled differently. More information about how this works is available in the section on <<phonetic-matching.adoc#,Phonetic Matching>>.
*Factory class:* `solr.DaitchMokotoffSoundexFilterFactory`
@@ -330,7 +330,7 @@ Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of
== Double Metaphone Filter
This filter creates tokens using the http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/DoubleMetaphone.html[`DoubleMetaphone`] encoding algorithm from commons-codec. For more information, see the <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>> section.
This filter creates tokens using the http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/DoubleMetaphone.html[`DoubleMetaphone`] encoding algorithm from commons-codec. For more information, see the <<phonetic-matching.adoc#,Phonetic Matching>> section.
*Factory class:* `solr.DoubleMetaphoneFilterFactory`
@@ -1343,7 +1343,7 @@ Converts any uppercase letters in a token to the equivalent lowercase token. All
== Managed Stop Filter
This is a specialized version of the <<Stop Filter,Stop Words Filter Factory>> that uses a set of stop words that are <<managed-resources.adoc#managed-resources,managed from a REST API.>>
This is a specialized version of the <<Stop Filter,Stop Words Filter Factory>> that uses a set of stop words that are <<managed-resources.adoc#,managed from a REST API.>>
*Arguments:*
@@ -1383,7 +1383,7 @@ See <<Stop Filter>> for example input/output.
== Managed Synonym Filter
This is a specialized version of the <<Synonym Filter>> that uses a mapping on synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API.>>
This is a specialized version of the <<Synonym Filter>> that uses a mapping on synonyms that is <<managed-resources.adoc#,managed from a REST API.>>
.Managed Synonym Filter has been Deprecated
[WARNING]
@@ -1397,7 +1397,7 @@ For arguments and examples, see the <<Synonym Graph Filter>> below.
== Managed Synonym Graph Filter
This is a specialized version of the <<Synonym Graph Filter>> that uses a mapping on synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API.>>
This is a specialized version of the <<Synonym Graph Filter>> that uses a mapping on synonyms that is <<managed-resources.adoc#,managed from a REST API.>>
This filter maps single- or multi-token synonyms, producing a fully correct graph output. This filter is a replacement for the Managed Synonym Filter, which produces incorrect graphs for multi-token synonyms.
@@ -1687,7 +1687,7 @@ More complex pattern with capture group reference in the replacement. Tokens tha
== Phonetic Filter
This filter creates tokens using one of the phonetic encoding algorithms in the `org.apache.commons.codec.language` package. For more information, see the section on <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>>.
This filter creates tokens using one of the phonetic encoding algorithms in the `org.apache.commons.codec.language` package. For more information, see the section on <<phonetic-matching.adoc#,Phonetic Matching>>.
*Factory class:* `solr.PhoneticFilterFactory`
@@ -2314,7 +2314,7 @@ NOTE: Although this filter produces correct token graphs, it cannot consume an i
*Arguments:*
`synonyms`:: (required) The path of a file that contains a list of synonyms, one per line. In the (default) `solr` format - see the `format` argument below for alternatives - blank lines and lines that begin with "`#`" are ignored. This may be a comma-separated list of paths. See <<resource-loading.adoc#resource-loading,Resource Loading>> for more information.
`synonyms`:: (required) The path of a file that contains a list of synonyms, one per line. In the (default) `solr` format - see the `format` argument below for alternatives - blank lines and lines that begin with "`#`" are ignored. This may be a comma-separated list of paths. See <<resource-loading.adoc#,Resource Loading>> for more information.
+
There are two ways to specify synonym mappings:
+
@@ -18,7 +18,7 @@
The `solr.xml` file defines some global configuration options that apply to all or many cores.
This section will describe the default `solr.xml` file included with Solr and how to modify it for your needs. For details on how to configure `core.properties`, see the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>>.
This section will describe the default `solr.xml` file included with Solr and how to modify it for your needs. For details on how to configure `core.properties`, see the section <<defining-core-properties.adoc#,Defining core.properties>>.
== Defining solr.xml
@@ -160,7 +160,7 @@ In SolrCloud mode, the URL of the ZooKeeper host that Solr should use for cluste
If `TRUE`, node names are not based on the address of the node, but on a generic name that identifies the core. When a different machine takes over serving that core, things will be much easier to understand.
`zkCredentialsProvider` & `zkACLProvider`::
Optional parameters that can be specified if you are using <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>.
Optional parameters that can be specified if you are using <<zookeeper-access-control.adoc#,ZooKeeper Access Control>>.
=== The <logging> Element
@@ -18,7 +18,7 @@
Function queries enable you to generate a relevancy score using the actual value of one or more numeric fields.
Function queries are supported by the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>>, <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,Extended DisMax>>, and <<the-standard-query-parser.adoc#the-standard-query-parser,standard>> query parsers.
Function queries are supported by the <<the-dismax-query-parser.adoc#,DisMax>>, <<the-extended-dismax-query-parser.adoc#,Extended DisMax>>, and <<the-standard-query-parser.adoc#,standard>> query parsers.
Function queries use _functions_. The functions can be a constant (numeric or string literal), a field, another function or a parameter substitution argument. You can use these functions to modify the ranking of results for users. These could be used to change the ranking of results based on a user's location, or some other calculation.
@@ -253,7 +253,7 @@ Use the `field(myfield,min)` <<field Function,syntax for selecting the minimum v
=== ms Function
Returns milliseconds of difference between its arguments. Dates are relative to the Unix or POSIX time epoch, midnight, January 1, 1970 UTC.
Arguments may be the name of a `DatePointField`, `TrieDateField`, or date math based on a <<working-with-dates.adoc#working-with-dates,constant date or `NOW`>>.
Arguments may be the name of a `DatePointField`, `TrieDateField`, or date math based on a <<working-with-dates.adoc#,constant date or `NOW`>>.
* `ms()`: Equivalent to `ms(NOW)`, number of milliseconds since the epoch.
* `ms(a):` Returns the number of milliseconds since the epoch that the argument represents.
@@ -332,7 +332,7 @@ Returns the product of multiple values or functions, which are specified in a co
* `mul(x,y)`
=== query Function
Returns the score for the given subquery, or the default value for documents not matching the query. Any type of subquery is supported through either parameter de-referencing `$otherparam` or direct specification of the query string in the <<local-parameters-in-queries.adoc#local-parameters-in-queries,Local Parameters>> through the `v` key.
Returns the score for the given subquery, or the default value for documents not matching the query. Any type of subquery is supported through either parameter de-referencing `$otherparam` or direct specification of the query string in the <<local-parameters-in-queries.adoc#,Local Parameters>> through the `v` key.
*Syntax Examples*
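The original examples are truncated in this diff; an illustrative use of the function with parameter de-referencing (the field and query values are assumptions) is:

[source,bash]
----
http://localhost:8983/solr/techproducts/select?q={!func}query($qq,0.1)&qq={!dismax}solr+rocks
----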
@@ -28,7 +28,7 @@ In this section you will learn how to start a SolrCloud cluster using startup sc
[TIP]
====
This tutorial assumes that you're already familiar with the basics of using Solr. If you need a refresher, please see the <<getting-started.adoc#getting-started,Getting Started section>> to get a grounding in Solr concepts. If you load documents as part of that exercise, you should start over with a fresh Solr installation for these SolrCloud tutorials.
This tutorial assumes that you're already familiar with the basics of using Solr. If you need a refresher, please see the <<getting-started.adoc#,Getting Started section>> to get a grounding in Solr concepts. If you load documents as part of that exercise, you should start over with a fresh Solr installation for these SolrCloud tutorials.
====
[WARNING]
@@ -86,9 +86,9 @@ After starting up all nodes in the cluster, the script prompts you for the name
The suggested default is "gettingstarted" but you might want to choose a name more appropriate for your specific search application.
Next, the script prompts you for the number of shards to distribute the collection across. <<shards-and-indexing-data-in-solrcloud.adoc#shards-and-indexing-data-in-solrcloud,Sharding>> is covered in more detail later on, so if you're unsure, we suggest using the default of 2 so that you can see how a collection is distributed across multiple nodes in a SolrCloud cluster.
Next, the script prompts you for the number of shards to distribute the collection across. <<shards-and-indexing-data-in-solrcloud.adoc#,Sharding>> is covered in more detail later on, so if you're unsure, we suggest using the default of 2 so that you can see how a collection is distributed across multiple nodes in a SolrCloud cluster.
Next, the script will prompt you for the number of replicas to create for each shard. <<shards-and-indexing-data-in-solrcloud.adoc#shards-and-indexing-data-in-solrcloud,Replication>> is covered in more detail later in the guide, so if you're unsure, then use the default of 2 so that you can see how replication is handled in SolrCloud.
Next, the script will prompt you for the number of replicas to create for each shard. <<shards-and-indexing-data-in-solrcloud.adoc#,Replication>> is covered in more detail later in the guide, so if you're unsure, then use the default of 2 so that you can see how replication is handled in SolrCloud.
Lastly, the script will prompt you for the name of a configuration directory for your collection. You can choose *_default* or *sample_techproducts_configs*. The configuration directories are pulled from `server/solr/configsets/` so you can review them beforehand if you wish. The *_default* configuration is useful when you're still designing a schema for your documents and need some flexibility as you experiment with Solr, since it has schemaless functionality. However, after creating your collection, the schemaless functionality can be disabled in order to lock down the schema (so that documents indexed after doing so will not alter the schema) or to configure the schema yourself. This can be done as follows (assuming your collection name is `mycollection`):
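The command itself is truncated in this diff; disabling schemaless update processing takes roughly this form (the collection name and port are assumptions):

[source,bash]
----
bin/solr config -c mycollection -p 8983 -action set-user-property -property update.autoCreateFields -value false
----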
@@ -131,7 +131,7 @@ bin/solr healthcheck -c gettingstarted
The healthcheck command gathers basic information about each replica in a collection, such as number of docs, current status (active, down, etc.), and address (where the replica lives in the cluster).
Documents can now be added to SolrCloud using the <<post-tool.adoc#post-tool,Post Tool>>.
Documents can now be added to SolrCloud using the <<post-tool.adoc#,Post Tool>>.
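For example (using the sample documents shipped with Solr):

[source,bash]
----
bin/post -c gettingstarted example/exampledocs/*.xml
----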
To stop Solr in SolrCloud mode, you would use the `bin/solr` script and issue the `stop` command, as in:
@@ -191,4 +191,4 @@ bin/solr start -cloud -s example/cloud/node3/solr -p 8987 -z localhost:9983
The previous command will start another Solr node on port 8987 with Solr home set to `example/cloud/node3/solr`. The new node will write its log files to `example/cloud/node3/logs`.
Once you're comfortable with how the SolrCloud example works, we recommend using the process described in <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>> for setting up SolrCloud nodes in production.
Once you're comfortable with how the SolrCloud example works, we recommend using the process described in <<taking-solr-to-production.adoc#,Taking Solr to Production>> for setting up SolrCloud nodes in production.
@@ -22,10 +22,10 @@ Solr makes it easy for programmers to develop sophisticated, high-performance se
This section introduces you to the basic Solr architecture and features to help you get up and running quickly. It covers the following topics:
<<solr-tutorial.adoc#solr-tutorial,Solr Tutorial>>: This tutorial covers getting Solr up and running
<<solr-tutorial.adoc#,Solr Tutorial>>: This tutorial covers getting Solr up and running
<<a-quick-overview.adoc#a-quick-overview,A Quick Overview>>: A high-level overview of how Solr works.
<<a-quick-overview.adoc#,A Quick Overview>>: A high-level overview of how Solr works.
<<solr-system-requirements.adoc#solr-system-requirements,Solr System Requirements>>: Solr System Requirements
<<solr-system-requirements.adoc#,Solr System Requirements>>: Solr System Requirements
<<installing-solr.adoc#installing-solr,Installing Solr>>: A walkthrough of the Solr installation process.
<<installing-solr.adoc#,Installing Solr>>: A walkthrough of the Solr installation process.
@@ -26,7 +26,7 @@ The `nodes` function can be combined with the `scoreNodes` function to provide r
[IMPORTANT]
====
This document assumes a basic understanding of graph terminology and streaming expressions. You can begin exploring graph traversal concepts with this https://en.wikipedia.org/wiki/Graph_traversal[Wikipedia article]. More details about streaming expressions are available in this Guide, in the section <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>>.
This document assumes a basic understanding of graph terminology and streaming expressions. You can begin exploring graph traversal concepts with this https://en.wikipedia.org/wiki/Graph_traversal[Wikipedia article]. More details about streaming expressions are available in this Guide, in the section <<streaming-expressions.adoc#,Streaming Expressions>>.
====
== Basic Syntax
@@ -59,7 +59,7 @@ nodes(emails,
The `nodes` function above finds all the edges with "johndoe@apache.org" or "janesmith@apache.org" in the `from` field and gathers the `to` field.
Like all <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>>, you can execute a `nodes` expression by sending it to the `/stream` handler. For example:
Like all <<streaming-expressions.adoc#,Streaming Expressions>>, you can execute a `nodes` expression by sending it to the `/stream` handler. For example:
[source,bash]
----
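# Illustrative sketch only: the original example is truncated in this diff.
# A nodes expression is POSTed to the collection's /stream handler, e.g.:
curl --data-urlencode 'expr=nodes(emails, walk="johndoe@apache.org->from", gather="to")' "http://localhost:8983/solr/emails/stream"
----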
@@ -220,7 +220,7 @@ nodes(emails,
gather="to")
----
In the example above only emails that match the filter query will be included in the traversal. Any Solr query can be included here. So you can do fun things like <<spatial-search.adoc#spatial-search,geospatial queries>>, apply any of the available <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,query parsers>>, or even write custom query parsers to limit the traversal.
In the example above only emails that match the filter query will be included in the traversal. Any Solr query can be included here. So you can do fun things like <<spatial-search.adoc#,geospatial queries>>, apply any of the available <<query-syntax-and-parsing.adoc#,query parsers>>, or even write custom query parsers to limit the traversal.
== Root Streams
@@ -24,7 +24,7 @@ This plugin can be particularly useful in leveraging an extended set of features
Please note that the version of Hadoop library used by Solr is upgraded periodically. While Solr will ensure the stability and backwards compatibility of the structure of the plugin configuration (viz., the parameter names of this plugin), the values of these parameters may change based on the version of Hadoop library. Please review the Hadoop documentation for the version used by your Solr installation for more details.
For some of the authentication schemes (e.g., Kerberos), Solr provides a native implementation of an authentication plugin. If you require a more stable setup, in terms of configuration, ability to perform rolling upgrades, backward compatibility, etc., you should consider using such a plugin. Please review the section <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,Authentication and Authorization Plugins>> for an overview of authentication plugin options in Solr.
For some of the authentication schemes (e.g., Kerberos), Solr provides a native implementation of an authentication plugin. If you require a more stable setup, in terms of configuration, ability to perform rolling upgrades, backward compatibility, etc., you should consider using such a plugin. Please review the section <<authentication-and-authorization-plugins.adoc#,Authentication and Authorization Plugins>> for an overview of authentication plugin options in Solr.
There are two plugin classes:
@@ -71,9 +71,9 @@ The `HttpClientBuilderFactory` implementation used for the Solr internal communi
=== Kerberos Authentication using Hadoop Authentication Plugin
This example lets you configure Solr to use Kerberos Authentication, similar to how you would use the <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>>.
This example lets you configure Solr to use Kerberos Authentication, similar to how you would use the <<kerberos-authentication-plugin.adoc#,Kerberos Authentication Plugin>>.
After consulting the Hadoop authentication library's documentation, you can supply per host configuration parameters using the `solr.*` prefix. As an example, the Hadoop authentication library expects a parameter `kerberos.principal`, which can be supplied as a system property named `solr.kerberos.principal` when starting a Solr node. Refer to the section <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>> for other typical configuration parameters.
After consulting the Hadoop authentication library's documentation, you can supply per host configuration parameters using the `solr.*` prefix. As an example, the Hadoop authentication library expects a parameter `kerberos.principal`, which can be supplied as a system property named `solr.kerberos.principal` when starting a Solr node. Refer to the section <<kerberos-authentication-plugin.adoc#,Kerberos Authentication Plugin>> for other typical configuration parameters.
Please note that this example uses `ConfigurableInternodeAuthHadoopPlugin`, and hence you must provide the `clientBuilderFactory` implementation. As a result, all internode communication will use the Kerberos mechanism, instead of PKI authentication.
@@ -101,7 +101,7 @@ To setup this plugin, use the following in your `security.json` file.
=== Simple Authentication with Delegation Tokens
Similar to the previous example, this is an example of setting up a Solr cluster that uses delegation tokens. Refer to the parameters in the Hadoop authentication library's https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[documentation] or refer to the section <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>> for further details. Please note that this example does not use Kerberos and the requests made to Solr must contain valid delegation tokens.
Similar to the previous example, this is an example of setting up a Solr cluster that uses delegation tokens. Refer to the parameters in the Hadoop authentication library's https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[documentation] or refer to the section <<kerberos-authentication-plugin.adoc#,Kerberos Authentication Plugin>> for further details. Please note that this example does not use Kerberos and the requests made to Solr must contain valid delegation tokens.
To setup this plugin, use the following in your `security.json` file.
@@ -42,7 +42,7 @@ When using `*`, consider adding `hl.requireFieldMatch=true`.
+
Note that the field(s) listed here ought to have compatible text-analysis (defined in the schema) with field(s) referenced in the query to be highlighted.
It may be necessary to modify `hl.q` and `hl.qparser` and/or modify the text analysis.
The following example uses the <<local-parameters-in-queries.adoc#local-parameters-in-queries,local-params>> syntax and <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,the edismax parser>> to highlight fields in `hl.fl`:
The following example uses the <<local-parameters-in-queries.adoc#,local-params>> syntax and <<the-extended-dismax-query-parser.adoc#,the edismax parser>> to highlight fields in `hl.fl`:
`&hl.fl=field1 field2&hl.q={!edismax qf=$hl.fl v=$q}&hl.qparser=lucene&hl.requireFieldMatch=true` (along with other applicable parameters, of course).
+
The default is the value of the `df` parameter which in turn has no default.
@@ -55,7 +55,7 @@ When setting this, you might also need to set `hl.qparser`.
The default is the value of the `q` parameter (already parsed).
`hl.qparser`::
The <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,query parser>> to use for the `hl.q` query. It only applies when `hl.q` is set.
The <<query-syntax-and-parsing.adoc#,query parser>> to use for the `hl.q` query. It only applies when `hl.q` is set.
+
The default is the value of the `defType` parameter which in turn defaults to `lucene`.
@@ -19,11 +19,11 @@
The following sections provide general information about how various SolrCloud features work. To understand these features, it's important to first understand a few key concepts that relate to SolrCloud.
* <<shards-and-indexing-data-in-solrcloud.adoc#shards-and-indexing-data-in-solrcloud,Shards and Indexing Data in SolrCloud>>
* <<distributed-requests.adoc#distributed-requests,Distributed Requests>>
* <<aliases.adoc#aliases,Standard and Routed Aliases>>
* <<shards-and-indexing-data-in-solrcloud.adoc#,Shards and Indexing Data in SolrCloud>>
* <<distributed-requests.adoc#,Distributed Requests>>
* <<aliases.adoc#,Standard and Routed Aliases>>
If you are already familiar with SolrCloud concepts and basic functionality, you can skip to the section covering <<solrcloud-configuration-and-parameters.adoc#solrcloud-configuration-and-parameters,SolrCloud Configuration and Parameters>>.
If you are already familiar with SolrCloud concepts and basic functionality, you can skip to the section covering <<solrcloud-configuration-and-parameters.adoc#,SolrCloud Configuration and Parameters>>.
== Key SolrCloud Concepts
@@ -61,7 +61,7 @@ v2: `api/node/logging` |{solr-javadocs}/core/org/apache/solr/handler/admin/ShowF
Luke:: Expose the internal Lucene index. This handler must have a collection name in the path to the endpoint.
+
*Documentation*: <<luke-request-handler.adoc#luke-request-handler,Luke Request Handler>>
*Documentation*: <<luke-request-handler.adoc#,Luke Request Handler>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@@ -72,7 +72,7 @@ Luke:: Expose the internal Lucene index. This handler must have a collection nam
MBeans:: Provide info about all registered {solr-javadocs}/core/org/apache/solr/core/SolrInfoBean.html[SolrInfoMBeans]. This handler must have a collection name in the path to the endpoint.
+
*Documentation*: <<mbean-request-handler.adoc#mbean-request-handler,MBean Request Handler>>
*Documentation*: <<mbean-request-handler.adoc#,MBean Request Handler>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@@ -82,7 +82,7 @@ MBeans:: Provide info about all registered {solr-javadocs}/core/org/apache/solr/
Ping:: Health check. This handler must have a collection name in the path to the endpoint.
+
*Documentation*: <<ping.adoc#ping,Ping>>
*Documentation*: <<ping.adoc#,Ping>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@@ -151,7 +151,7 @@ Document Analysis:: Return a breakdown of the analysis process of the given docu
|`solr/<collection>/analysis/document` |{solr-javadocs}/core/org/apache/solr/handler/DocumentAnalysisRequestHandler.html[DocumentAnalysisRequestHandler] |`_ANALYSIS_DOCUMENT`
|===
Field Analysis:: Return index- and query-time analysis over the given field(s)/field type(s). This handler drives the <<analysis-screen.adoc#analysis-screen,Analysis screen>> in Solr's Admin UI.
Field Analysis:: Return index- and query-time analysis over the given field(s)/field type(s). This handler drives the <<analysis-screen.adoc#,Analysis screen>> in Solr's Admin UI.
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@ -164,7 +164,7 @@ Field Analysis:: Return index- and query-time analysis over the given field(s)/f
[horizontal]
Config API:: Retrieve and modify Solr configuration.
+
*Documentation*: <<config-api.adoc#config-api,Config API>>
*Documentation*: <<config-api.adoc#,Config API>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@ -192,7 +192,7 @@ Replication:: Replicate indexes for SolrCloud recovery and Leader/Follower index
Schema API:: Retrieve and modify the Solr schema.
+
*Documentation*: <<schema-api.adoc#schema-api,Schema API>>
*Documentation*: <<schema-api.adoc#,Schema API>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@ -207,7 +207,7 @@ v2: `api/collections/<collection>/schema`, `api/cores/<core>/schema` |{solr-java
[horizontal]
Export:: Export full sorted result sets.
+
*Documentation*: <<exporting-result-sets.adoc#exporting-result-sets,Exporting Result Sets>>
*Documentation*: <<exporting-result-sets.adoc#,Exporting Result Sets>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@ -217,7 +217,7 @@ Export:: Export full sorted result sets.
RealTimeGet:: Low-latency retrieval of the latest version of a document.
+
*Documentation*: <<realtime-get.adoc#realtime-get,RealTime Get>>
*Documentation*: <<realtime-get.adoc#,RealTime Get>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@ -227,7 +227,7 @@ RealTimeGet:: Low-latency retrieval of the latest version of a document.
Graph Traversal:: Return http://graphml.graphdrawing.org/[GraphML] formatted output from a `gatherNodes` streaming expression.
+
*Documentation*: <<graph-traversal.adoc#graph-traversal,Graph Traversal>>
*Documentation*: <<graph-traversal.adoc#,Graph Traversal>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@ -270,7 +270,7 @@ Terms:: Return a field's indexed terms and the number of documents containing ea
[horizontal]
Update:: Add, delete and update indexed documents formatted as SolrXML, CSV, SolrJSON or javabin.
+
*Documentation*: <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>
*Documentation*: <<uploading-data-with-index-handlers.adoc#,Uploading Data with Index Handlers>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@ -300,7 +300,7 @@ JSON Updates:: Add, delete and update SolrJSON-formatted documents.
Custom JSON Updates:: Add and update custom JSON-formatted documents.
+
*Documentation*: <<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,Transforming and Indexing Custom JSON>>
*Documentation*: <<transforming-and-indexing-custom-json.adoc#,Transforming and Indexing Custom JSON>>
+
[cols="3*.",frame=none,grid=cols,options="header"]
|===
@ -310,7 +310,7 @@ Custom JSON Updates:: Add and update custom JSON-formatted documents.
== How to View Implicit Handler Paramsets
You can see configuration for all request handlers, including the implicit request handlers, via the <<config-api.adoc#config-api,Config API>>.
You can see configuration for all request handlers, including the implicit request handlers, via the <<config-api.adoc#,Config API>>.
To include the expanded paramset in the response, as well as the effective parameters from merging the paramset parameters with the built-in parameters, use the `expandParams` request parameter. For the `/export` request handler, you can make a request like this:
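A minimal sketch of such a request, assuming a collection named `gettingstarted` (substitute your own collection name):

[source,bash]
----
# Ask the Config API for the /export handler definition and
# expand its paramset, plus the effective merged parameters
curl "http://localhost:8983/solr/gettingstarted/config/requestHandler?componentName=/export&expandParams=true"
----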
@ -374,4 +374,4 @@ The response will look similar to:
== How to Edit Implicit Handler Paramsets
Because implicit request handlers are not present in `solrconfig.xml`, configuration of their associated `default`, `invariant` and `appends` parameters may be edited via the <<request-parameters-api.adoc#request-parameters-api, Request Parameters API>> using the paramset listed in the above table. However, other parameters, including SearchHandler components, may not be modified. The invariants and appends specified in the implicit configuration cannot be overridden.
Because implicit request handlers are not present in `solrconfig.xml`, configuration of their associated `default`, `invariant` and `appends` parameters may be edited via the <<request-parameters-api.adoc#, Request Parameters API>> using the paramset listed in the above table. However, other parameters, including SearchHandler components, may not be modified. The invariants and appends specified in the implicit configuration cannot be overridden.
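As an illustrative, hedged sketch, here is one way to override a default in the `_ANALYSIS_DOCUMENT` paramset listed in the table above (the collection name, and the `df` value, which assumes the default schema's `_text_` field, are assumptions):

[source,bash]
----
# Plain keys in a paramset act as request defaults;
# "update" merges new values into an existing paramset
curl http://localhost:8983/solr/techproducts/config/params \
  -H 'Content-type:application/json' \
  -d '{"update": {"_ANALYSIS_DOCUMENT": {"df": "_text_"}}}'
----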

View File

@ -38,7 +38,7 @@ Solr includes a Java implementation of index replication that works over HTTP:
.Replication In SolrCloud
[NOTE]
====
Although there is no explicit concept of "leader/follower" nodes in a <<solrcloud.adoc#solrcloud,SolrCloud>> cluster, the `ReplicationHandler` discussed on this page is still used by SolrCloud as needed to support "shard recovery", but this is done in a peer-to-peer manner.
Although there is no explicit concept of "leader/follower" nodes in a <<solrcloud.adoc#,SolrCloud>> cluster, the `ReplicationHandler` discussed on this page is still used by SolrCloud as needed to support "shard recovery", but this is done in a peer-to-peer manner.
When using SolrCloud, the `ReplicationHandler` must be available via the `/replication` path. Solr does this implicitly unless overridden explicitly in your `solrconfig.xml`, but if you wish to override the default behavior, make certain that you do not explicitly set any of the "leader" or "follower" configuration options mentioned below, or they will interfere with normal SolrCloud operation.
====
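Although the handler is implicit, you can check that it is responding; a minimal sketch, assuming a core named `techproducts`:

[source,bash]
----
# "details" reports the local index version and any
# leader/follower replication configuration
curl "http://localhost:8983/solr/techproducts/replication?command=details"
----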

View File

@ -66,18 +66,18 @@ The Guide includes the following sections:
[sidebar.col]
****
The *<<getting-started.adoc#getting-started,Getting Started>>* section guides you through the installation and setup of Solr. A detailed tutorial for first-time users shows many of Solr's features.
The *<<getting-started.adoc#,Getting Started>>* section guides you through the installation and setup of Solr. A detailed tutorial for first-time users shows many of Solr's features.
*<<using-the-solr-administration-user-interface.adoc#using-the-solr-administration-user-interface,Using the Solr Administration User Interface>>*: This section introduces the Web-based interface for administering Solr. From your browser you can view configuration files, submit queries, view logfile settings and Java environment settings, and monitor and control distributed configurations.
*<<using-the-solr-administration-user-interface.adoc#,Using the Solr Administration User Interface>>*: This section introduces the Web-based interface for administering Solr. From your browser you can view configuration files, submit queries, view logfile settings and Java environment settings, and monitor and control distributed configurations.
****
.Deploying Solr
[sidebar.col]
****
*<<deployment-and-operations.adoc#deployment-and-operations,Deployment and Operations>>*: Once you have Solr configured, you want to deploy it to production and keep it up to date. This section includes information about how to take Solr to production, run it in HDFS or AWS, and information about upgrades and managing Solr from the command line.
*<<deployment-and-operations.adoc#,Deployment and Operations>>*: Once you have Solr configured, you want to deploy it to production and keep it up to date. This section includes information about how to take Solr to production, run it in HDFS or AWS, and information about upgrades and managing Solr from the command line.
*<<monitoring-solr.adoc#monitoring-solr,Monitoring Solr>>*: Solr includes options for keeping an eye on the performance of your Solr cluster with the web-based administration console, through the command line interface, or using REST APIs.
*<<monitoring-solr.adoc#,Monitoring Solr>>*: Solr includes options for keeping an eye on the performance of your Solr cluster with the web-based administration console, through the command line interface, or using REST APIs.
****
--
@ -91,22 +91,22 @@ The *<<getting-started.adoc#getting-started,Getting Started>>* section guides yo
.Indexing Documents
[sidebar.col]
****
*<<indexing-and-basic-data-operations.adoc#indexing-and-basic-data-operations,Indexing and Basic Data Operations>>*: This section describes the indexing process and basic index operations, such as commit, optimize, and rollback.
*<<indexing-and-basic-data-operations.adoc#,Indexing and Basic Data Operations>>*: This section describes the indexing process and basic index operations, such as commit, optimize, and rollback.
*<<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>*: This section describes how Solr organizes data in the index. It explains how a Solr schema defines the fields and field types which Solr uses to organize data within the document files it indexes.
*<<documents-fields-and-schema-design.adoc#,Documents, Fields, and Schema Design>>*: This section describes how Solr organizes data in the index. It explains how a Solr schema defines the fields and field types which Solr uses to organize data within the document files it indexes.
*<<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>>*: This section explains how Solr prepares text for indexing and searching. Analyzers parse text and produce a stream of tokens, lexical units used for indexing and searching. Tokenizers break field data down into tokens. Filters perform other transformational or selective work on token streams.
*<<understanding-analyzers-tokenizers-and-filters.adoc#,Understanding Analyzers, Tokenizers, and Filters>>*: This section explains how Solr prepares text for indexing and searching. Analyzers parse text and produce a stream of tokens, lexical units used for indexing and searching. Tokenizers break field data down into tokens. Filters perform other transformational or selective work on token streams.
****
.Searching Documents
[sidebar.col]
****
*<<searching.adoc#searching,Searching>>*: This section presents an overview of the search process in Solr. It describes the main components used in searches, including request handlers, query parsers, and response writers. It lists the query parameters that can be passed to Solr, and it describes features such as boosting and faceting, which can be used to fine-tune search results.
*<<searching.adoc#,Searching>>*: This section presents an overview of the search process in Solr. It describes the main components used in searches, including request handlers, query parsers, and response writers. It lists the query parameters that can be passed to Solr, and it describes features such as boosting and faceting, which can be used to fine-tune search results.
*<<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>>*: A stream processing language for Solr, with a suite of functions to perform many types of queries and parallel execution tasks.
*<<streaming-expressions.adoc#,Streaming Expressions>>*: A stream processing language for Solr, with a suite of functions to perform many types of queries and parallel execution tasks.
*<<client-apis.adoc#client-apis,Client APIs>>*: This section tells you how to access Solr through various client APIs, including JavaScript, JSON, and Ruby.
*<<client-apis.adoc#,Client APIs>>*: This section tells you how to access Solr through various client APIs, including JavaScript, JSON, and Ruby.
****
--
@ -120,20 +120,20 @@ The *<<getting-started.adoc#getting-started,Getting Started>>* section guides yo
.Scaling Solr
[sidebar.col]
****
*<<solrcloud.adoc#solrcloud,SolrCloud>>*: This section describes SolrCloud, which provides comprehensive distributed capabilities.
*<<solrcloud.adoc#,SolrCloud>>*: This section describes SolrCloud, which provides comprehensive distributed capabilities.
*<<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>*: This section tells you how to grow a Solr distribution by dividing a large index into sections called shards, which are then distributed across multiple servers, or by replicating a single index across multiple servers.
*<<legacy-scaling-and-distribution.adoc#,Legacy Scaling and Distribution>>*: This section tells you how to grow a Solr distribution by dividing a large index into sections called shards, which are then distributed across multiple servers, or by replicating a single index across multiple servers.
*<<circuit-breakers.adoc#circuit-breakers,Circuit Breakers>>*: This section talks about circuit breakers, a way of allowing a higher stability of Solr nodes and increased service level guarantees of requests that are accepted by Solr.
*<<circuit-breakers.adoc#,Circuit Breakers>>*: This section talks about circuit breakers, a way of allowing a higher stability of Solr nodes and increased service level guarantees of requests that are accepted by Solr.
*<<rate-limiters.adoc#rate-limiters,Request Rate Limiters>>*: This section talks about request rate limiters, a way of guaranteeing throughput per request type and dedicating resource quotas by resource type. Rate limiter configurations are per instance/JVM and applied to the entire JVM, not at a core/collection level.
*<<rate-limiters.adoc#,Request Rate Limiters>>*: This section talks about request rate limiters, a way of guaranteeing throughput per request type and dedicating resource quotas by resource type. Rate limiter configurations are per instance/JVM and applied to the entire JVM, not at a core/collection level.
****
.Advanced Configuration
[sidebar.col]
****
*<<securing-solr.adoc#securing-solr,Securing Solr>>*: When planning how to secure Solr, you should consider which of the available features or approaches are right for you.
*<<securing-solr.adoc#,Securing Solr>>*: When planning how to secure Solr, you should consider which of the available features or approaches are right for you.
*<<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,The Well-Configured Solr Instance>>*: This section discusses performance tuning for Solr. It begins with an overview of the `solrconfig.xml` file, then tells you how to configure cores with `solr.xml`, how to configure the Lucene index writer, and more.
*<<the-well-configured-solr-instance.adoc#,The Well-Configured Solr Instance>>*: This section discusses performance tuning for Solr. It begins with an overview of the `solrconfig.xml` file, then tells you how to configure cores with `solr.xml`, how to configure the Lucene index writer, and more.
****
--

View File

@ -168,7 +168,7 @@ The defaults for the above attributes are dynamically set based on whether the u
=== mergedSegmentWarmer
When using Solr for <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, a merged segment warmer can be configured to warm the reader on the newly merged segment, before the merge commits. This is not required for near real-time search, but will reduce search latency on opening a new near real-time reader after a merge completes.
When using Solr for <<near-real-time-searching.adoc#,Near Real Time Searching>>, a merged segment warmer can be configured to warm the reader on the newly merged segment, before the merge commits. This is not required for near real-time search, but will reduce search latency on opening a new near real-time reader after a merge completes.
[source,xml]
----
@ -198,14 +198,14 @@ Many <<Merging Index Segments,Merge Policy>> implementations support `noCFSRatio
The LockFactory options specify the locking implementation to use.
The set of valid lock type options depends on the <<datadir-and-directoryfactory-in-solrconfig.adoc#datadir-and-directoryfactory-in-solrconfig,DirectoryFactory>> you have configured.
The set of valid lock type options depends on the <<datadir-and-directoryfactory-in-solrconfig.adoc#,DirectoryFactory>> you have configured.
The values listed below are supported by `StandardDirectoryFactory` (the default):
* `native` (default) uses NativeFSLockFactory to specify native OS file locking. If a second Solr process attempts to access the directory, it will fail. Do not use when multiple Solr web applications are attempting to share a single index. See also the {lucene-javadocs}/core/org/apache/lucene/store/NativeFSLockFactory.html[Javadocs].
* `simple` uses SimpleFSLockFactory to specify a plain file for locking. See also the {lucene-javadocs}/core/org/apache/lucene/store/SimpleFSLockFactory.html[Javadocs].
* `single` (expert) uses SingleInstanceLockFactory. Use for special situations of a read-only index directory, or when there is no possibility of more than one process trying to modify the index (even sequentially). This type will protect against multiple cores within the _same_ JVM attempting to access the same index. WARNING! If multiple Solr instances in different JVMs modify an index, this type will _not_ protect against index corruption. See also the {lucene-javadocs}/core/org/apache/lucene/store/SingleInstanceLockFactory.html[Javadocs].
* `hdfs` uses HdfsLockFactory to support reading and writing index and transaction log files to a HDFS filesystem. See the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>> for more details on using this feature.
* `hdfs` uses HdfsLockFactory to support reading and writing index and transaction log files to a HDFS filesystem. See the section <<running-solr-on-hdfs.adoc#,Running Solr on HDFS>> for more details on using this feature.
[source,xml]
----

View File

@ -29,28 +29,28 @@
This section describes how Solr adds data to its index. It covers the following topics:
* *<<introduction-to-solr-indexing.adoc#introduction-to-solr-indexing,Introduction to Solr Indexing>>*: An overview of Solr's indexing process.
* *<<introduction-to-solr-indexing.adoc#,Introduction to Solr Indexing>>*: An overview of Solr's indexing process.
* *<<post-tool.adoc#post-tool,Post Tool>>*: Information about using `post.jar` to quickly upload some content to your system.
* *<<post-tool.adoc#,Post Tool>>*: Information about using `post.jar` to quickly upload some content to your system.
* *<<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>*: Information about using Solr's Index Handlers to upload XML/XSLT, JSON and CSV data.
* *<<uploading-data-with-index-handlers.adoc#,Uploading Data with Index Handlers>>*: Information about using Solr's Index Handlers to upload XML/XSLT, JSON and CSV data.
* *<<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,Transforming and Indexing Custom JSON>>*: Index any JSON of your choice
* *<<transforming-and-indexing-custom-json.adoc#,Transforming and Indexing Custom JSON>>*: Index any JSON of your choice
* *<<indexing-nested-documents.adoc#indexing-nested-documents,Indexing Nested Documents>>*: Detailed information about indexing and schema configuration for nested documents.
* *<<indexing-nested-documents.adoc#,Indexing Nested Documents>>*: Detailed information about indexing and schema configuration for nested documents.
* *<<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Uploading Data with Solr Cell using Apache Tika>>*: Information about using the Solr Cell framework to upload data for indexing.
* *<<uploading-data-with-solr-cell-using-apache-tika.adoc#,Uploading Data with Solr Cell using Apache Tika>>*: Information about using the Solr Cell framework to upload data for indexing.
* *<<updating-parts-of-documents.adoc#updating-parts-of-documents,Updating Parts of Documents>>*: Information about how to use atomic updates and optimistic concurrency with Solr.
* *<<updating-parts-of-documents.adoc#,Updating Parts of Documents>>*: Information about how to use atomic updates and optimistic concurrency with Solr.
* *<<detecting-languages-during-indexing.adoc#detecting-languages-during-indexing,Detecting Languages During Indexing>>*: Information about using language identification during the indexing process.
* *<<detecting-languages-during-indexing.adoc#,Detecting Languages During Indexing>>*: Information about using language identification during the indexing process.
* *<<de-duplication.adoc#de-duplication,De-Duplication>>*: Information about configuring Solr to mark duplicate documents as they are indexed.
* *<<de-duplication.adoc#,De-Duplication>>*: Information about configuring Solr to mark duplicate documents as they are indexed.
* *<<content-streams.adoc#content-streams,Content Streams>>*: Information about streaming content to Solr Request Handlers.
* *<<content-streams.adoc#,Content Streams>>*: Information about streaming content to Solr Request Handlers.
* *<<reindexing.adoc#reindexing,Reindexing>>*: Details about when reindexing is required or recommended, and some strategies for completely reindexing your documents.
* *<<reindexing.adoc#,Reindexing>>*: Details about when reindexing is required or recommended, and some strategies for completely reindexing your documents.
== Indexing Using Client APIs
Using client APIs, such as <<using-solrj.adoc#using-solrj,SolrJ>>, from your applications is an important option for updating Solr indexes. See the <<client-apis.adoc#client-apis,Client APIs>> section for more information.
Using client APIs, such as <<using-solrj.adoc#,SolrJ>>, from your applications is an important option for updating Solr indexes. See the <<client-apis.adoc#,Client APIs>> section for more information.

View File

@ -18,7 +18,7 @@
// specific language governing permissions and limitations
// under the License.
Solr supports indexing nested documents, described here, and ways to <<searching-nested-documents.adoc#searching-nested-documents,search and retrieve>> them very efficiently.
Solr supports indexing nested documents, described here, and ways to <<searching-nested-documents.adoc#,search and retrieve>> them very efficiently.
By way of examples: nested documents in Solr can be used to bind a blog post (parent document)
with comments (child documents) -- or as a way to model major product lines as parent documents,
@ -33,7 +33,7 @@ In terms of performance, indexing the relationships between documents usually yi
since the relationships are already stored in the index and do not need to be computed.
However, nested documents are less flexible than query time joins, as they impose rules that some applications may not be able to accept.
Nested documents may be indexed via either the XML or JSON data syntax, and are also supported by <<using-solrj.adoc#using-solrj,SolrJ>> with javabin.
Nested documents may be indexed via either the XML or JSON data syntax, and are also supported by <<using-solrj.adoc#,SolrJ>> with javabin.
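A minimal sketch of the JSON syntax (the collection and field names are illustrative, and the classic `_childDocuments_` key is only one way to attach children):

[source,bash]
----
# Index one parent (blog post) with one nested child (comment)
curl -X POST -H 'Content-type:application/json' \
  'http://localhost:8983/solr/techproducts/update?commit=true' -d '[
  {
    "id": "post-1",
    "title_t": "A blog post",
    "_childDocuments_": [
      { "id": "comment-1", "comment_t": "A comment on the post" }
    ]
  }
]'
----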
[CAUTION]

View File

@ -20,7 +20,7 @@ The `<initParams>` section of `solrconfig.xml` allows you to define request hand
There are a couple of use cases where this might be desired:
* Some handlers are implicitly defined in code - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> - and there should be a way to add/append/override some of the implicitly defined properties.
* Some handlers are implicitly defined in code - see <<implicit-requesthandlers.adoc#,Implicit RequestHandlers>> - and there should be a way to add/append/override some of the implicitly defined properties.
* There are a few properties that are used across handlers. This helps you keep only a single definition of those properties and apply them over multiple handlers.
For example, if you want several of your search handlers to return the same list of fields, you can create an `<initParams>` section without having to define the same set of parameters in each request handler definition. If you have a single request handler that should return different fields, you can define the overriding parameters in individual `<requestHandler>` sections as usual.

View File

@ -19,7 +19,7 @@
Installation of Solr on Unix-compatible or Windows servers generally requires simply extracting (or unzipping) the download package.
Please be sure to review the <<solr-system-requirements.adoc#solr-system-requirements,Solr System Requirements>> before starting Solr.
Please be sure to review the <<solr-system-requirements.adoc#,Solr System Requirements>> before starting Solr.
== Available Solr Packages
@ -37,7 +37,7 @@ When getting started with Solr, all you need to do is extract the Solr distribut
When you've progressed past initial evaluation of Solr, you'll want to take care to plan your implementation. You may need to reinstall Solr on another server or set up a clustered SolrCloud environment.
When you're ready to set up Solr for a production environment, please refer to the instructions provided on the <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>> page.
When you're ready to set up Solr for a production environment, please refer to the instructions provided on the <<taking-solr-to-production.adoc#,Taking Solr to Production>> page.
.What Size Server Do I Need?
[NOTE]
@ -47,7 +47,7 @@ How to size your Solr installation is a complex question that relies on a number
It's highly recommended that you spend a bit of time thinking about the factors that will impact hardware sizing for your Solr implementation. A very good blog post that discusses the issues to consider is https://lucidworks.com/2012/07/23/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/[Sizing Hardware in the Abstract: Why We Don't have a Definitive Answer].
====
One thing to note when planning your installation is that a hard limit exists in Lucene for the number of documents in a single index: approximately 2.14 billion documents (2,147,483,647 to be exact). In practice, it is highly unlikely that such a large number of documents would fit and perform well in a single index, and you will likely need to distribute your index across a cluster before you ever approach this number. If you know you will exceed this number of documents in total before you've even started indexing, it's best to plan your installation with <<solrcloud.adoc#solrcloud,SolrCloud>> as part of your design from the start.
One thing to note when planning your installation is that a hard limit exists in Lucene for the number of documents in a single index: approximately 2.14 billion documents (2,147,483,647 to be exact). In practice, it is highly unlikely that such a large number of documents would fit and perform well in a single index, and you will likely need to distribute your index across a cluster before you ever approach this number. If you know you will exceed this number of documents in total before you've even started indexing, it's best to plan your installation with <<solrcloud.adoc#,SolrCloud>> as part of your design from the start.
== Package Installation
@ -68,15 +68,15 @@ After installing Solr, you'll see the following directories and files within the
bin/::
This directory includes several important scripts that will make using Solr easier.
solr and solr.cmd::: This is <<solr-control-script-reference.adoc#solr-control-script-reference,Solr's Control Script>>, also known as `bin/solr` (*nix) / `bin/solr.cmd` (Windows). This script is the preferred tool to start and stop Solr. You can also create collections or cores, configure authentication, and work with configuration files when running in SolrCloud mode.
solr and solr.cmd::: This is <<solr-control-script-reference.adoc#,Solr's Control Script>>, also known as `bin/solr` (*nix) / `bin/solr.cmd` (Windows). This script is the preferred tool to start and stop Solr. You can also create collections or cores, configure authentication, and work with configuration files when running in SolrCloud mode.
post::: The <<post-tool.adoc#post-tool,PostTool>>, which provides a simple command line interface for POSTing content to Solr.
post::: The <<post-tool.adoc#,PostTool>>, which provides a simple command line interface for POSTing content to Solr.
solr.in.sh and solr.in.cmd:::
These are property files for *nix and Windows systems, respectively. System-level properties for Java, Jetty, and Solr are configured here. Many of these settings can be overridden when using `bin/solr` / `bin/solr.cmd`, but this allows you to set all the properties in one place.
install_solr_services.sh:::
This script is used on *nix systems to install Solr as a service. It is described in more detail in the section <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>.
This script is used on *nix systems to install Solr as a service. It is described in more detail in the section <<taking-solr-to-production.adoc#,Taking Solr to Production>>.
contrib/::
Solr's `contrib` directory includes add-on plugins for specialized features of Solr.
@ -97,17 +97,17 @@ server/::
This directory is where the heart of the Solr application resides. A README in this directory provides a detailed overview, but here are some highlights:
* Solr's Admin UI (`server/solr-webapp`)
* Jetty libraries (`server/lib`)
* Log files (`server/logs`) and log configurations (`server/resources`). See the section <<configuring-logging.adoc#configuring-logging,Configuring Logging>> for more details on how to customize Solr's default logging.
* Log files (`server/logs`) and log configurations (`server/resources`). See the section <<configuring-logging.adoc#,Configuring Logging>> for more details on how to customize Solr's default logging.
* Sample configsets (`server/solr/configsets`)
== Solr Examples
Solr includes a number of example documents and configurations to use when getting started. If you ran through the <<solr-tutorial.adoc#solr-tutorial,Solr Tutorial>>, you have already interacted with some of these files.
Solr includes a number of example documents and configurations to use when getting started. If you ran through the <<solr-tutorial.adoc#,Solr Tutorial>>, you have already interacted with some of these files.
Here are the examples included with Solr:
exampledocs::
This is a small set of simple CSV, XML, and JSON files that can be used with `bin/post` when first getting started with Solr. For more information about using `bin/post` with these files, see <<post-tool.adoc#post-tool,Post Tool>>.
This is a small set of simple CSV, XML, and JSON files that can be used with `bin/post` when first getting started with Solr. For more information about using `bin/post` with these files, see <<post-tool.adoc#,Post Tool>>.
files::
The `files` directory provides a basic search UI for documents such as Word or PDF that you may have stored locally. See the README there for details on how to use this example.
@ -137,7 +137,7 @@ This will start Solr in the background, listening on port 8983.
When you start Solr in the background, the script will wait to make sure Solr starts correctly before returning to the command line prompt.
TIP: All of the options for the Solr CLI are described in the section <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>>.
TIP: All of the options for the Solr CLI are described in the section <<solr-control-script-reference.adoc#,Solr Control Script Reference>>.
=== Start Solr with a Specific Bundled Example
@ -151,7 +151,7 @@ bin/solr -e techproducts
Currently, the available examples you can run are: techproducts, schemaless, and cloud. See the section <<solr-control-script-reference.adoc#running-with-example-configurations,Running with Example Configurations>> for details on each example.
.Getting Started with SolrCloud
NOTE: Running the `cloud` example starts Solr in <<solrcloud.adoc#solrcloud,SolrCloud>> mode. For more information on starting Solr in SolrCloud mode, see the section <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
NOTE: Running the `cloud` example starts Solr in <<solrcloud.adoc#,SolrCloud>> mode. For more information on starting Solr in SolrCloud mode, see the section <<getting-started-with-solrcloud.adoc#,Getting Started with SolrCloud>>.
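For instance:

[source,bash]
----
# Walks through an interactive setup of a small local SolrCloud cluster
bin/solr -e cloud
----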
=== Check if Solr is Running

View File

@ -26,4 +26,4 @@ Clients use Solr's five fundamental operations to work with Solr. The operations
Queries are executed by creating a URL that contains all the query parameters. Solr examines the request URL, performs the query, and returns the results. The other operations are similar, although in certain cases the HTTP request is a POST operation and contains information beyond whatever is included in the request URL. An index operation, for example, may contain a document in the body of the request.
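As a minimal sketch of such a query request (the `techproducts` example collection and the field list are assumptions):

[source,bash]
----
# All query parameters are carried in the request URL
curl "http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,name,price"
----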
Solr also features an EmbeddedSolrServer that offers a Java API without requiring an HTTP connection. For details, see <<using-solrj.adoc#using-solrj,Using SolrJ>>.
Solr also features an EmbeddedSolrServer that offers a Java API without requiring an HTTP connection. For details, see <<using-solrj.adoc#,Using SolrJ>>.

View File

@ -20,11 +20,11 @@ Both Lucene and Solr were designed to scale to support large implementations wit
This section covers:
* <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,distributing>> an index across multiple servers
* <<index-replication.adoc#index-replication,replicating>> an index on multiple servers
* <<merging-indexes.adoc#merging-indexes,merging indexes>>
* <<distributed-search-with-index-sharding.adoc#,distributing>> an index across multiple servers
* <<index-replication.adoc#,replicating>> an index on multiple servers
* <<merging-indexes.adoc#,merging indexes>>
If you need full scale distribution of indexes and queries, as well as replication, load balancing and failover, you may want to use SolrCloud. Full details on configuring and using SolrCloud is available in the section <<solrcloud.adoc#solrcloud,SolrCloud>>.
If you need full scale distribution of indexes and queries, as well as replication, load balancing and failover, you may want to use SolrCloud. Full details on configuring and using SolrCloud is available in the section <<solrcloud.adoc#,SolrCloud>>.
== What Problem Does Distribution Solve?
@ -40,4 +40,4 @@ Replicating an index is useful when:
* You have a large search volume which one machine cannot handle, so you need to distribute searches across multiple read-only copies of the index.
* There is a high volume/high rate of indexing which consumes machine resources and reduces search performance on the indexing machine, so you need to separate indexing and searching.
* You want to make a backup of the index (see <<making-and-restoring-backups.adoc#making-and-restoring-backups,Making and Restoring Backups>>).
* You want to make a backup of the index (see <<making-and-restoring-backups.adoc#,Making and Restoring Backups>>).

View File

@ -24,15 +24,15 @@ A Solr index can accept data from many different sources, including XML files, c
Here are the three most common ways of loading data into a Solr index:
* Using the <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Solr Cell>> framework built on Apache Tika for ingesting binary files or structured files such as Office, Word, PDF, and other proprietary formats.
* Using the <<uploading-data-with-solr-cell-using-apache-tika.adoc#,Solr Cell>> framework built on Apache Tika for ingesting binary files or structured files such as Office, Word, PDF, and other proprietary formats.
* Uploading XML files by sending HTTP requests to the Solr server from any environment where such requests can be generated.
* Writing a custom Java application to ingest data through Solr's Java Client API (which is described in more detail in <<client-apis.adoc#client-apis,Client APIs>>). Using the Java API may be the best choice if you're working with an application, such as a Content Management System (CMS), that offers a Java API.
* Writing a custom Java application to ingest data through Solr's Java Client API (which is described in more detail in <<client-apis.adoc#,Client APIs>>). Using the Java API may be the best choice if you're working with an application, such as a Content Management System (CMS), that offers a Java API.
Regardless of the method used to ingest data, there is a common basic data structure for data being fed into a Solr index: a _document_ containing multiple _fields,_ each with a _name_ and containing _content,_ which may be empty. One of the fields is usually designated as a unique ID field (analogous to a primary key in a database), although the use of a unique ID field is not strictly required by Solr.
If the field name is defined in the Schema that is associated with the index, then the analysis steps associated with that field will be applied to its content when the content is tokenized. Fields that are not explicitly defined in the Schema will either be ignored or mapped to a dynamic field definition (see <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>), if one matching the field name exists.
If the field name is defined in the Schema that is associated with the index, then the analysis steps associated with that field will be applied to its content when the content is tokenized. Fields that are not explicitly defined in the Schema will either be ignored or mapped to a dynamic field definition (see <<documents-fields-and-schema-design.adoc#,Documents, Fields, and Schema Design>>), if one matching the field name exists.
== The Solr Example Directory

View File

@ -581,7 +581,7 @@ Unlike all the facets discussed so far, Aggregation functions (also called *face
|relatedness |`relatedness('popularity:[100 TO *]','inStock:true')`|A function for computing a relatedness score of the documents in the domain to a Foreground set, relative to a Background set (both defined as queries). This is primarily for use when building <<json-facet-api.adoc#relatedness-and-semantic-knowledge-graphs,Semantic Knowledge Graphs>>.
|===
Numeric aggregation functions such as `avg` can be computed on any numeric field, or on a <<function-queries.adoc#function-queries,nested function>> of multiple numeric fields such as `avg(div(popularity,price))`.
Numeric aggregation functions such as `avg` can be computed on any numeric field, or on a <<function-queries.adoc#,nested function>> of multiple numeric fields such as `avg(div(popularity,price))`.
The most common way of requesting an aggregation function is as a simple String containing the expression you wish to compute:
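For instance, a minimal sketch using the JSON Request API (the collection and `price` field are illustrative):

[source,bash]
----
# "avg(price)" is the aggregation expression, passed as a plain string
curl http://localhost:8983/solr/techproducts/query -d '
{
  "query": "*:*",
  "facet": {
    "avg_price": "avg(price)"
  }
}'
----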
@ -617,7 +617,7 @@ include::{example-source-dir}JsonRequestApiTest.java[tag=solrj-json-metrics-face
====
--
An expanded form allows for <<local-parameters-in-queries.adoc#local-parameters-in-queries,Local Parameters>> to be specified. These may be used explicitly by some specialized aggregations such as `<<json-facet-api.adoc#relatedness-options,relatedness()>>`, but can also be used as parameter references to make aggregation expressions more readable, without needing to use (global) request parameters:
An expanded form allows for <<local-parameters-in-queries.adoc#,Local Parameters>> to be specified. These may be used explicitly by some specialized aggregations such as `<<json-facet-api.adoc#relatedness-options,relatedness()>>`, but can also be used as parameter references to make aggregation expressions more readable, without needing to use (global) request parameters:
[.dynamic-tabs]
--
@ -834,7 +834,7 @@ As discussed above, facets compute buckets or statistics based on their "domain"
* By default, top-level facets use the set of all documents matching the main query as their domain.
* Nested "sub-facets" are computed for every bucket of their parent facet, using a domain containing all documents in that bucket.
In addition to this default behavior, domains can also be widened, narrowed, or changed entirely. The JSON Faceting API supports modifying domains through its `domain` property. This is discussed in more detail <<json-faceting-domain-changes.adoc#json-faceting-domain-changes,here>>.
In addition to this default behavior, domains can also be widened, narrowed, or changed entirely. The JSON Faceting API supports modifying domains through its `domain` property. This is discussed in more detail <<json-faceting-domain-changes.adoc#,here>>.
== Special Stat Facet Functions
@ -842,7 +842,7 @@ Most stat facet functions (`avg`, `sumsq`, etc.) allow users to perform math com
=== uniqueBlock() and Block Join Counts
When a collection contains <<indexing-nested-documents.adoc#indexing-nested-documents, Nested Documents>>, the `blockChildren` and `blockParent` <<json-faceting-domain-changes.adoc#block-join-domain-changes, domain changes>> can be useful when searching for parent documents and you want to compute stats against all of the affected child documents (or vice versa).
When a collection contains <<indexing-nested-documents.adoc#, Nested Documents>>, the `blockChildren` and `blockParent` <<json-faceting-domain-changes.adoc#block-join-domain-changes, domain changes>> can be useful when searching for parent documents and you want to compute stats against all of the affected child documents (or vice versa).
But if you only need to know the _count_ of all the blocks that exist in the current domain, a more efficient option is the `uniqueBlock()` aggregate function.
Suppose we have products with multiple SKUs, and we want to count products for each color.
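A hedged sketch of that request (the collection, the `doc_type` and `color_s` fields, and the schema are assumptions):

[source,bash]
----
# Facet over SKU colors, counting each parent product at most once
curl http://localhost:8983/solr/products/query -d '
{
  "query": "doc_type:sku",
  "facet": {
    "colors": {
      "type": "terms",
      "field": "color_s",
      "facet": {
        "productCount": "uniqueBlock(_root_)"
      }
    }
  }
}'
----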

View File

@ -214,7 +214,7 @@ NOTE: While a `query` domain can be combined with an additional domain `filter`,
== Block Join Domain Changes
When a collection contains <<indexing-nested-documents.adoc#indexing-nested-documents, Nested Documents>>, the `blockChildren` or `blockParent` domain options can be used to transform an existing domain containing one type of document, into a domain containing the documents with the specified relationship (child or parent of) to the documents from the original domain.
When a collection contains <<indexing-nested-documents.adoc#, Nested Documents>>, the `blockChildren` or `blockParent` domain options can be used to transform an existing domain containing one type of document, into a domain containing the documents with the specified relationship (child or parent of) to the documents from the original domain.
Both of these options work similarly to the corresponding <<other-parsers.adoc#block-join-query-parsers,Block Join Query Parsers>> by taking in a single String query that exclusively matches all parent documents in the collection. If `blockParent` is used, then the resulting domain will contain all parent documents of the children from the original domain. If `blockChildren` is used, then the resulting domain will contain all child documents of the parents from the original domain. Quite often facets over child documents need to be counted in parent documents; this can be done with `uniqueBlock(\_root_)` as described in <<json-facet-api#uniqueblock-and-block-join-counts, Block Join Facet Counts>>.
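A hedged sketch combining these pieces (the collection, the `doc_type:product` parent filter, and the field names are assumptions):

[source,bash]
----
# Match child (SKU) docs, then facet over their parent products
curl http://localhost:8983/solr/products/query -d '
{
  "query": "color_s:blue",
  "facet": {
    "brands": {
      "type": "terms",
      "field": "brand_s",
      "domain": { "blockParent": "doc_type:product" }
    }
  }
}'
----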

View File

@ -23,9 +23,9 @@ Queries and filters provided in JSON requests can be specified using a rich, pow
== Query DSL Structure
The JSON Request API accepts query values in three different formats (a complete request using the JSON object form is sketched after this list):
* A valid <<the-standard-query-parser.adoc#the-standard-query-parser,query string>> that uses the default `deftype` (`lucene`, in most cases). e.g., `title:solr`.
* A valid <<the-standard-query-parser.adoc#,query string>> that uses the default `deftype` (`lucene`, in most cases). e.g., `title:solr`.
* A valid <<local-parameters-in-queries.adoc#local-parameters-in-queries,local parameters query string>> that specifies its `deftype` explicitly. e.g., `{!dismax qf=title}solr`.
* A valid <<local-parameters-in-queries.adoc#,local parameters query string>> that specifies its `deftype` explicitly. e.g., `{!dismax qf=title}solr`.
* A valid JSON object with the name of the query parser and any relevant parameters. e.g., `{ "lucene": {"df":"title", "query":"solr"}}`.
** The top level "query" JSON block generally only has a single property representing the name of the query parser to use. The value for the query parser property is a child block containing any relevant parameters as JSON properties. The whole structure is analogous to a "local-params" query string. The query itself (often represented in local params using the name `v`) is specified with the key `query` instead.
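Putting the object form to work, a minimal sketch against the `techproducts` example collection:

[source,bash]
----
# Equivalent to the local-params form {!lucene df=title}solr
curl http://localhost:8983/solr/techproducts/query -d '
{
  "query": { "lucene": { "df": "title", "query": "solr" } }
}'
----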

View File

@ -18,13 +18,13 @@
If you are using Kerberos to secure your network environment, the Kerberos authentication plugin can be used to secure a Solr cluster.
This allows Solr to use a Kerberos service principal and keytab file to authenticate with ZooKeeper and between nodes of the Solr cluster (if applicable). Users of the Admin UI and all clients (such as <<using-solrj.adoc#using-solrj,SolrJ>>) would also need to have a valid ticket before being able to use the UI or send requests to Solr.
This allows Solr to use a Kerberos service principal and keytab file to authenticate with ZooKeeper and between nodes of the Solr cluster (if applicable). Users of the Admin UI and all clients (such as <<using-solrj.adoc#,SolrJ>>) would also need to have a valid ticket before being able to use the UI or send requests to Solr.
Support for the Kerberos authentication plugin is available in SolrCloud mode or standalone mode.
[TIP]
====
If you are using Solr with a Hadoop cluster secured with Kerberos and intend to store your Solr indexes in HDFS, also see the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>> for additional steps to configure Solr for that purpose. The instructions on this page apply only to scenarios where Solr will be secured with Kerberos. If you only need to store your indexes in a Kerberized HDFS system, please see the other section referenced above.
If you are using Solr with a Hadoop cluster secured with Kerberos and intend to store your Solr indexes in HDFS, also see the section <<running-solr-on-hdfs.adoc#,Running Solr on HDFS>> for additional steps to configure Solr for that purpose. The instructions on this page apply only to scenarios where Solr will be secured with Kerberos. If you only need to store your indexes in a Kerberized HDFS system, please see the other section referenced above.
====
== How Solr Works With Kerberos
@ -33,7 +33,7 @@ When setting up Solr to use Kerberos, configurations are put in place for Solr t
=== security.json
The Solr authentication model uses a file called `security.json`. A description of this file and how it is created and maintained is covered in the section <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,Authentication and Authorization Plugins>>. If this file is created after an initial startup of Solr, a restart of each node of the system is required.
The Solr authentication model uses a file called `security.json`. A description of this file and how it is created and maintained is covered in the section <<authentication-and-authorization-plugins.adoc#,Authentication and Authorization Plugins>>. If this file is created after an initial startup of Solr, a restart of each node of the system is required.
=== Service Principals and Keytab Files
@ -176,11 +176,11 @@ server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put /security.
If you are using Solr in standalone mode, you need to create the `security.json` file and put it in your `$SOLR_HOME` directory.
More details on how to use a `/security.json` file in Solr are available in the section <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,Authentication and Authorization Plugins>>.
More details on how to use a `/security.json` file in Solr are available in the section <<authentication-and-authorization-plugins.adoc#,Authentication and Authorization Plugins>>.
[IMPORTANT]
====
If you already have a `/security.json` file in ZooKeeper, download the file, add or modify the authentication section and upload it back to ZooKeeper using the <<command-line-utilities.adoc#command-line-utilities,Command Line Utilities>> available in Solr.
If you already have a `/security.json` file in ZooKeeper, download the file, add or modify the authentication section and upload it back to ZooKeeper using the <<command-line-utilities.adoc#,Command Line Utilities>> available in Solr.
====
=== Define a JAAS Configuration File
@ -217,7 +217,7 @@ The main properties we are concerned with are the `keyTab` and `principal` prope
=== Solr Startup Parameters
While starting up Solr, the following host-specific parameters need to be passed. These parameters can be passed at the command line with the `bin/solr` start command (see <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
While starting up Solr, the following host-specific parameters need to be passed. These parameters can be passed at the command line with the `bin/solr` start command (see <<solr-control-script-reference.adoc#,Solr Control Script Reference>> for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
`solr.kerberos.name.rules`::
Used to map Kerberos principals to short names. Default value is `DEFAULT`. Example of a name rule: `RULE:[1:$1@$0](.\*EXAMPLE.COM)s/@.*//`.
@ -269,7 +269,7 @@ There are a few use cases for Solr where this might be helpful:
* When load on the Kerberos server is high. Delegation tokens can reduce the load because they do not access the server after the first request.
* If requests or permissions need to be delegated to another user.
To enable delegation tokens, several parameters must be defined. These parameters can be passed at the command line with the `bin/solr` start command (see <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
To enable delegation tokens, several parameters must be defined. These parameters can be passed at the command line with the `bin/solr` start command (see <<solr-control-script-reference.adoc#,Solr Control Script Reference>> for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
`solr.kerberos.delegation.token.enabled`::
This is `false` by default, set to `true` to enable delegation tokens. This parameter is required if you want to enable tokens.

View File

@ -23,13 +23,13 @@ For the European languages, tokenization is fairly straightforward. Tokens are d
In other languages the tokenization rules are often not so simple. Some European languages may also require special tokenization rules, such as rules for decompounding German words.
For information about language detection at index time, see <<detecting-languages-during-indexing.adoc#detecting-languages-during-indexing,Detecting Languages During Indexing>>.
For information about language detection at index time, see <<detecting-languages-during-indexing.adoc#,Detecting Languages During Indexing>>.
== KeywordMarkerFilterFactory
Protects words from being modified by stemmers. A customized protected word list may be specified with the "protected" attribute in the schema. Any words in the protected word list will not be modified by any stemmer in Solr.
A sample Solr `protwords.txt` with comments can be found in the `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> directory:
A sample Solr `protwords.txt` with comments can be found in the `sample_techproducts_configs` <<config-sets.adoc#,configset>> directory:
[.dynamic-tabs]
--
@ -166,7 +166,7 @@ Compound words are most commonly found in Germanic languages.
*Arguments:*
`dictionary`:: (required) The path of a file that contains a list of simple words, one per line. Blank lines and lines that begin with "#" are ignored. See <<resource-loading.adoc#resource-loading,Resource Loading>> for more information.
`dictionary`:: (required) The path of a file that contains a list of simple words, one per line. Blank lines and lines that begin with "`#`" are ignored. See <<resource-loading.adoc#,Resource Loading>> for more information.
`minWordSize`:: (integer, default 5) Any token shorter than this is not decompounded.
@ -497,9 +497,9 @@ The OpenNLP Tokenizer takes two language-specific binary model files as paramete
*Arguments:*
`sentenceModel`:: (required) The path of a language-specific OpenNLP sentence detection model file. See <<resource-loading.adoc#resource-loading,Resource Loading>> for more information.
`sentenceModel`:: (required) The path of a language-specific OpenNLP sentence detection model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
`tokenizerModel`:: (required) The path of a language-specific OpenNLP tokenization model file. See <<resource-loading.adoc#resource-loading,Resource Loading>> for more information.
`tokenizerModel`:: (required) The path of a language-specific OpenNLP tokenization model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
*Example:*
@ -541,7 +541,7 @@ NOTE: Lucene currently does not index token types, so if you want to keep this i
*Arguments:*
`posTaggerModel`:: (required) The path of a language-specific OpenNLP POS tagger model file. See <<resource-loading.adoc#resource-loading,Resource Loading>> for more information.
`posTaggerModel`:: (required) The path of a language-specific OpenNLP POS tagger model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
*Examples:*
@ -636,7 +636,7 @@ NOTE: Lucene currently does not index token types, so if you want to keep this i
*Arguments:*
`chunkerModel`:: (required) The path of a language-specific OpenNLP phrase chunker model file. See <<resource-loading.adoc#resource-loading,Resource Loading>> for more information.
`chunkerModel`:: (required) The path of a language-specific OpenNLP phrase chunker model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
*Examples*:
@ -700,9 +700,9 @@ This filter replaces the text of each token with its lemma. Both a dictionary-ba
Either `dictionary` or `lemmatizerModel` must be provided, and both may be provided - see the examples below:
`dictionary`:: (optional) The path of a lemmatization dictionary file. See <<resource-loading.adoc#resource-loading,Resource Loading>> for more information. The dictionary file must be encoded as UTF-8, with one entry per line, in the form `word[tab]lemma[tab]part-of-speech`, e.g., `wrote[tab]write[tab]VBD`.
`dictionary`:: (optional) The path of a lemmatization dictionary file. See <<resource-loading.adoc#,Resource Loading>> for more information. The dictionary file must be encoded as UTF-8, with one entry per line, in the form `word[tab]lemma[tab]part-of-speech`, e.g., `wrote[tab]write[tab]VBD`.
`lemmatizerModel`:: (optional) The path of a language-specific OpenNLP lemmatizer model file. See <<resource-loading.adoc#resource-loading,Resource Loading>> for more information.
`lemmatizerModel`:: (optional) The path of a language-specific OpenNLP lemmatizer model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
*Examples:*
@ -1887,7 +1887,7 @@ Removes terms with one of the configured parts-of-speech. `JapaneseTokenizer` an
*Arguments:*
`tags`:: filename for a list of parts-of-speech for which to remove terms; see `conf/lang/stoptags_ja.txt` in the `sample_techproducts_config` <<config-sets.adoc#config-sets,configset>> for an example.
`tags`:: filename for a list of parts-of-speech for which to remove terms; see `conf/lang/stoptags_ja.txt` in the `sample_techproducts_config` <<config-sets.adoc#,configset>> for an example.
`enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens. *This argument is invalid if `luceneMatchVersion` is `5.0` or later.*

View File

@ -24,7 +24,7 @@ The module also supports feature extraction inside Solr. The only thing you need
=== Re-Ranking
Re-Ranking allows you to run a simple query for matching documents and then re-rank the top N documents using the scores from a different, more complex query. This page describes the use of *LTR* complex queries; information on other rank queries included in the Solr distribution can be found on the <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>> page.
Re-Ranking allows you to run a simple query for matching documents and then re-rank the top N documents using the scores from a different, more complex query. This page describes the use of *LTR* complex queries; information on other rank queries included in the Solr distribution can be found on the <<query-re-ranking.adoc#,Query Re-Ranking>> page.
=== Learning To Rank Models
@ -83,7 +83,7 @@ The LTR contrib module includes several feature classes as well as support for c
==== Feature Extraction
The ltr contrib module includes a <<transforming-result-documents.adoc#transforming-result-documents,[features>> transformer] to support the calculation and return of feature values for https://en.wikipedia.org/wiki/Feature_extraction[feature extraction] purposes, especially when you do not yet have an actual reranking model.
The ltr contrib module includes a <<transforming-result-documents.adoc#,[features>> transformer] to support the calculation and return of feature values for https://en.wikipedia.org/wiki/Feature_extraction[feature extraction] purposes, especially when you do not yet have an actual reranking model.
==== Feature Selection and Model Training
@ -662,7 +662,7 @@ As an alternative to the above-described `DefaultWrapperModel`, it is possible t
=== Applying Changes
The feature store and the model store are both <<managed-resources.adoc#managed-resources,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
The feature store and the model store are both <<managed-resources.adoc#,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
=== LTR Examples
View File
@ -21,12 +21,12 @@ This section describes how to set up distribution and replication in Solr. It is
This section covers the following topics:
<<introduction-to-scaling-and-distribution.adoc#introduction-to-scaling-and-distribution,Introduction to Scaling and Distribution>>: Conceptual information about distribution and replication in Solr.
<<introduction-to-scaling-and-distribution.adoc#,Introduction to Scaling and Distribution>>: Conceptual information about distribution and replication in Solr.
<<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>: Detailed information about implementing distributed searching in Solr.
<<distributed-search-with-index-sharding.adoc#,Distributed Search with Index Sharding>>: Detailed information about implementing distributed searching in Solr.
<<index-replication.adoc#index-replication,Index Replication>>: Detailed information about replicating your Solr indexes.
<<index-replication.adoc#,Index Replication>>: Detailed information about replicating your Solr indexes.
<<combining-distribution-and-replication.adoc#combining-distribution-and-replication,Combining Distribution and Replication>>: Detailed information about replicating shards in a distributed index.
<<combining-distribution-and-replication.adoc#,Combining Distribution and Replication>>: Detailed information about replicating shards in a distributed index.
<<merging-indexes.adoc#merging-indexes,Merging Indexes>>: Information about combining separate indexes in Solr.
<<merging-indexes.adoc#,Merging Indexes>>: Information about combining separate indexes in Solr.
View File
@ -45,7 +45,7 @@ Solr plugins won't work in these locations.
== Lib Directives in SolrConfig
_Both_ plugin and <<resource-loading.adoc#resource-loading,resource>> file paths are configurable via `<lib/>` directives in `solrconfig.xml`.
_Both_ plugin and <<resource-loading.adoc#,resource>> file paths are configurable via `<lib/>` directives in `solrconfig.xml`.
When a directive matches a directory, then resources can be resolved from it.
When a directive matches a `.jar` file, Solr plugins and their dependencies are resolved from it.
Resources can be placed in a `.jar` too but that's unusual.
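As a rough sketch (the paths here are placeholders, and the directives belong inside the `<config>` element of `solrconfig.xml`), the two directive styles might look like this:

[source,bash]
----
# print the two <lib/> directive styles for reference; merge them into solrconfig.xml by hand
cat <<'EOF'
<lib dir="/opt/solr-extra/resources" />       <!-- matches a directory: resources are resolved from it -->
<lib path="/opt/solr-extra/my-plugin.jar" />  <!-- matches a .jar: plugin classes are resolved from it -->
EOF
----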
View File
@ -52,7 +52,7 @@ is equivalent to:
`q={!type=dismax qf=myfield}solr rocks`
If no "type" is specified (either explicitly or implicitly) then the <<the-standard-query-parser.adoc#the-standard-query-parser,lucene parser>> is used by default. Thus
If no "type" is specified (either explicitly or implicitly) then the <<the-standard-query-parser.adoc#,lucene parser>> is used by default. Thus
`fq={!df=summary}solr rocks`
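As a quick illustration (a sketch assuming a collection named `mycoll` with `myfield` and `summary` fields), both forms can be sent as ordinary request parameters:

[source,bash]
----
# explicit parser selection via local parameters
curl 'http://localhost:8983/solr/mycoll/select' \
  --data-urlencode 'q={!type=dismax qf=myfield}solr rocks'

# no type given: the fq below is handled by the default lucene parser, with df overridden
curl 'http://localhost:8983/solr/mycoll/select' \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'fq={!df=summary}solr rocks'
----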
View File
@ -32,4 +32,4 @@ When you select the *Level* link on the left, you see the hierarchy of classpath
.Log level selection
image::images/logging/level_menu.png[image,width=589,height=250]
For an explanation of the various logging levels, see <<configuring-logging.adoc#configuring-logging,Configuring Logging>>.
For an explanation of the various logging levels, see <<configuring-logging.adoc#,Configuring Logging>>.
View File
@ -18,8 +18,8 @@
This section of the user guide provides an introduction to Solr log analytics.
NOTE: This is an appendix of the <<math-expressions.adoc#math-expressions,Visual Guide to Streaming Expressions and Math Expressions>>. All the functions described below are covered in detail in the guide.
See the <<math-start.adoc#math-start,Getting Started>> chapter to learn how to get started with visualizations and Apache Zeppelin.
NOTE: This is an appendix of the <<math-expressions.adoc#,Visual Guide to Streaming Expressions and Math Expressions>>. All the functions described below are covered in detail in the guide.
See the <<math-start.adoc#,Getting Started>> chapter to learn how to get started with visualizations and Apache Zeppelin.
== Loading
View File
@ -463,7 +463,7 @@ each cluster. In this example the key features of the centroids are extracted
to represent the key phrases for clusters of TF-IDF term vectors.
NOTE: The example below works with TF-IDF _term vectors_.
The section <<term-vectors.adoc#term-vectors,Text Analysis and Term Vectors>> offers
The section <<term-vectors.adoc#,Text Analysis and Term Vectors>> offers
a full explanation of these features.
In the example the `search` function returns documents where the `review_t` field matches the phrase "star wars".
View File
@ -18,7 +18,7 @@
There are some major changes in Solr 6 to consider before starting to migrate your configurations and indexes.
There are many hundreds of changes, so a thorough review of the <<solr-upgrade-notes.adoc#solr-upgrade-notes,Solr Upgrade Notes>> section as well as the {solr-javadocs}/changes//Changes.html[CHANGES.txt] file in your Solr instance will help you plan your migration to Solr 6. This section attempts to highlight some of the major changes you should be aware of.
There are many hundreds of changes, so a thorough review of the <<solr-upgrade-notes.adoc#,Solr Upgrade Notes>> section as well as the {solr-javadocs}/changes//Changes.html[CHANGES.txt] file in your Solr instance will help you plan your migration to Solr 6. This section attempts to highlight some of the major changes you should be aware of.
== Highlights of New Features in Solr 6
@ -27,7 +27,7 @@ Some of the major improvements in Solr 6 include:
[[major-5-6-streaming]]
=== Streaming Expressions
Introduced in Solr 5, <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>> allow querying Solr and getting results as a stream of data, sorted and aggregated as requested.
Introduced in Solr 5, <<streaming-expressions.adoc#,Streaming Expressions>> allow querying Solr and getting results as a stream of data, sorted and aggregated as requested.
Several new expression types have been added in Solr 6:
@ -40,7 +40,7 @@ Several new expression types have been added in Solr 6:
[[major-5-6-parallel-sql]]
=== Parallel SQL Interface
Built on streaming expressions, new in Solr 6 is a <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL interface>> for sending SQL queries to Solr. SQL statements are compiled to streaming expressions on the fly, providing the full range of aggregations available to streaming expression requests. A JDBC driver is included, which allows using SQL clients and database visualization tools to query your Solr index and import data to other systems.
Built on streaming expressions, new in Solr 6 is a <<parallel-sql-interface.adoc#,Parallel SQL interface>> for sending SQL queries to Solr. SQL statements are compiled to streaming expressions on the fly, providing the full range of aggregations available to streaming expression requests. A JDBC driver is included, which allows using SQL clients and database visualization tools to query your Solr index and import data to other systems.
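For example (a sketch assuming a `techproducts` collection; the field name is a placeholder), a SQL statement can be posted directly to the `/sql` handler:

[source,bash]
----
# the statement is compiled to a streaming expression on the fly
curl 'http://localhost:8983/solr/techproducts/sql' \
  --data-urlencode 'stmt=SELECT manu, COUNT(*) FROM techproducts GROUP BY manu LIMIT 10'
----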
=== Cross Data Center Replication
@ -53,11 +53,11 @@ A new <<other-parsers.adoc#graph-query-parser,`graph` query parser>> makes it po
[[major-5-6-docvalues]]
=== DocValues
Most non-text field types in the Solr sample configsets now default to using <<docvalues.adoc#docvalues,DocValues>>.
Most non-text field types in the Solr sample configsets now default to using <<docvalues.adoc#,DocValues>>.
== Java 8 Required
The minimum supported version of Java for Solr 6 (and the <<using-solrj.adoc#using-solrj,SolrJ client libraries>>) is now Java 8.
The minimum supported version of Java for Solr 6 (and the <<using-solrj.adoc#,SolrJ client libraries>>) is now Java 8.
== Index Format Changes
@ -65,25 +65,25 @@ Solr 6 has no support for reading Lucene/Solr 4.x and earlier indexes. Be sure t
== Managed Schema is now the Default
Solr's default behavior when a `solrconfig.xml` does not explicitly define a `<schemaFactory/>` is now dependent on the `luceneMatchVersion` specified in that `solrconfig.xml`. When `luceneMatchVersion < 6.0`, `ClassicIndexSchemaFactory` will continue to be used for back compatibility; otherwise an instance of <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,`ManagedIndexSchemaFactory`>> will be used.
Solr's default behavior when a `solrconfig.xml` does not explicitly define a `<schemaFactory/>` is now dependent on the `luceneMatchVersion` specified in that `solrconfig.xml`. When `luceneMatchVersion < 6.0`, `ClassicIndexSchemaFactory` will continue to be used for back compatibility; otherwise an instance of <<schema-factory-definition-in-solrconfig.adoc#,`ManagedIndexSchemaFactory`>> will be used.
The most notable impacts of this change are:
* Existing `solrconfig.xml` files that are modified to use `luceneMatchVersion >= 6.0`, but do _not_ have an explicitly configured `ClassicIndexSchemaFactory`, will have their `schema.xml` file automatically upgraded to a `managed-schema` file.
* Schema modifications via the <<schema-api.adoc#schema-api,Schema API>> will now be enabled by default.
* Schema modifications via the <<schema-api.adoc#,Schema API>> will now be enabled by default.
Please review the <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> section for more details.
Please review the <<schema-factory-definition-in-solrconfig.adoc#,Schema Factory Definition in SolrConfig>> section for more details.
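For instance (a sketch; the collection and field names are placeholders), a field can now be added through the Schema API without any extra configuration:

[source,bash]
----
# add a field via the Schema API, which the managed schema enables by default
curl -X POST -H 'Content-type:application/json' \
  --data-binary '{"add-field":{"name":"publisher","type":"string","stored":true}}' \
  'http://localhost:8983/solr/mycoll/schema'
----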
== Default Similarity Changes
Solr's default behavior when a Schema does not explicitly define a global <<other-schema-elements.adoc#other-schema-elements,`<similarity/>`>> is now dependent on the `luceneMatchVersion` specified in the `solrconfig.xml`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarityFactory` will be used; otherwise an instance of `SchemaSimilarityFactory` will be used. Most notably, this change means that users can take advantage of per Field Type similarity declarations, without needing to also explicitly declare a global usage of `SchemaSimilarityFactory`.
Solr's default behavior when a Schema does not explicitly define a global <<other-schema-elements.adoc#,`<similarity/>`>> is now dependent on the `luceneMatchVersion` specified in the `solrconfig.xml`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarityFactory` will be used; otherwise an instance of `SchemaSimilarityFactory` will be used. Most notably, this change means that users can take advantage of per Field Type similarity declarations, without needing to also explicitly declare a global usage of `SchemaSimilarityFactory`.
Regardless of whether it is explicitly declared, or used as an implicit global default, `SchemaSimilarityFactory`'s implicit behavior when a Field Type does not declare an explicit `<similarity />` has also been changed to depend on the `luceneMatchVersion`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarity` will be used, otherwise an instance of `BM25Similarity` will be used. A `defaultSimFromFieldType` init option may be specified on the `SchemaSimilarityFactory` declaration to change this behavior. Please review the `SchemaSimilarityFactory` javadocs for more details.
== Replica & Shard Delete Command Changes
DELETESHARD and DELETEREPLICA now default to deleting the instance directory, data directory, and index directory for any replica they delete. Please review the <<collections-api.adoc#collections-api,Collection API>> documentation for details on new request parameters to prevent this behavior if you wish to keep all data on disk when using these commands.
DELETESHARD and DELETEREPLICA now default to deleting the instance directory, data directory, and index directory for any replica they delete. Please review the <<collections-api.adoc#,Collection API>> documentation for details on new request parameters to prevent this behavior if you wish to keep all data on disk when using these commands.
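For example (a hedged sketch; the collection, shard, and replica names are placeholders), the new parameters let you keep everything on disk:

[source,bash]
----
# delete a replica but keep its instance directory, data directory, and index
curl 'http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycoll&shard=shard1&replica=core_node2&deleteInstanceDir=false&deleteDataDir=false&deleteIndex=false'
----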
== facet.date.* Parameters Removed
The `facet.date` parameter (and associated `facet.date.*` parameters) that were deprecated in Solr 3.x have been removed completely. If you have not yet switched to using the equivalent <<faceting.adoc#faceting,`facet.range`>> functionality you must do so now before upgrading.
The `facet.date` parameter (and associated `facet.date.*` parameters) that were deprecated in Solr 3.x have been removed completely. If you have not yet switched to using the equivalent <<faceting.adoc#,`facet.range`>> functionality you must do so now before upgrading.
View File
@ -21,15 +21,15 @@ Solr 7 is a major new release of Solr which introduces new features and a number
== Upgrade Planning
There are major changes in Solr 7 to consider before starting to migrate your configurations and indexes. This page is designed to highlight the biggest changes - new features you may want to be aware of, but also changes in default behavior and deprecated features that have been removed.
There are many hundreds of changes in Solr 7, however, so a thorough review of the <<solr-upgrade-notes.adoc#solr-upgrade-notes,Solr Upgrade Notes>> as well as the {solr-javadocs}/changes//Changes.html[CHANGES.txt] file in your Solr instance will help you plan your migration to Solr 7. This section attempts to highlight some of the major changes you should be aware of.
There are many hundreds of changes in Solr 7, however, so a thorough review of the <<solr-upgrade-notes.adoc#,Solr Upgrade Notes>> as well as the {solr-javadocs}/changes//Changes.html[CHANGES.txt] file in your Solr instance will help you plan your migration to Solr 7. This section attempts to highlight some of the major changes you should be aware of.
You should also consider all changes that have been made to Solr in any version you have not upgraded to already. For example, if you are currently using Solr 6.2, you should review changes made in all subsequent 6.x releases in addition to changes for 7.0.
<<reindexing.adoc#upgrades,Reindexing>> your data is considered the best practice and you should try to do so if possible. However, if reindexing is not feasible, keep in mind you can only upgrade one major version at a time. Thus, Solr 6.x indexes will be compatible with Solr 7 but Solr 5.x indexes will not be.
If you do not reindex now, keep in mind that you will need to either reindex your data or upgrade your indexes before you will be able to move to Solr 8 when it is released in the future. See the section <<indexupgrader-tool.adoc#indexupgrader-tool,IndexUpgrader Tool>> for more details on how to upgrade your indexes.
If you do not reindex now, keep in mind that you will need to either reindex your data or upgrade your indexes before you will be able to move to Solr 8 when it is released in the future. See the section <<indexupgrader-tool.adoc#,IndexUpgrader Tool>> for more details on how to upgrade your indexes.
See also the section <<upgrading-a-solr-cluster.adoc#upgrading-a-solr-cluster,Upgrading a Solr Cluster>> for details on how to upgrade a SolrCloud cluster.
See also the section <<upgrading-a-solr-cluster.adoc#,Upgrading a Solr Cluster>> for details on how to upgrade a SolrCloud cluster.
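As a rough sketch of the index-upgrade path mentioned above (the Lucene jar versions and the index path are placeholders), the `IndexUpgrader` tool is run against a core's index directory while Solr is stopped:

[source,bash]
----
# rewrite a 6.x index in the current format so the next major version can read it
java -cp lucene-core-7.7.3.jar:lucene-backward-codecs-7.7.3.jar \
  org.apache.lucene.index.IndexUpgrader -delete-prior-commits \
  /var/solr/data/mycore/data/index
----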
== New Features & Enhancements
@ -54,13 +54,13 @@ At its core, Solr autoscaling provides users with a rule syntax to define prefer
** The documentation for this component is in progress; until it is available, please refer to https://issues.apache.org/jira/browse/SOLR-11144[SOLR-11144] for more details.
* There were several other new features released in earlier 6.x releases, which you may have missed:
** <<learning-to-rank.adoc#learning-to-rank,Learning to Rank>>
** <<learning-to-rank.adoc#,Learning to Rank>>
** <<highlighting.adoc#the-unified-highlighter,Unified Highlighter>>
** <<metrics-reporting.adoc#metrics-reporting,Metrics API>>. See also information about related deprecations in the section <<JMX Support and MBeans>> below.
** <<metrics-reporting.adoc#,Metrics API>>. See also information about related deprecations in the section <<JMX Support and MBeans>> below.
** <<other-parsers.adoc#payload-query-parsers,Payload queries>>
** <<stream-evaluator-reference.adoc#stream-evaluator-reference,Streaming Evaluators>>
** <<v2-api.adoc#v2-api,/v2 API>>
** <<graph-traversal.adoc#graph-traversal,Graph streaming expressions>>
** <<stream-evaluator-reference.adoc#,Streaming Evaluators>>
** <<v2-api.adoc#,/v2 API>>
** <<graph-traversal.adoc#,Graph streaming expressions>>
== Configuration and Default Changes
@ -79,7 +79,7 @@ To improve the functionality of Schemaless Mode, Solr now behaves differently wh
* Incoming fields will be indexed as `text_general` by default (you can change this). The name of the field will be the same as the field name defined in the document.
* A copy field rule will be inserted into your schema to copy the new `text_general` field to a new field with the name `<name>_str`. This field's type will be a `strings` field (to allow for multiple values). The first 256 characters of the text field will be inserted into the new `strings` field.
This behavior can be customized if you wish to remove the copy field rule, or to change the number of characters inserted to the string field, or the field type used. See the section <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>> for details.
This behavior can be customized if you wish to remove the copy field rule, or to change the number of characters inserted to the string field, or the field type used. See the section <<schemaless-mode.adoc#,Schemaless Mode>> for details.
TIP: Because copy field rules can slow indexing and increase index size, it's recommended you only use copy fields when you need to. If you do not need to sort or facet on a field, you should remove the automatically-generated copy field rule.
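To see this behavior in action (a sketch assuming a collection named `gettingstarted` running in schemaless mode):

[source,bash]
----
# index a document containing a previously unseen field
curl -X POST -H 'Content-type:application/json' \
  --data-binary '[{"id":"1","publisher":"Example Press"}]' \
  'http://localhost:8983/solr/gettingstarted/update?commit=true'
# 'publisher' is indexed as text_general, and the generated copy field rule
# also fills a multivalued 'publisher_str' field with the first 256 characters
----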
@ -145,7 +145,7 @@ Choose one of these field types instead:
* `SpatialRecursivePrefixTreeField`
* `RptWithGeometrySpatialField`
See the section <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
See the section <<spatial-search.adoc#,Spatial Search>> for more information.
=== JMX Support and MBeans
* The `<jmx>` element in `solrconfig.xml` has been removed in favor of `<metrics><reporter>` elements defined in `solr.xml`.
@ -172,7 +172,7 @@ The following changes were made in SolrJ.
* The `defaultSearchField` parameter in the schema is no longer supported. Use the `df` parameter instead. This option had been deprecated for several releases. See the section <<the-standard-query-parser.adoc#standard-query-parser-parameters,Standard Query Parser Parameters>> for more information.
* The `mergePolicy`, `mergeFactor` and `maxMergeDocs` parameters have been removed and are no longer supported. You should define a `mergePolicyFactory` instead. See the section <<indexconfig-in-solrconfig.adoc#mergepolicyfactory,the mergePolicyFactory>> for more information.
* The PostingsSolrHighlighter has been deprecated. It's recommended that you move to using the UnifiedHighlighter instead. See the section <<highlighting.adoc#the-unified-highlighter,Unified Highlighter>> for more information about this highlighter.
* Index-time boosts have been removed from Lucene, and are no longer available from Solr. If any boosts are provided, they will be ignored by the indexing chain. As a replacement, index-time scoring factors should be indexed in a separate field and combined with the query score using a function query. See the section <<function-queries.adoc#function-queries,Function Queries>> for more information.
* Index-time boosts have been removed from Lucene, and are no longer available from Solr. If any boosts are provided, they will be ignored by the indexing chain. As a replacement, index-time scoring factors should be indexed in a separate field and combined with the query score using a function query. See the section <<function-queries.adoc#,Function Queries>> for more information.
* The `StandardRequestHandler` is deprecated. Use `SearchHandler` instead.
* To improve parameter consistency in the Collections API, the parameter names `fromNode` for the MOVEREPLICA command and `source`, `target` for the REPLACENODE command have been deprecated and replaced with `sourceNode` and `targetNode` instead. The old names will continue to work for back-compatibility but they will be removed in Solr 8.
* The unused `valType` option has been removed from ExternalFileField, if you have this in your schema you can safely remove it.
@ -188,7 +188,7 @@ Note again that this is not a complete list of all changes that may impact your
* If you use the JSON Facet API (json.facet) with `method=stream`, you must now set `sort='index asc'` to get the streaming behavior; otherwise it won't stream. Reminder: `method` is a hint that doesn't change defaults of other parameters.
* If you use the JSON Facet API (json.facet) to facet on a numeric field and if you use `mincount=0` or if you set the prefix, you will now get an error as these options are incompatible with numeric faceting.
* Solr's logging verbosity at the INFO level has been greatly reduced, and you may need to update the log configs to use the DEBUG level to see all the logging messages you used to see at INFO level before.
* We are no longer backing up `solr.log` and `solr_gc.log` files in date-stamped copies forever. If you relied on the `solr_log_<date>` or `solr_gc_log_<date>` being in the logs folder that will no longer be the case. See the section <<configuring-logging.adoc#configuring-logging,Configuring Logging>> for details on how log rotation works as of Solr 6.3.
* We are no longer backing up `solr.log` and `solr_gc.log` files in date-stamped copies forever. If you relied on the `solr_log_<date>` or `solr_gc_log_<date>` being in the logs folder that will no longer be the case. See the section <<configuring-logging.adoc#,Configuring Logging>> for details on how log rotation works as of Solr 6.3.
* The create/deleteCollection methods on `MiniSolrCloudCluster` have been deprecated. Clients should instead use the `CollectionAdminRequest` API. In addition, `MiniSolrCloudCluster#uploadConfigDir(File, String)` has been deprecated in favour of `#uploadConfigSet(Path, String)`.
* The `bin/solr.in.sh` (`bin/solr.in.cmd` on Windows) is now completely commented by default. Previously, this wasn't so, which had the effect of masking existing environment variables.
* The `\_version_` field is no longer indexed and is now defined with `indexed=false` by default, because the field has DocValues enabled.
@ -198,7 +198,7 @@ Note again that this is not a complete list of all changes that may impact your
** The metrics "75thPctlRequestTime", "95thPctlRequestTime", "99thPctlRequestTime" and "999thPctlRequestTime" in Overseer Status API have been renamed to "75thPcRequestTime", "95thPcRequestTime" and so on for consistency with stats output in other parts of Solr.
** The metrics "avgRequestsPerMinute", "5minRateRequestsPerMinute" and "15minRateRequestsPerMinute" have been replaced by corresponding per-second rates viz. "avgRequestsPerSecond", "5minRateRequestsPerSecond" and "15minRateRequestsPerSecond" for consistency with stats output in other parts of Solr.
* A new highlighter named UnifiedHighlighter has been added. You are encouraged to try out the UnifiedHighlighter by setting `hl.method=unified` and report feedback. It's more efficient/faster than the other highlighters, especially compared to the original Highlighter. See `HighlightParams.java` for a listing of highlight parameters annotated with which highlighters use them. `hl.useFastVectorHighlighter` is now considered deprecated in lieu of `hl.method=fastVector`.
* The <<query-settings-in-solrconfig.adoc#query-settings-in-solrconfig,`maxWarmingSearchers` parameter>> now defaults to 1, and more importantly commits will now block if this limit is exceeded instead of throwing an exception (a good thing). Consequently there is no longer a risk in overlapping commits. Nonetheless users should continue to avoid excessive committing. Users are advised to remove any pre-existing `maxWarmingSearchers` entries from their `solrconfig.xml` files.
* The <<query-settings-in-solrconfig.adoc#,`maxWarmingSearchers` parameter>> now defaults to 1, and more importantly commits will now block if this limit is exceeded instead of throwing an exception (a good thing). Consequently there is no longer a risk in overlapping commits. Nonetheless users should continue to avoid excessive committing. Users are advised to remove any pre-existing `maxWarmingSearchers` entries from their `solrconfig.xml` files.
* The <<other-parsers.adoc#complex-phrase-query-parser,Complex Phrase query parser>> now supports leading wildcards. Beware of its possible heaviness, users are encouraged to use ReversedWildcardFilter in index time analysis.
* The JMX metric "avgTimePerRequest" (and the corresponding metric in the metrics API for each handler) used to be a simple non-decaying average based on total cumulative time and the number of requests. The Codahale Metrics implementation applies exponential decay to this value, which heavily biases the average towards the last 5 minutes.
* Parallel SQL now uses Apache Calcite as its SQL framework. As part of this change the default aggregation mode has been changed to `facet` rather than `map_reduce`. There have also been changes to the SQL aggregate response and some SQL syntax changes. Consult the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>> documentation for full details.
* Parallel SQL now uses Apache Calcite as its SQL framework. As part of this change the default aggregation mode has been changed to `facet` rather than `map_reduce`. There have also been changes to the SQL aggregate response and some SQL syntax changes. Consult the <<parallel-sql-interface.adoc#,Parallel SQL Interface>> documentation for full details.
View File
@ -51,7 +51,7 @@ When using this parameter internal requests are sent by using HTTP/1.1.
./bin/solr start -c -Dsolr.http1=true -z localhost:2481/solr -s /path/to/solr/home
----
+
Note the above command *must* be customized for your environment. The section <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> has all the possible options. If you are running Solr as a service, you may prefer to review the section <<upgrading-a-solr-cluster.adoc#upgrading-a-solr-cluster,Upgrading a Solr Cluster>>.
Note the above command *must* be customized for your environment. The section <<solr-control-script-reference.adoc#,Solr Control Script Reference>> has all the possible options. If you are running Solr as a service, you may prefer to review the section <<upgrading-a-solr-cluster.adoc#,Upgrading a Solr Cluster>>.
. When all nodes have been upgraded to 8.0, restart each one without the `-Dsolr.http1` parameter.
@ -59,7 +59,7 @@ Note the above command *must* be customized for your environment. The section <<
It is always strongly recommended that you fully reindex your documents after a major version upgrade.
Solr has a new section of the Reference Guide, <<reindexing.adoc#reindexing,Reindexing>> which covers several strategies for how to reindex.
Solr has a new section of the Reference Guide, <<reindexing.adoc#,Reindexing>> which covers several strategies for how to reindex.
[#new-features-8]
== New Features & Enhancements
@ -134,8 +134,8 @@ then you must do so with a delete-by-query technique.
* Solr has a new field in the `\_default` configset, called `_nest_path_`. This field stores the path of the document
in the hierarchy for non-root documents.
See the sections <<indexing-nested-documents.adoc#indexing-nested-documents,Indexing Nested Documents>> and
<<searching-nested-documents.adoc#searching-nested-documents,Searching Nested Documents>> for more information
See the sections <<indexing-nested-documents.adoc#,Indexing Nested Documents>> and
<<searching-nested-documents.adoc#,Searching Nested Documents>> for more information
and configuration details.
[#config-changes-8]
@ -159,8 +159,8 @@ See also the section <<other-schema-elements.adoc#similarity,Similarity>> for mo
* Memory codecs have been removed from Lucene (`MemoryPostings`, `MemoryDocValues`) and are no longer available in Solr.
If you used `postingsFormat="Memory"` or `docValuesFormat="Memory"` on any field or field type configuration then either remove that setting to use the default or experiment with one of the other options.
+
For more information on defining a codec, see the section <<codec-factory.adoc#codec-factory,Codec Factory>>;
for more information on field properties, see the section <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties, Field Type Definitions and Properties>>.
For more information on defining a codec, see the section <<codec-factory.adoc#,Codec Factory>>;
for more information on field properties, see the section <<field-type-definitions-and-properties.adoc#, Field Type Definitions and Properties>>.
*LowerCaseTokenizer*
@ -191,13 +191,13 @@ This syntax has been removed entirely and if sent to Solr it will now produce an
The pattern language is very similar but not the same.
Typically, simply update the pattern by changing an uppercase 'Z' to lowercase 'z' and that's it.
+
For the current recommended set of patterns in schemaless mode, see the section <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>>, or simply examine the `_default` configset (found in `server/solr/configsets`).
For the current recommended set of patterns in schemaless mode, see the section <<schemaless-mode.adoc#,Schemaless Mode>>, or simply examine the `_default` configset (found in `server/solr/configsets`).
+
Also note that the default set of date patterns (formats) has expanded from previous releases to subsume those patterns previously handled by the "extract" contrib (Solr Cell / Tika).
*Solr Cell*
* The extraction contrib (<<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Solr Cell>>) no longer does any date parsing, and thus no longer supports the `date.formats` parameter. To ensure date strings are properly parsed, use the `ParseDateFieldUpdateProcessorFactory` in your update chain. This update request processor is found by default with the "parse-date" update processor when running Solr in "<<schemaless-mode.adoc#set-the-default-updaterequestprocessorchain,schemaless mode>>".
* The extraction contrib (<<uploading-data-with-solr-cell-using-apache-tika.adoc#,Solr Cell>>) no longer does any date parsing, and thus no longer supports the `date.formats` parameter. To ensure date strings are properly parsed, use the `ParseDateFieldUpdateProcessorFactory` in your update chain. This update request processor is found by default with the "parse-date" update processor when running Solr in "<<schemaless-mode.adoc#set-the-default-updaterequestprocessorchain,schemaless mode>>".
*Langid Contrib*
@ -209,7 +209,7 @@ The following changes impact query behavior.
*Highlighting*
* The Unified Highlighter parameter `hl.weightMatches` now defaults to `true`. See the section <<highlighting.adoc#highlighting,Highlighting>> for more information about Highlighter parameters.
* The Unified Highlighter parameter `hl.weightMatches` now defaults to `true`. See the section <<highlighting.adoc#,Highlighting>> for more information about Highlighter parameters.
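If the new default changes your snippets (a sketch; the collection and field names are placeholders), the previous behavior can be requested per query:

[source,bash]
----
# opt out of hl.weightMatches for a single request
curl 'http://localhost:8983/solr/mycoll/select' \
  --data-urlencode 'q=features:"solr highlighting"' \
  --data-urlencode 'hl=true' \
  --data-urlencode 'hl.fl=features' \
  --data-urlencode 'hl.method=unified' \
  --data-urlencode 'hl.weightMatches=false'
----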
*eDisMax Query Parser*
@ -276,8 +276,8 @@ When upgrading to Solr 7.7.x, users should be aware of the following major chang
*Admin UI*
* The Admin UI now presents a login screen for any users with authentication enabled on their cluster.
Clusters with <<basic-authentication-plugin.adoc#basic-authentication-plugin,Basic Authentication>> will prompt users to enter a username and password.
On clusters configured to use <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication>>, authentication is handled transparently by the browser as before, but if authentication fails, users will be directed to configure their browser to provide an appropriate Kerberos ticket.
Clusters with <<basic-authentication-plugin.adoc#,Basic Authentication>> will prompt users to enter a username and password.
On clusters configured to use <<kerberos-authentication-plugin.adoc#,Kerberos Authentication>>, authentication is handled transparently by the browser as before, but if authentication fails, users will be directed to configure their browser to provide an appropriate Kerberos ticket.
+
The login screen's purpose is cosmetic only - Admin UI-triggered Solr requests were subject to authentication prior to 7.7 and still are today. The login screen changes only the user experience of providing this authentication.
@ -344,7 +344,7 @@ While most users are still encouraged to use the `NRTCachingDirectoryFactory`, w
+
For more information about the new directory factory, see the Jira issue https://issues.apache.org/jira/browse/LUCENE-8438[LUCENE-8438].
+
For more information about the directory factory configuration in Solr, see the section <<datadir-and-directoryfactory-in-solrconfig.adoc#datadir-and-directoryfactory-in-solrconfig,DataDir and DirectoryFactory in SolrConfig>>.
For more information about the directory factory configuration in Solr, see the section <<datadir-and-directoryfactory-in-solrconfig.adoc#,DataDir and DirectoryFactory in SolrConfig>>.
=== Solr 7.5
@ -373,7 +373,7 @@ The `TieredMergePolicy` will also reclaim resources from segments that exceed `m
* Solr's logging configuration file is now located in `server/resources/log4j2.xml` by default.
* A bug for Windows users has been corrected. When using Solr's examples (`bin/solr start -e`) log files will now be put in the correct location (`example/` instead of `server`). See also <<installing-solr.adoc#solr-examples,Solr Examples>> and <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> for more information.
* A bug for Windows users has been corrected. When using Solr's examples (`bin/solr start -e`) log files will now be put in the correct location (`example/` instead of `server`). See also <<installing-solr.adoc#solr-examples,Solr Examples>> and <<solr-control-script-reference.adoc#,Solr Control Script Reference>> for more information.
=== Solr 7.4
@ -384,13 +384,13 @@ When upgrading to Solr 7.4, users should be aware of the following major changes
*Logging*
* Solr now uses Log4j v2.11. The Log4j configuration is now in `log4j2.xml` rather than `log4j.properties` files. This is a server side change only and clients using SolrJ won't need any changes. Clients can still use any logging implementation which is compatible with SLF4J. We now let Log4j handle rotation of Solr logs at startup, and `bin/solr` start scripts will no longer attempt this nor move existing console or garbage collection logs into `logs/archived` either. See <<configuring-logging.adoc#configuring-logging,Configuring Logging>> for more details about Solr logging.
* Solr now uses Log4j v2.11. The Log4j configuration is now in `log4j2.xml` rather than `log4j.properties` files. This is a server side change only and clients using SolrJ won't need any changes. Clients can still use any logging implementation which is compatible with SLF4J. We now let Log4j handle rotation of Solr logs at startup, and `bin/solr` start scripts will no longer attempt this nor move existing console or garbage collection logs into `logs/archived` either. See <<configuring-logging.adoc#,Configuring Logging>> for more details about Solr logging.
* Configuring `slowQueryThresholdMillis` now logs slow requests to a separate file named `solr_slow_requests.log`. Previously they would get logged in the `solr.log` file.
*Legacy Scaling (non-SolrCloud)*
* In the <<index-replication.adoc#index-replication,leader-follower model>> of scaling Solr, a follower no longer commits an empty index when a completely new index is detected on the leader during replication. To return to the previous behavior, pass `false` to `skipCommitOnLeaderVersionZero` in the follower section of replication handler configuration, or pass it to the `fetchindex` command.
* In the <<index-replication.adoc#,leader-follower model>> of scaling Solr, a follower no longer commits an empty index when a completely new index is detected on the leader during replication. To return to the previous behavior, pass `false` to `skipCommitOnLeaderVersionZero` in the follower section of replication handler configuration, or pass it to the `fetchindex` command.
If you are upgrading from a version earlier than Solr 7.3, please see previous version notes below.
@ -418,7 +418,7 @@ When upgrading to Solr 7.3, users should be aware of the following major changes
*Logging*
* The default Solr log file size and number of backups have been raised to 32MB and 10 respectively. See the section <<configuring-logging.adoc#configuring-logging,Configuring Logging>> for more information about how to configure logging.
* The default Solr log file size and number of backups have been raised to 32MB and 10 respectively. See the section <<configuring-logging.adoc#,Configuring Logging>> for more information about how to configure logging.
*SolrCloud*
@ -430,7 +430,7 @@ This means to upgrade to Solr 8 in the future, you will need to be on Solr 7.3 o
*Spatial*
* If you are using the spatial JTS library with Solr, you must upgrade to 1.15.0. This new version of JTS is now dual-licensed to include a BSD style license. See the section on <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
* If you are using the spatial JTS library with Solr, you must upgrade to 1.15.0. This new version of JTS is now dual-licensed to include a BSD style license. See the section on <<spatial-search.adoc#,Spatial Search>> for more information.
*Highlighting*
@ -446,7 +446,7 @@ When upgrading to Solr 7.2, users should be aware of the following major changes
*Local Parameters*
* Starting a query string with <<local-parameters-in-queries.adoc#local-parameters-in-queries,local parameters>> `{!myparser ...}` is used to switch from one query parser to another, and is intended for use by Solr system developers, not end users doing searches. To reduce negative side-effects of unintended hack-ability, Solr now limits the cases when local parameters will be parsed to only contexts in which the default parser is "<<other-parsers.adoc#lucene-query-parser,lucene>>" or "<<other-parsers.adoc#function-query-parser,func>>".
* Starting a query string with <<local-parameters-in-queries.adoc#,local parameters>> `{!myparser ...}` is used to switch from one query parser to another, and is intended for use by Solr system developers, not end users doing searches. To reduce negative side-effects of unintended hack-ability, Solr now limits the cases when local parameters will be parsed to only contexts in which the default parser is "<<other-parsers.adoc#lucene-query-parser,lucene>>" or "<<other-parsers.adoc#function-query-parser,func>>".
+
So, if `defType=edismax` then `q={!myparser ...}` won't work. In that example, put the desired query parser into the `defType` parameter.
+
@ -503,4 +503,4 @@ See the section <<metrics-reporting.adoc#shard-and-cluster-reporters,Shard and C
* In the XML query parser (`defType=xmlparser` or `{!xmlparser ... }`) the resolving of external entities is now disallowed by default.
If you are upgrading from a version earlier than Solr 7.0, please see <<major-changes-in-solr-7.adoc#major-changes-in-solr-7,Major Changes in Solr 7>> before starting your upgrade.
If you are upgrading from a version earlier than Solr 7.0, please see <<major-changes-in-solr-7.adoc#,Major Changes in Solr 7>> before starting your upgrade.
View File
@ -29,7 +29,7 @@ Likewise, committing changes using `openSearcher=false` may result in changes co
== SolrCloud Backups
Support for backups when running SolrCloud is provided with the <<collection-management.adoc#collection-management,Collections API>>. This allows the backups to be generated across multiple shards, and restored to the same number of shards and replicas as the original collection.
Support for backups when running SolrCloud is provided with the <<collection-management.adoc#,Collections API>>. This allows the backups to be generated across multiple shards, and restored to the same number of shards and replicas as the original collection.
NOTE: SolrCloud Backup/Restore requires a shared file system mounted at the same path on all nodes, or HDFS.
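For example (a sketch; the collection name and the shared `location` are placeholders):

[source,bash]
----
# back up a collection to a path visible to all nodes, then restore it under a new name
curl 'http://localhost:8983/solr/admin/collections?action=BACKUP&name=myBackup&collection=techproducts&location=/mnt/backups'
curl 'http://localhost:8983/solr/admin/collections?action=RESTORE&name=myBackup&collection=techproducts_restored&location=/mnt/backups'
----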
@ -227,7 +227,7 @@ The repository interfaces needs to be configured in the `solr.xml` file. While r
If no repository is configured then the local filesystem repository will be used automatically.
Example `solr.xml` section to configure a repository like <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,HDFS>>:
Example `solr.xml` section to configure a repository like <<running-solr-on-hdfs.adoc#,HDFS>>:
[source,xml]
----
View File
@ -99,7 +99,7 @@ Assuming you sent this request to Solr, the response body is a JSON document:
}
----
The `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> ships with a pre-built set of managed stop words; however, you should only interact with this file using the API and not edit it directly.
The `sample_techproducts_configs` <<config-sets.adoc#,configset>> ships with a pre-built set of managed stop words; however, you should only interact with this file using the API and not edit it directly.
One thing that should stand out to you in this response is that it contains a `managedList` of words as well as `initArgs`. This is an important concept in this framework -- managed resources typically have configuration and data. For stop words, the only configuration parameter is a boolean that determines whether to ignore the case of tokens during stop word filtering (ignoreCase=true|false). The data is a list of words, which is represented as a JSON array named `managedList` in the response.
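For example (a sketch assuming a collection built from that configset is named `techproducts`), the list can be read and extended over HTTP:

[source,bash]
----
# fetch the managed English stop word list
curl 'http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english'

# add two terms to the list
curl -X PUT -H 'Content-type:application/json' \
  --data-binary '["foo","bar"]' \
  'http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english'
----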
@ -132,7 +132,7 @@ NOTE: PUT/POST is used to add terms to an existing list instead of replacing the
=== Managing Synonyms
For the most part, the API for managing synonyms behaves similarly to the API for stop words, except instead of working with a list of words, it uses a map, where the value for each entry in the map is a set of synonyms for a term. As with stop words, the `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> includes a pre-built set of synonym mappings suitable for the sample data that is activated by the following field type definition in `schema.xml`:
For the most part, the API for managing synonyms behaves similarly to the API for stop words, except instead of working with a list of words, it uses a map, where the value for each entry in the map is a set of synonyms for a term. As with stop words, the `sample_techproducts_configs` <<config-sets.adoc#,configset>> includes a pre-built set of synonym mappings suitable for the sample data that is activated by the following field type definition in `schema.xml`:
[source,xml]
----
@ -208,7 +208,7 @@ Lastly, you can delete a mapping by sending a DELETE request to the managed endp
Changes made to managed resources via this REST API are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
For example: after adding or deleting a stop word, you must reload the core/collection before changes become active; related APIs: <<coreadmin-api.adoc#coreadmin-api,CoreAdmin API>> and <<collections-api.adoc#collections-api,Collections API>>.
For example: after adding or deleting a stop word, you must reload the core/collection before changes become active; related APIs: <<coreadmin-api.adoc#,CoreAdmin API>> and <<collections-api.adoc#,Collections API>>.
This approach is required when running in distributed mode so that we are assured changes are applied to all cores in a collection at the same time, keeping behavior consistent and predictable. It goes without saying that you don't want one of your replicas working with a different set of stop words or synonyms than the others.
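For example (a sketch assuming SolrCloud and a collection named `techproducts`):

[source,bash]
----
# reload the collection so managed-resource changes reach every replica at once
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts'
----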
@ -218,7 +218,7 @@ However, the intent of this API implementation is that changes will be applied u
[IMPORTANT]
====
Changing things like stop words and synonym mappings typically requires reindexing existing documents if they are used by index-time analyzers. The RestManager framework does not guard you from this; it simply makes it possible to programmatically build up a set of stop words, synonyms, etc. See the section <<reindexing.adoc#reindexing,Reindexing>> for more information about reindexing your documents.
Changing things like stop words and synonym mappings typically requires reindexing existing documents if they are used by index-time analyzers. The RestManager framework does not guard you from this; it simply makes it possible to programmatically build up a set of stop words, synonyms, etc. See the section <<reindexing.adoc#,Reindexing>> for more information about reindexing your documents.
====
== RestManager Endpoint
View File
@ -25,44 +25,44 @@ image::images/math-expressions/searchiris.png[]
== Table of Contents
*<<visualization.adoc#visualization,Visualizations>>*: Gallery of streaming expression and math expression visualizations.
*<<visualization.adoc#,Visualizations>>*: Gallery of streaming expression and math expression visualizations.
*<<math-start.adoc#math-start,Getting Started>>*: Getting started with streaming expressions, math expressions, and visualization.
*<<math-start.adoc#,Getting Started>>*: Getting started with streaming expressions, math expressions, and visualization.
*<<loading.adoc#loading,Data Loading>>*: Visualizing, transforming and loading CSV files.
*<<loading.adoc#,Data Loading>>*: Visualizing, transforming and loading CSV files.
*<<search-sample.adoc#search-sample,Searching, Sampling and Aggregation>>*: Searching, sampling, aggregation and visualization of result sets.
*<<search-sample.adoc#,Searching, Sampling and Aggregation>>*: Searching, sampling, aggregation and visualization of result sets.
*<<transform.adoc#transform,Transforming Data>>*: Transforming and filtering result sets.
*<<transform.adoc#,Transforming Data>>*: Transforming and filtering result sets.
*<<scalar-math.adoc#scalar-math,Scalar Math>>*: Math functions and visualization applied to numbers.
*<<scalar-math.adoc#,Scalar Math>>*: Math functions and visualization applied to numbers.
*<<vector-math.adoc#vector-math,Vector Math>>*: Vector math, manipulation and visualization.
*<<vector-math.adoc#,Vector Math>>*: Vector math, manipulation and visualization.
*<<variables.adoc#variables, Variables and Vectorization>>*: Vectorizing result sets and assigning and visualizing variables.
*<<variables.adoc#, Variables and Vectorization>>*: Vectorizing result sets and assigning and visualizing variables.
*<<matrix-math.adoc#matrix-math,Matrix Math>>*: Matrix math, manipulation and visualization.
*<<matrix-math.adoc#,Matrix Math>>*: Matrix math, manipulation and visualization.
*<<term-vectors.adoc#term-vectors,Text Analysis and Term Vectors>>*: Text analysis and TF-IDF term vectors.
*<<term-vectors.adoc#,Text Analysis and Term Vectors>>*: Text analysis and TF-IDF term vectors.
*<<probability-distributions.adoc#probability-distributions,Probability>>*: Continuous and discrete probability distribution functions.
*<<probability-distributions.adoc#,Probability>>*: Continuous and discrete probability distribution functions.
*<<statistics.adoc#statistics,Statistics>>*: Descriptive statistics, histograms, percentiles, correlation, inference tests and other stats functions.
*<<statistics.adoc#,Statistics>>*: Descriptive statistics, histograms, percentiles, correlation, inference tests and other stats functions.
*<<regression.adoc#regression,Linear Regression>>*: Simple and multivariate linear regression.
*<<regression.adoc#,Linear Regression>>*: Simple and multivariate linear regression.
*<<curve-fitting.adoc#curve-fitting,Curve Fitting>>*: Polynomial, harmonic and Gaussian curve fitting.
*<<curve-fitting.adoc#,Curve Fitting>>*: Polynomial, harmonic and Gaussian curve fitting.
*<<time-series.adoc#time-series,Time Series>>*: Time series aggregation, visualization, smoothing, differencing, anomaly detection and forecasting.
*<<time-series.adoc#,Time Series>>*: Time series aggregation, visualization, smoothing, differencing, anomaly detection and forecasting.
*<<numerical-analysis.adoc#numerical-analysis,Interpolation and Numerical Calculus>>*: Interpolation, derivatives and integrals.
*<<numerical-analysis.adoc#,Interpolation and Numerical Calculus>>*: Interpolation, derivatives and integrals.
*<<dsp.adoc#dsp,Signal Processing>>*: Convolution, cross-correlation, autocorrelation and fast Fourier transforms.
*<<dsp.adoc#,Signal Processing>>*: Convolution, cross-correlation, autocorrelation and fast Fourier transforms.
*<<simulations.adoc#simulations,Simulations>>*: Monte Carlo simulations and random walks
*<<simulations.adoc#,Simulations>>*: Monte Carlo simulations and random walks
*<<machine-learning.adoc#machine-learning,Machine Learning>>*: Distance, KNN, DBSCAN, K-means, fuzzy K-means and other ML functions.
*<<machine-learning.adoc#,Machine Learning>>*: Distance, KNN, DBSCAN, K-means, fuzzy K-means and other ML functions.
*<<computational-geometry.adoc#computational-geometry,Computational Geometry>>*: Convex Hulls and Enclosing Disks.
*<<computational-geometry.adoc#,Computational Geometry>>*: Convex Hulls and Enclosing Disks.
*<<logs.adoc#logs,Appendix A>>*: Solr log analytics and visualization.
*<<logs.adoc#,Appendix A>>*: Solr log analytics and visualization.
View File
@ -16,7 +16,7 @@
// specific language governing permissions and limitations
// under the License.
The MBean Request Handler offers programmatic access to the information provided on the <<plugins-stats-screen.adoc#plugins-stats-screen,Plugin/Stats>> page of the Admin UI.
The MBean Request Handler offers programmatic access to the information provided on the <<plugins-stats-screen.adoc#,Plugin/Stats>> page of the Admin UI.
The MBean Request Handler accepts the following parameters:
@ -30,7 +30,7 @@ Restricts results by category name.
Specifies whether statistics are returned with results. You can override the `stats` parameter on a per-field basis. The default is `false`.
`wt`::
The output format. This operates the same as the <<response-writers.adoc#response-writers,`wt` parameter in a query>>. The default is `json`.
The output format. This operates the same as the <<response-writers.adoc#,`wt` parameter in a query>>. The default is `json`.
== MBeanRequestHandler Examples
View File
@ -95,7 +95,7 @@ In the future, metrics will be added for shard leaders and cluster nodes, includ
The metrics available in your system can be customized by modifying the `<metrics>` element in `solr.xml`.
TIP: See also the section <<format-of-solr-xml.adoc#format-of-solr-xml,Format of Solr.xml>> for more information about the `solr.xml` file, where to find it, and how to edit it.
TIP: See also the section <<format-of-solr-xml.adoc#,Format of Solr.xml>> for more information about the `solr.xml` file, where to find it, and how to edit it.
=== Disabling the Metrics Collection
The `<metrics>` element in `solr.xml` supports one attribute `enabled`, which takes a boolean value,
View File
@ -18,7 +18,7 @@
If you use https://prometheus.io[Prometheus] and https://grafana.com[Grafana] for metrics storage and data visualization, Solr includes a Prometheus exporter to collect metrics and other data.
A Prometheus exporter (`solr-exporter`) allows users to monitor not only Solr metrics which come from <<metrics-reporting.adoc#metrics-api,Metrics API>>, but also facet counts which come from <<searching.adoc#searching,Searching>> and responses to <<collections-api.adoc#collections-api,Collections API>> commands and <<ping.adoc#ping,PingRequestHandler>> requests.
A Prometheus exporter (`solr-exporter`) allows users to monitor not only Solr metrics which come from <<metrics-reporting.adoc#metrics-api,Metrics API>>, but also facet counts which come from <<searching.adoc#,Searching>> and responses to <<collections-api.adoc#,Collections API>> commands and <<ping.adoc#,PingRequestHandler>> requests.
This graphic provides a more detailed view:
@ -143,11 +143,11 @@ All <<#command-line-parameters,command line parameters>> are able to be provided
=== Getting Metrics from a Secured SolrCloud
Your SolrCloud might be secured by measures described in <<securing-solr.adoc#securing-solr,Securing Solr>>.
The security configuration can be injected into `solr-exporter` using environment variables in a fashion similar to other clients using <<using-solrj.adoc#using-solrj,SolrJ>>.
Your SolrCloud might be secured by measures described in <<securing-solr.adoc#,Securing Solr>>.
The security configuration can be injected into `solr-exporter` using environment variables in a fashion similar to other clients using <<using-solrj.adoc#,SolrJ>>.
This is possible because the main script picks up <<Environment Variable Options>> and passes them on to the Java process.
Example for a SolrCloud instance secured by <<basic-authentication-plugin.adoc#basic-authentication-plugin,Basic Authentication>>, <<enabling-ssl.adoc#enabling-ssl,SSL>> and <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>:
Example for a SolrCloud instance secured by <<basic-authentication-plugin.adoc#,Basic Authentication>>, <<enabling-ssl.adoc#,SSL>> and <<zookeeper-access-control.adoc#,ZooKeeper Access Control>>:
Suppose you have a file `basicauth.properties` with the Solr Basic-Auth credentials:
@ -319,10 +319,10 @@ The `solr-exporter` configuration file always starts and closes with two simple
Between these elements, the data the `solr-exporter` should request is defined. There are several possible types of requests to make:
[horizontal]
`<ping>`:: Scrape the response to a <<ping.adoc#ping,PingRequestHandler>> request.
`<ping>`:: Scrape the response to a <<ping.adoc#,PingRequestHandler>> request.
`<metrics>`:: Scrape the response to a <<metrics-reporting.adoc#metrics-api,Metrics API>> request.
`<collections>`:: Scrape the response to a <<collections-api.adoc#collections-api,Collections API>> request.
`<search>`:: Scrape the response to a <<searching.adoc#searching,search>> request.
`<collections>`:: Scrape the response to a <<collections-api.adoc#,Collections API>> request.
`<search>`:: Scrape the response to a <<searching.adoc#,search>> request.
Within each of these types, we need to define the query and how to work with the response. To do this, we define two additional elements:
View File
@ -22,20 +22,20 @@ Administration and monitoring can be performed using the web-based administratio
Common administrative tasks include:
<<metrics-reporting.adoc#metrics-reporting,Metrics Reporting>>: Details of Solr's metrics registries and Metrics API.
<<metrics-reporting.adoc#,Metrics Reporting>>: Details of Solr's metrics registries and Metrics API.
<<metrics-history.adoc#metrics-history,Metrics History>>: Metrics history collection, configuration and API.
<<metrics-history.adoc#,Metrics History>>: Metrics history collection, configuration and API.
<<mbean-request-handler.adoc#mbean-request-handler,MBean Request Handler>>: How to use Solr's MBeans for programmatic access to the system plugins and stats.
<<mbean-request-handler.adoc#,MBean Request Handler>>: How to use Solr's MBeans for programmatic access to the system plugins and stats.
<<configuring-logging.adoc#configuring-logging,Configuring Logging>>: Describes how to configure logging for Solr.
<<configuring-logging.adoc#,Configuring Logging>>: Describes how to configure logging for Solr.
<<using-jmx-with-solr.adoc#using-jmx-with-solr,Using JMX with Solr>>: Describes how to use Java Management Extensions with Solr.
<<using-jmx-with-solr.adoc#,Using JMX with Solr>>: Describes how to use Java Management Extensions with Solr.
<<monitoring-solr-with-prometheus-and-grafana.adoc#monitoring-solr-with-prometheus-and-grafana,Monitoring Solr with Prometheus and Grafana>>: Describes how to monitor Solr with Prometheus and Grafana.
<<monitoring-solr-with-prometheus-and-grafana.adoc#,Monitoring Solr with Prometheus and Grafana>>: Describes how to monitor Solr with Prometheus and Grafana.
<<performance-statistics-reference.adoc#performance-statistics-reference,Performance Statistics Reference>>: Additional information on statistics returned from JMX.
<<performance-statistics-reference.adoc#,Performance Statistics Reference>>: Additional information on statistics returned from JMX.
<<solr-tracing.adoc#solr-tracing,Distributed Solr Tracing>>: Describes how to do distributed tracing for Solr requests.
<<solr-tracing.adoc#,Distributed Solr Tracing>>: Describes how to do distributed tracing for Solr requests.

View File

@ -74,7 +74,7 @@ Sets the maximum number of tokens to parse in each example document field that i
Specifies whether the query will be boosted by the interesting term relevance. It can be either "true" or "false".
`mlt.qf`::
Query fields and their boosts using the same format as that used by the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax Query Parser>>. These fields must also be specified in `mlt.fl`.
Query fields and their boosts using the same format as that used by the <<the-dismax-query-parser.adoc#,DisMax Query Parser>>. These fields must also be specified in `mlt.fl`.
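A hedged sketch of these parameters together, assuming a MoreLikeThis handler is registered at `/mlt` and using fields from the `techproducts` example:

[source,bash]
----
# Find documents similar to SP2514N; matches on manu count double those on cat.
curl 'http://localhost:8983/solr/techproducts/mlt?q=id:SP2514N&mlt.fl=manu,cat&mlt.qf=manu%5E2.0%20cat%5E1.0&mlt.boost=true'
----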
== Parameters for the MoreLikeThisComponent
@ -109,4 +109,4 @@ Unless `mlt.boost=true`, all terms will have `boost=1.0`.
== MoreLikeThis Query Parser
The `mlt` query parser provides a mechanism to retrieve documents similar to a given document, like the handler. More information on the usage of the mlt query parser can be found in the section <<other-parsers.adoc#other-parsers,Other Parsers>>.
The `mlt` query parser provides a mechanism to retrieve documents similar to a given document, like the handler. More information on the usage of the mlt query parser can be found in the section <<other-parsers.adoc#,Other Parsers>>.
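For instance, a minimal sketch against the `techproducts` example (where `SP2514N` is a document id):

[source,bash]
----
# {!mlt} returns documents similar to the one whose uniqueKey follows the braces.
curl 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt%20qf=name%20mintf=1%20mindf=1%7DSP2514N'
----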

View File

@ -20,11 +20,11 @@ In addition to the main query parsers discussed earlier, there are several other
This section details the other parsers, and gives examples for how they might be used.
Many of these parsers are expressed the same way as <<local-parameters-in-queries.adoc#local-parameters-in-queries,Local Parameters in Queries>>.
Many of these parsers are expressed the same way as <<local-parameters-in-queries.adoc#,Local Parameters in Queries>>.
== Block Join Query Parsers
There are two query parsers that support block joins. These parsers allow indexing and searching for relational content that has been <<indexing-nested-documents.adoc#indexing-nested-documents, indexed as Nested Documents>>.
There are two query parsers that support block joins. These parsers allow indexing and searching for relational content that has been <<indexing-nested-documents.adoc#, indexed as Nested Documents>>.
The example usage of the query parsers below assumes the following documents have been indexed:
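As a minimal sketch of the shape such data takes (`type_s` and `comment_t` are hypothetical fields, `mycoll` a placeholder collection):

[source,bash]
----
# Index one parent document with one anonymous child document.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycoll/update?commit=true' -d '[
  {"id": "1", "type_s": "parent",
   "_childDocuments_": [{"id": "1.1", "comment_t": "great product"}]}
]'

# {!parent} returns parents of the children matched by the wrapped query.
curl 'http://localhost:8983/solr/mycoll/select?q=%7B!parent%20which=%22type_s:parent%22%7Dcomment_t:great'
----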
@ -297,7 +297,7 @@ The `CollapsingQParser` is really a _post filter_ that provides more performant
This parser collapses the result set to a single document per group before it forwards the result set to the rest of the search components. So all downstream components (faceting, highlighting, etc.) will work with the collapsed result set.
Details about using the `CollapsingQParser` can be found in the section <<collapse-and-expand-results.adoc#collapse-and-expand-results,Collapse and Expand Results>>.
Details about using the `CollapsingQParser` can be found in the section <<collapse-and-expand-results.adoc#,Collapse and Expand Results>>.
== Complex Phrase Query Parser
@ -406,7 +406,7 @@ q=+field:text +COLOR:Red +SIZE:XL
== Function Query Parser
The `FunctionQParser` extends the `QParserPlugin` and creates a function query from the input value. This is only one way to use function queries in Solr; for another, more integrated, approach, see the section on <<function-queries.adoc#function-queries,Function Queries>>.
The `FunctionQParser` extends the `QParserPlugin` and creates a function query from the input value. This is only one way to use function queries in Solr; for another, more integrated, approach, see the section on <<function-queries.adoc#,Function Queries>>.
Example:
@ -484,7 +484,7 @@ Boolean that indicates if the results of the query should be filtered so that on
=== Graph Query Limitations
The `graph` parser only works in single node Solr installations, or with <<solrcloud.adoc#solrcloud,SolrCloud>> collections that use exactly 1 shard.
The `graph` parser only works in single node Solr installations, or with <<solrcloud.adoc#,SolrCloud>> collections that use exactly 1 shard.
=== Graph Query Examples
@ -890,7 +890,7 @@ Example:
{!ltr model=myModel reRankDocs=100}
----
Details about using the `LTRQParserPlugin` can be found in the <<learning-to-rank.adoc#learning-to-rank,Learning To Rank>> section.
Details about using the `LTRQParserPlugin` can be found in the <<learning-to-rank.adoc#,Learning To Rank>> section.
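In a full request, the parser is typically supplied through the `rq` (re-rank query) parameter; a sketch assuming a model named `myModel` has already been uploaded and the LTR plugin is enabled:

[source,bash]
----
# Re-rank the top 100 hits of the main query with the stored LTR model.
curl 'http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myModel%20reRankDocs=100%7D'
----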
== Max Score Query Parser
@ -1082,7 +1082,7 @@ http://localhost:8983/solr/techproducts?q=memory _query_:{!rank f='pagerank', fu
The `ReRankQParserPlugin` is a special purpose parser for Re-Ranking the top results of a simple query using a more complex ranking query.
Details about using the `ReRankQParserPlugin` can be found in the <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>> section.
Details about using the `ReRankQParserPlugin` can be found in the <<query-re-ranking.adoc#,Query Re-Ranking>> section.
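A minimal sketch of its use, rescoring the top 1000 hits of a simple query:

[source,bash]
----
# The main query matches "greetings"; the top 1000 docs are re-scored against $rqq.
curl 'http://localhost:8983/solr/techproducts/select?q=greetings&rq=%7B!rerank%20reRankQuery=%24rqq%20reRankDocs=1000%20reRankWeight=3%7D&rqq=(hi%20hello)'
----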
== Simple Query Parser
@ -1136,7 +1136,7 @@ Any errors in syntax are ignored and the query parser will interpret queries as
There are two spatial QParsers in Solr: `geofilt` and `bbox`. But there are other ways to query spatially: using the `frange` parser with a distance function, using the standard (lucene) query parser with range syntax to pick the corners of a rectangle, or, with RPT and BBoxField, using the standard query parser with a special syntax within quotes that lets you pick the spatial predicate.
All these options are documented further in the section <<spatial-search.adoc#spatial-search,Spatial Search>>.
All these options are documented further in the section <<spatial-search.adoc#,Spatial Search>>.
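For example, `geofilt` as a filter query (a sketch using the `store` location field from the `techproducts` example):

[source,bash]
----
# Keep only documents whose store point lies within 5 km of the given point.
curl 'http://localhost:8983/solr/techproducts/select?q=*:*&fq=%7B!geofilt%20sfield=store%20pt=45.15,-93.85%20d=5%7D'
----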
== Surround Query Parser
@ -1232,7 +1232,7 @@ If no analysis or transformation is desired for any type of field, see the <<Raw
`TermsQParser` functions similarly to the <<Term Query Parser,Term Query Parser>> but takes in multiple values separated by commas and returns documents matching any of the specified values.
This can be useful for generating filter queries from the external human-readable terms returned by the faceting or terms components, and may be more efficient in some cases than using the <<the-standard-query-parser.adoc#the-standard-query-parser,Standard Query Parser>> to generate a boolean query since the default implementation `method` avoids scoring.
This can be useful for generating filter queries from the external human-readable terms returned by the faceting or terms components, and may be more efficient in some cases than using the <<the-standard-query-parser.adoc#,Standard Query Parser>> to generate a boolean query since the default implementation `method` avoids scoring.
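For example (a sketch; `cat` is a field in the `techproducts` example):

[source,bash]
----
# Match documents whose cat field holds any of the listed values, without scoring.
curl 'http://localhost:8983/solr/techproducts/select?q=*:*&fq=%7B!terms%20f=cat%7Delectronics,currency,memory'
----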
This query parser takes the following parameters:

View File

@ -46,16 +46,16 @@ However, a biography will likely contain lots of words you don't care about and
The solution to both these problems is field analysis. For the biography field, you can tell Solr how to break apart the biography into words. You can tell Solr that you want to make all the words lower case, and you can tell Solr to remove accent marks.
Field analysis is an important part of a field type. <<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>> is a detailed description of field analysis.
Field analysis is an important part of a field type. <<understanding-analyzers-tokenizers-and-filters.adoc#,Understanding Analyzers, Tokenizers, and Filters>> is a detailed description of field analysis.
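You can watch an analysis chain at work through the field analysis handler (a sketch assuming the stock `text_general` field type):

[source,bash]
----
# Show how text_general tokenizes, lower-cases, and otherwise transforms the input.
curl 'http://localhost:8983/solr/techproducts/analysis/field?analysis.fieldtype=text_general&analysis.fieldvalue=Caf%C3%A9%20Press&wt=json'
----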
== Solr's Schema File
Solr stores details about the field types and fields it is expected to understand in a schema file. The name and location of this file may vary depending on how you initially configured Solr or whether you modified it later.
* `managed-schema` is the name for the schema file Solr uses by default to support making Schema changes at runtime via the <<schema-api.adoc#schema-api,Schema API>>, or <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>> features. You may <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,explicitly configure the managed schema features>> to use an alternative filename if you choose, but the contents of the file are still updated automatically by Solr.
* `schema.xml` is the traditional name for a schema file which can be edited manually by users who use the <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,`ClassicIndexSchemaFactory`>>.
* If you are using SolrCloud you may not be able to find any file by these names on the local filesystem. You will only be able to see the schema through the Schema API (if enabled) or through the Solr Admin UI's <<cloud-screens.adoc#cloud-screens,Cloud Screens>>.
* `managed-schema` is the name for the schema file Solr uses by default to support making Schema changes at runtime via the <<schema-api.adoc#,Schema API>>, or <<schemaless-mode.adoc#,Schemaless Mode>> features. You may <<schema-factory-definition-in-solrconfig.adoc#,explicitly configure the managed schema features>> to use an alternative filename if you choose, but the contents of the file are still updated automatically by Solr.
* `schema.xml` is the traditional name for a schema file which can be edited manually by users who use the <<schema-factory-definition-in-solrconfig.adoc#,`ClassicIndexSchemaFactory`>>.
* If you are using SolrCloud you may not be able to find any file by these names on the local filesystem. You will only be able to see the schema through the Schema API (if enabled) or through the Solr Admin UI's <<cloud-screens.adoc#,Cloud Screens>>.
Whichever filename is in use in your installation, the structure of the file does not change. However, the way you interact with the file will change. If you are using the managed schema, it is expected that you only interact with the file via the Schema API, and never make manual edits. If you do not use the managed schema, you will only be able to make manual edits to the file; the Schema API will not support any modifications.
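With the managed schema, for example, a field is added through the API rather than by hand-editing the file (a sketch; `mycoll` and the field definition are placeholders):

[source,bash]
----
# Add a stored float field named price to the managed schema.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycoll/schema' \
  -d '{"add-field": {"name": "price", "type": "pfloat", "stored": true}}'
----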
Note that if you are not using the Schema API but are using SolrCloud, you will need to interact with `schema.xml` through ZooKeeper using upconfig and downconfig commands to make a local copy and upload your changes. The options for doing this are described in <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> and <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,Using ZooKeeper to Manage Configuration Files>>.
Note that if you are not using the Schema API but are using SolrCloud, you will need to interact with `schema.xml` through ZooKeeper using upconfig and downconfig commands to make a local copy and upload your changes. The options for doing this are described in <<solr-control-script-reference.adoc#,Solr Control Script Reference>> and <<using-zookeeper-to-manage-configuration-files.adoc#,Using ZooKeeper to Manage Configuration Files>>.
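The round trip looks roughly like this (configset name and ZooKeeper address are placeholders):

[source,bash]
----
# Pull the configset out of ZooKeeper, edit it locally, then push it back.
bin/solr zk downconfig -n myconfig -d /tmp/myconfig -z localhost:2181
# ... edit /tmp/myconfig/conf/schema.xml ...
bin/solr zk upconfig -n myconfig -d /tmp/myconfig -z localhost:2181
----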

View File

@ -22,9 +22,9 @@ When a user runs a search in Solr, the search query is processed by a *request h
Search applications select a particular request handler by default. In addition, applications can be configured to allow users to override the default selection in preference of a different request handler.
To process a search query, a request handler calls a *query parser*, which interprets the terms and parameters of a query. Different query parsers support different syntax. Solr's default query parser is known as the <<the-standard-query-parser.adoc#the-standard-query-parser,Standard Query Parser>>, or more commonly just the "lucene" query parser. Solr also includes the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>> query parser, and the <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,Extended DisMax>> (eDisMax) query parser. The <<the-standard-query-parser.adoc#the-standard-query-parser,standard>> query parser's syntax allows for greater precision in searches, but the DisMax query parser is much more tolerant of errors. The DisMax query parser is designed to provide an experience similar to that of popular search engines such as Google, which rarely display syntax errors to users. The Extended DisMax query parser is an improved version of DisMax that handles the full Lucene query syntax while still tolerating syntax errors. It also includes several additional features.
To process a search query, a request handler calls a *query parser*, which interprets the terms and parameters of a query. Different query parsers support different syntax. Solr's default query parser is known as the <<the-standard-query-parser.adoc#,Standard Query Parser>>, or more commonly just the "lucene" query parser. Solr also includes the <<the-dismax-query-parser.adoc#,DisMax>> query parser, and the <<the-extended-dismax-query-parser.adoc#,Extended DisMax>> (eDisMax) query parser. The <<the-standard-query-parser.adoc#,standard>> query parser's syntax allows for greater precision in searches, but the DisMax query parser is much more tolerant of errors. The DisMax query parser is designed to provide an experience similar to that of popular search engines such as Google, which rarely display syntax errors to users. The Extended DisMax query parser is an improved version of DisMax that handles the full Lucene query syntax while still tolerating syntax errors. It also includes several additional features.
In addition, there are <<common-query-parameters.adoc#common-query-parameters,common query parameters>> that are accepted by all query parsers.
In addition, there are <<common-query-parameters.adoc#,common query parameters>> that are accepted by all query parsers.
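The parser is chosen per request with the `defType` parameter, alongside common parameters such as `rows` (a sketch against the `techproducts` example):

[source,bash]
----
# Run the same keywords through eDisMax, weighting the name field over features.
curl 'http://localhost:8983/solr/techproducts/select?q=ipod%20power&defType=edismax&qf=name%5E2%20features&rows=5'
----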
Input to a query parser can include:
@ -34,13 +34,13 @@ Input to a query parser can include:
Search parameters may also specify a *filter query*. As part of a search response, a filter query runs a query against the entire index and caches the results. Because Solr allocates a separate cache for filter queries, the strategic use of filter queries can improve search performance. (Despite their similar names, query filters are not related to analysis filters. Filter queries perform queries at search time against data already in the index, while analysis filters, such as Tokenizers, parse content for indexing, following specified rules).
A search query can request that certain terms be highlighted in the search response; that is, the selected terms will be displayed in colored boxes so that they "jump out" on the screen of search results. <<highlighting.adoc#highlighting,*Highlighting*>> can make it easier to find relevant passages in long documents returned in a search. Solr supports multi-term highlighting. Solr includes a rich set of search parameters for controlling how terms are highlighted.
A search query can request that certain terms be highlighted in the search response; that is, the selected terms will be displayed in colored boxes so that they "jump out" on the screen of search results. <<highlighting.adoc#,*Highlighting*>> can make it easier to find relevant passages in long documents returned in a search. Solr supports multi-term highlighting. Solr includes a rich set of search parameters for controlling how terms are highlighted.
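Both features appear as plain request parameters (a sketch; `inStock` and `name` are `techproducts` fields):

[source,bash]
----
# Cache-friendly filter on inStock, plus highlighted snippets from the name field.
curl 'http://localhost:8983/solr/techproducts/select?q=memory&fq=inStock:true&hl=true&hl.fl=name'
----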
Search responses can also be configured to include *snippets* (document excerpts) featuring highlighted text. Popular search engines such as Google and Yahoo! return snippets in their search results: 3-4 lines of text offering a description of a search result.
To help users zero in on the content they're looking for, Solr supports two special ways of grouping search results to aid further exploration: faceting and clustering.
<<faceting.adoc#faceting,*Faceting*>> is the arrangement of search results into categories (which are based on indexed terms). Within each category, Solr reports on the number of hits for each relevant term, which is called a facet constraint. Faceting makes it easy for users to explore search results on sites such as movie sites and product review sites, where there are many categories and many items within a category.
<<faceting.adoc#,*Faceting*>> is the arrangement of search results into categories (which are based on indexed terms). Within each category, Solr reports on the number of hits for each relevant term, which is called a facet constraint. Faceting makes it easy for users to explore search results on sites such as movie sites and product review sites, where there are many categories and many items within a category.
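A minimal facet request might count documents per category without returning any documents at all (a sketch using the `cat` field of the `techproducts` example):

[source,bash]
----
# rows=0 suppresses the result documents; only the facet counts come back.
curl 'http://localhost:8983/solr/techproducts/select?q=*:*&rows=0&facet=true&facet.field=cat'
----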
The screen shot below shows an example of faceting from the CNET Web site (CBS Interactive Inc.), which was the first site to use Solr.
@ -50,9 +50,9 @@ Faceting makes use of fields defined when the search applications were indexed.
*Clustering* groups search results by similarities discovered when a search is executed, rather than when content is indexed. The results of clustering often lack the neat hierarchical organization found in faceted search results, but clustering can be useful nonetheless. It can reveal unexpected commonalities among search results, and it can help users rule out content that isn't pertinent to what they're really searching for.
Solr also supports a feature called <<morelikethis.adoc#morelikethis,MoreLikeThis>>, which enables users to submit new queries that focus on particular terms returned in an earlier query. MoreLikeThis queries can make use of faceting or clustering to provide additional aid to users.
Solr also supports a feature called <<morelikethis.adoc#,MoreLikeThis>>, which enables users to submit new queries that focus on particular terms returned in an earlier query. MoreLikeThis queries can make use of faceting or clustering to provide additional aid to users.
A Solr component called a <<response-writers.adoc#response-writers,*response writer*>> manages the final presentation of the query response. Solr includes a variety of response writers, including an <<response-writers.adoc#standard-xml-response-writer,XML Response Writer>> and a <<response-writers.adoc#json-response-writer,JSON Response Writer>>.
A Solr component called a <<response-writers.adoc#,*response writer*>> manages the final presentation of the query response. Solr includes a variety of response writers, including an <<response-writers.adoc#standard-xml-response-writer,XML Response Writer>> and a <<response-writers.adoc#json-response-writer,JSON Response Writer>>.
The diagram below summarizes some key elements of the search process.

View File

@ -27,11 +27,11 @@ image::images/overview-of-the-solr-admin-ui/dashboard.png[image,height=400]
The left side of the screen is a menu under the Solr logo that provides navigation through the screens of the UI.
The first set of links are for system-level information and configuration and provide access to <<logging.adoc#logging,Logging>>, <<collections-core-admin.adoc#collections-core-admin,Collection/Core Administration>>, and <<java-properties.adoc#java-properties,Java Properties>>, among other things.
The first set of links are for system-level information and configuration and provide access to <<logging.adoc#,Logging>>, <<collections-core-admin.adoc#,Collection/Core Administration>>, and <<java-properties.adoc#,Java Properties>>, among other things.
At the end of this information is at least one pulldown listing Solr cores configured for this instance.
On <<solrcloud.adoc#solrcloud,SolrCloud>> nodes, an additional pulldown list shows all collections in this cluster.
Clicking on a collection or core name shows secondary menus of information for the specified collection or core, such as a <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser>>, <<files-screen.adoc#files-screen,Config Files>>, <<plugins-stats-screen.adoc#plugins-stats-screen,Plugins & Statistics>>, and an ability to perform <<query-screen.adoc#query-screen,Queries>> on indexed data.
On <<solrcloud.adoc#,SolrCloud>> nodes, an additional pulldown list shows all collections in this cluster.
Clicking on a collection or core name shows secondary menus of information for the specified collection or core, such as a <<schema-browser-screen.adoc#,Schema Browser>>, <<files-screen.adoc#,Config Files>>, <<plugins-stats-screen.adoc#,Plugins & Statistics>>, and an ability to perform <<query-screen.adoc#,Queries>> on indexed data.
The left-side navigation appears on every screen, while the center changes to the detail of the option selected.
The Dashboard shows several information items in the center of the screen, including system uptime, the version being run, system-level data, JVM arguments, and the security plugins enabled (if any).
@ -51,7 +51,7 @@ If authentication has been enabled, Solr will present a login screen to unauthen
image::images/overview-of-the-solr-admin-ui/login.png[]
This login screen currently only works with Basic Authentication.
See the section <<basic-authentication-plugin.adoc#basic-authentication-plugin,Basic Authentication Plugin>> for
See the section <<basic-authentication-plugin.adoc#,Basic Authentication Plugin>> for
details on how to configure Solr to use this method of authentication.
Once logged in, the left-hand navigation will show the current user with an option to logout.
@ -77,7 +77,7 @@ These icons include the following links.
|Issue Tracker |Navigates to the JIRA issue tracking server for the Apache Solr project. This server resides at https://issues.apache.org/jira/browse/SOLR.
|IRC Channel |Navigates to Solr's http://en.wikipedia.org/wiki/Internet_Relay_Chat[IRC] live-chat room: http://webchat.freenode.net/?channels=#solr.
|Community forum |Navigates to the Apache Wiki page which has further information about ways to engage in the Solr User community mailing lists: https://cwiki.apache.org/confluence/display/solr/UsingMailingLists.
|Solr Query Syntax |Navigates to the section <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,Query Syntax and Parsing>> in this Reference Guide.
|Solr Query Syntax |Navigates to the section <<query-syntax-and-parsing.adoc#,Query Syntax and Parsing>> in this Reference Guide.
|===
These links cannot be modified without editing the `index.html` in the `server/solr/solr-webapp` directory that contains the Admin UI files.

View File

@ -19,14 +19,14 @@
The package manager in Solr allows installation and updating of Solr-specific packages in Solr's cluster environment.
In this system, a _package_ is a set of Java jar files (usually one) containing one or more <<solr-plugins.adoc#solr-plugins,Solr plugins>>. Each jar file is also accompanied by a signature string (which can be verified against a supplied public key).
In this system, a _package_ is a set of Java jar files (usually one) containing one or more <<solr-plugins.adoc#,Solr plugins>>. Each jar file is also accompanied by a signature string (which can be verified against a supplied public key).
A key design aspect of this system is the ability to install or update packages in a cluster environment securely without the need to restart every node.
Other elements of the design include the ability to install from a remote repository; package standardization; a command line interface (CLI); and a package store.
This section will focus on how to use the package manager to install and update packages.
For technical details, see the section <<package-manager-internals.adoc#package-manager-internals,Package Manager internals>>.
For technical details, see the section <<package-manager-internals.adoc#,Package Manager internals>>.
== Interacting with the Package Manager
@ -48,7 +48,7 @@ $ bin/solr -c -Denable.packages=true
----
WARNING: There are security consequences to enabling the package manager.
If an unauthorized user gained access to the system, they would have write access to ZooKeeper and could install packages from untrusted sources. Always ensure you have secured Solr with firewalls and <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,authentication>> before enabling the package manager.
If an unauthorized user gained access to the system, they would have write access to ZooKeeper and could install packages from untrusted sources. Always ensure you have secured Solr with firewalls and <<authentication-and-authorization-plugins.adoc#,authentication>> before enabling the package manager.
=== Add Trusted Repositories
@ -122,7 +122,7 @@ For example, if a package named `mypackage` contains a request handler, we would
<requestHandler name="/myhandler" class="mypackage:full.path.to.MyClass"></requestHandler>
----
Then use either the Collections API <<collection-management.adoc#reload,RELOAD command>> or the <<collections-core-admin.adoc#collections-core-admin,Admin UI>> to reload the collection.
Then use either the Collections API <<collection-management.adoc#reload,RELOAD command>> or the <<collections-core-admin.adoc#,Admin UI>> to reload the collection.
Next, set the package version that this collection is using. If the collection is named `collection1`, the package name is `mypackage`, and the installed version is `1.0.0`, the command would look like this:
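A sketch of both steps; the `PKG_VERSIONS` params-API convention shown here is an assumption to verify against your Solr version:

[source,bash]
----
# Reload the collection so the newly deployed plugin class is picked up.
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=collection1'

# Pin collection1 to version 1.0.0 of mypackage (assumed params-API convention).
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/api/collections/collection1/config/params' \
  -d '{"set": {"PKG_VERSIONS": {"mypackage": "1.0.0"}}}'
----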

View File

@ -105,7 +105,7 @@ When the `responseHeader` no longer includes `"partialResults": true`, and `curs
. `sort` clauses must include the uniqueKey field (either `asc` or `desc`).
+
If `id` is your uniqueKey field, then sort parameters like `id asc` and `name asc, id desc` would both work fine, but `name asc` by itself would not.
. Sorts including <<working-with-dates.adoc#working-with-dates,Date Math>> based functions that involve calculations relative to `NOW` will cause confusing results, since every document will get a new sort value on every subsequent request. This can easily result in cursors that never end, and constantly return the same documents over and over even if the documents are never updated.
. Sorts including <<working-with-dates.adoc#,Date Math>> based functions that involve calculations relative to `NOW` will cause confusing results, since every document will get a new sort value on every subsequent request. This can easily result in cursors that never end, and constantly return the same documents over and over even if the documents are never updated.
+
In this situation, choose & re-use a fixed value for the <<working-with-dates.adoc#now,`NOW` request param>> in all of your cursor requests.
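A sketch of pinning `NOW` across a cursor walk (collection and field names are placeholders):

[source,bash]
----
# Compute one timestamp in milliseconds and reuse it on every page of the cursor.
NOW_MS="$(date +%s)000"
curl "http://localhost:8983/solr/mycoll/select?q=*:*&sort=due_date%20asc,id%20asc&cursorMark=*&NOW=${NOW_MS}"
----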
@ -270,4 +270,4 @@ while (true) {
}
----
TIP: For certain specialized cases, the <<exporting-result-sets.adoc#exporting-result-sets,/export handler>> may be an option.
TIP: For certain specialized cases, the <<exporting-result-sets.adoc#,/export handler>> may be an option.

Some files were not shown because too many files have changed in this diff.