Ref Guide: fix typos & abbreviated words

This commit is contained in:
Cassandra Targett 2019-02-08 13:09:22 -06:00
parent 56007af4a4
commit 32443cf8e3
28 changed files with 41 additions and 41 deletions

View File

@@ -712,13 +712,12 @@ http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=testalias&c
</response>
----
Create an alias named "myTimeData" for data begining on `2018-01-15` in the UTC time zone and partitioning daily
based on the `evt_dt` field in the incomming documents. Data more than an hour beyond the latest (most recent)
partiton is to be rejected and collections are created using a config set named myConfig and
*Input*
Create an alias named "myTimeData" for data beginning on `2018-01-15` in the UTC time zone and partitioning daily
based on the `evt_dt` field in the incoming documents. Data more than one hour beyond the latest (most recent)
partition is to be rejected and collections are created using a configset named "myConfig".
[source,text]
----
http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=myTimeData&router.start=NOW/DAY&router.field=evt_dt&router.name=time&router.interval=%2B1DAY&router.maxFutureMs=3600000&create-collection.collection.configName=myConfig&create-collection.numShards=2
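For readability, the same CREATEALIAS call can be issued with `curl -G` and one `--data-urlencode` per parameter; this is only a sketch of the request above (note that `%2B1DAY` is simply the URL-encoded form of `+1DAY`), and the host and port assume a default local Solr install.
[source,bash]
----
curl -G "http://localhost:8983/solr/admin/collections" \
  --data-urlencode "action=CREATEALIAS" \
  --data-urlencode "name=myTimeData" \
  --data-urlencode "router.start=NOW/DAY" \
  --data-urlencode "router.field=evt_dt" \
  --data-urlencode "router.name=time" \
  --data-urlencode "router.interval=+1DAY" \
  --data-urlencode "router.maxFutureMs=3600000" \
  --data-urlencode "create-collection.collection.configName=myConfig" \
  --data-urlencode "create-collection.numShards=2"
----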
@@ -736,11 +735,12 @@ http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=myTimeData&
</response>
----
A somewhat contrived example demonstrating the <<v2-api.adoc#top-v2-api,V2 API>> usage and additional collection creation options.
Notice that the collection creation parameters follow the v2 API naming convention, not the v1 naming conventions.
*Input*
A somewhat contrived example demonstrating the <<v2-api.adoc#top-v2-api,V2 API>> usage and additional collection creation options.
Notice that the collection creation parameters follow the v2 API naming convention, not the v1 naming conventions.
[source,json]
----
POST /api/c

View File

@@ -26,7 +26,7 @@ Solr ships with two example configsets located in `server/solr/configsets`, whic
If you are using Solr in standalone mode, configsets are created on the filesystem.
To create a configset, add a new directory under the configset base directory. The configset will be identified by the name of this directory. Then into this copy the config directory you want to share. The structure should look something like this:
To create a configset, add a new directory under the configset base directory. The configset will be identified by the name of this directory. Then into this copy the configuration directory you want to share. The structure should look something like this:
[source,bash]
----
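A minimal sketch of the procedure just described, assuming the configset base directory is `server/solr/configsets` and the new configset is to be named `myconfigset` (both names are placeholders):
[source,bash]
----
# The directory name becomes the configset name
mkdir server/solr/configsets/myconfigset
# Copy the configuration directory you want to share into it
cp -r /path/to/existing/core/conf server/solr/configsets/myconfigset/conf
----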

View File

@@ -99,7 +99,7 @@ The configset to be created when the upload is complete. This parameter is requi
The body of the request should be a zip file that contains the configset. The zip file must be created from within the `conf` directory (i.e., `solrconfig.xml` must be the top level entry in the zip file).
Here is an example on how to create the zip file named "myconfig.zip" and upload it as a config set named "myConfigSet":
Here is an example of how to create the zip file named "myconfig.zip" and upload it as a configset named "myConfigSet":
[source,bash]
----
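A hedged sketch of both steps, assuming the configuration lives in `/path/to/myconfig/conf` and Solr is running locally on the default port:
[source,bash]
----
# Build the zip from within the conf directory so solrconfig.xml is a top-level entry
cd /path/to/myconfig/conf
zip -r /tmp/myconfig.zip *
# Upload the zip as a configset named "myConfigSet"
curl -X POST --header "Content-Type:application/octet-stream" \
  --data-binary @/tmp/myconfig.zip \
  "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myConfigSet"
----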

View File

@@ -51,9 +51,9 @@ We've covered the options in the following sections:
== Substituting Properties in Solr Config Files
Solr supports variable substitution of property values in config files, which allows runtime specification of various configuration options in `solrconfig.xml`. The syntax is `${propertyname[:option default value]`}. This allows defining a default that can be overridden when Solr is launched. If a default value is not specified, then the property _must_ be specified at runtime or the configuration file will generate an error when parsed.
Solr supports variable substitution of property values in configuration files, which allows runtime specification of various configuration options in `solrconfig.xml`. The syntax is `${propertyname[:option default value]`}. This allows defining a default that can be overridden when Solr is launched. If a default value is not specified, then the property _must_ be specified at runtime or the configuration file will generate an error when parsed.
There are multiple methods for specifying properties that can be used in configuration files. Of those below, strongly consider "config overlay" as the preferred approach, as it stays local to the config set and because it's easy to modify.
There are multiple methods for specifying properties that can be used in configuration files. Of those below, strongly consider "config overlay" as the preferred approach, as it stays local to the configset and is easy to modify.
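For example, if `solrconfig.xml` contained a substitution such as `${solr.lock.type:native}` (a typical usage, assumed here rather than quoted from this page), the `native` default could be overridden at launch time with a JVM system property, as the next section describes:
[source,bash]
----
# Override the solr.lock.type property for this Solr instance
bin/solr start -Dsolr.lock.type=simple
----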
=== JVM System Properties

View File

@@ -60,7 +60,7 @@ Your CREATE call must be able to find a configuration, or it will not succeed.
When you are running SolrCloud and create a new core for a collection, the configuration will be inherited from the collection. Each collection is linked to a configName, which is stored in ZooKeeper. This satisfies the config requirement. There is something to note, though: if you're running SolrCloud, you should *NOT* be using the CoreAdmin API at all. Use the <<collections-api.adoc#collections-api,Collections API>>.
When you are not running SolrCloud, if you have <<config-sets.adoc#config-sets,Config Sets>> defined, you can use the configSet parameter as documented below. If there are no config sets, then the `instanceDir` specified in the CREATE call must already exist, and it must contain a `conf` directory which in turn must contain `solrconfig.xml`, your schema (usually named either `managed-schema` or `schema.xml`), and any files referenced by those configs.
When you are not running SolrCloud, if you have <<config-sets.adoc#config-sets,Config Sets>> defined, you can use the configSet parameter as documented below. If there are no configsets, then the `instanceDir` specified in the CREATE call must already exist, and it must contain a `conf` directory which in turn must contain `solrconfig.xml`, your schema (usually named either `managed-schema` or `schema.xml`), and any files referenced by those configs.
The config and schema filenames can be specified with the `config` and `schema` parameters, but these are expert options. One thing you could do to avoid creating the `conf` directory is use `config` and `schema` parameters that point at absolute paths, but this can lead to confusing configurations unless you fully understand what you are doing.
====
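As a rough sketch of a standalone-mode CREATE call that relies on a configset rather than a pre-existing `conf` directory (the core name and configset name here are placeholders):
[source,bash]
----
curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&configSet=_default"
----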

View File

@@ -36,7 +36,7 @@ element `<solrDataHome>` then the location of data directory will be `<SOLR_DATA
== Specifying the DirectoryFactory For Your Index
The default {solr-javadocs}/solr-core/org/apache/solr/core/NRTCachingDirectoryFactory.html[`solr.NRTCachingDirectoryFactory`] is filesystem based, and tries to pick the best implementation for the current JVM and platform. You can force a particular implementation and/or config options by specifying {solr-javadocs}/solr-core/org/apache/solr/core/MMapDirectoryFactory.html[`solr.MMapDirectoryFactory`], {solr-javadocs}/solr-core/org/apache/solr/core/NIOFSDirectoryFactory.html[`solr.NIOFSDirectoryFactory`], or {solr-javadocs}/solr-core/org/apache/solr/core/SimpleFSDirectoryFactory.html[`solr.SimpleFSDirectoryFactory`].
The default {solr-javadocs}/solr-core/org/apache/solr/core/NRTCachingDirectoryFactory.html[`solr.NRTCachingDirectoryFactory`] is filesystem based, and tries to pick the best implementation for the current JVM and platform. You can force a particular implementation and/or configuration options by specifying {solr-javadocs}/solr-core/org/apache/solr/core/MMapDirectoryFactory.html[`solr.MMapDirectoryFactory`], {solr-javadocs}/solr-core/org/apache/solr/core/NIOFSDirectoryFactory.html[`solr.NIOFSDirectoryFactory`], or {solr-javadocs}/solr-core/org/apache/solr/core/SimpleFSDirectoryFactory.html[`solr.SimpleFSDirectoryFactory`].
[source,xml]
----
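If your `solrconfig.xml` wires the factory class through a property substitution such as `${solr.directoryFactory:solr.NRTCachingDirectoryFactory}` (an assumption about your configuration, though the sample configsets do this), the implementation can also be switched at startup without editing the file:
[source,bash]
----
# Force MMapDirectoryFactory via the property referenced in solrconfig.xml
bin/solr start -Dsolr.directoryFactory=solr.MMapDirectoryFactory
----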

View File

@@ -30,7 +30,7 @@ In Lucene 4.0, a new approach was introduced. DocValue fields are now column-ori
To use docValues, you only need to enable it for a field that you will use it with. As with all schema design, you need to define a field type and then define fields of that type with docValues enabled. All of these actions are done in `schema.xml`.
Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from the `schema.xml` of Solr's `sample_techproducts_configs` <<config-sets.adoc#config-sets,config set>>:
Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from the `schema.xml` of Solr's `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>>:
[source,xml]
----
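As an alternative sketch (not taken from this page), the same `docValues="true"` setting can be applied when adding a field through the Schema API; the collection and field names below are hypothetical:
[source,bash]
----
curl -X POST -H 'Content-type:application/json' \
  --data-binary '{"add-field": {"name":"manu_exact", "type":"string", "docValues":true}}' \
  "http://localhost:8983/solr/mycollection/schema"
----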

View File

@@ -58,7 +58,7 @@ the output would be:
<float name="score">0.343</float>
...
----
* Use in a parameter that is explicitly for specifying functions, such as the eDisMax query parser's <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,`boost`>> param, or DisMax query parser's <<the-dismax-query-parser.adoc#bf-boost-functions-parameter,`bf` (boost function) parameter>>. (Note that the `bf` parameter actually takes a list of function queries separated by white space and each with an optional boost. Make sure you eliminate any internal white space in single function queries when using `bf`). For example:
* Use in a parameter that is explicitly for specifying functions, such as the eDisMax query parser's <<the-extended-dismax-query-parser.adoc#extended-dismax-parameters,`boost` parameter>>, or the DisMax query parser's <<the-dismax-query-parser.adoc#bf-boost-functions-parameter,`bf` (boost function) parameter>>. (Note that the `bf` parameter actually takes a list of function queries separated by white space and each with an optional boost. Make sure you eliminate any internal white space in single function queries when using `bf`). For example:
+
[source,text]
----
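A hedged illustration of the `bf` usage described above, using a reciprocal-of-age boost; the collection, query, and date field are placeholders (they follow Solr's techproducts example), and the function contains no internal whitespace, per the caveat above:
[source,bash]
----
curl -G "http://localhost:8983/solr/techproducts/select" \
  --data-urlencode "q=video" \
  --data-urlencode "defType=dismax" \
  --data-urlencode "qf=text" \
  --data-urlencode "bf=recip(ms(NOW,manufacturedate_dt),3.16e-11,1,1)"
----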

View File

@@ -314,7 +314,7 @@ Custom JSON Updates:: Add and update custom JSON-formatted documents.
You can see configuration for all request handlers, including the implicit request handlers, via the <<config-api.adoc#config-api,Config API>>.
To include the expanded paramset in the response, as well as the effective parameters from merging the paramset parameters with the built-in parameters, use the `expandParams` request param. For the `/export` request handler, you can make a request like this:
To include the expanded paramset in the response, as well as the effective parameters from merging the paramset parameters with the built-in parameters, use the `expandParams` request parameter. For the `/export` request handler, you can make a request like this:
[.dynamic-tabs]

View File

@@ -227,7 +227,7 @@ This parameter indicates the facet algorithm to use:
* "stream" Presently equivalent to "enum"
* "smart" Pick the best method for the field type (this is the default)
|prelim_sort |An optional parameter for specifying an approximation of the final `sort` to use during initial collection of top buckets when the <<json-facet-api.adoc#sorting-facets-by-nested-functions,`sort` param is very costly>>.
|prelim_sort |An optional parameter for specifying an approximation of the final `sort` to use during initial collection of top buckets when the <<json-facet-api.adoc#sorting-facets-by-nested-functions,`sort` parameter is very costly>>.
|===
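A sketch of how `prelim_sort` might be combined with a costly nested-function `sort` (the collection and field names are assumptions following the techproducts example): buckets are first collected using the cheap `count desc` ordering, and the final `avg(price)` ordering is applied only to the surviving top buckets.
[source,bash]
----
curl -H 'Content-Type: application/json' "http://localhost:8983/solr/techproducts/query" -d '
{
  "query": "*:*",
  "facet": {
    "categories": {
      "type": "terms",
      "field": "cat",
      "limit": 10,
      "prelim_sort": "count desc",
      "sort": "avg_price desc",
      "facet": { "avg_price": "avg(price)" }
    }
  }
}'
----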
=== Query Facet

View File

@@ -29,7 +29,7 @@ For information about language detection at index time, see <<detecting-language
Protects words from being modified by stemmers. A customized protected word list may be specified with the "protected" attribute in the schema. Any words in the protected word list will not be modified by any stemmer in Solr.
A sample Solr `protwords.txt` with comments can be found in the `sample_techproducts_configs` <<config-sets.adoc#config-sets,config set>> directory:
A sample Solr `protwords.txt` with comments can be found in the `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> directory:
[source,xml]
----
@@ -1267,7 +1267,7 @@ Removes terms with one of the configured parts-of-speech. `JapaneseTokenizer` an
*Arguments:*
`tags`:: filename for a list of parts-of-speech for which to remove terms; see `conf/lang/stoptags_ja.txt` in the `sample_techproducts_config` <<config-sets.adoc#config-sets,config set>> for an example.
`tags`:: filename for a list of parts-of-speech for which to remove terms; see `conf/lang/stoptags_ja.txt` in the `sample_techproducts_config` <<config-sets.adoc#config-sets,configset>> for an example.
`enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens. *This argument is invalid if `luceneMatchVersion` is `5.0` or later.*

View File

@@ -167,7 +167,7 @@ The following changes were made in SolrJ.
* `HttpClientInterceptorPlugin` is now `HttpClientBuilderPlugin` and must work with a `SolrHttpClientBuilder` rather than an `HttpClientConfigurer`.
* `HttpClientUtil` now allows configuring `HttpClient` instances via `SolrHttpClientBuilder` rather than an `HttpClientConfigurer`. Use of env variable `SOLR_AUTHENTICATION_CLIENT_CONFIGURER` no longer works, please use `SOLR_AUTHENTICATION_CLIENT_BUILDER`
* `SolrClient` implementations now use their own internal configuration for socket timeouts, connect timeouts, and allowing redirects rather than what is set as the default when building the `HttpClient` instance. Use the appropriate setters on the `SolrClient` instance.
* `HttpSolrClient#setAllowCompression` has been removed and compression must be enabled as a constructor param.
* `HttpSolrClient#setAllowCompression` has been removed and compression must be enabled as a constructor parameter.
* `HttpSolrClient#setDefaultMaxConnectionsPerHost` and `HttpSolrClient#setMaxTotalConnections` have been removed. These now default very high and can only be changed via parameter when creating an HttpClient instance.
=== Other Deprecations and Removals

View File

@@ -231,4 +231,4 @@ Example `solr.xml` section to configure a repository like <<running-solr-on-hdfs
</backup>
----
Better throughput might be achieved by increasing buffer size with `<int name="solr.hdfs.buffer.size">262144</int>`. Buffer size is specified in bytes, by default it's 4KB.
Better throughput might be achieved by increasing buffer size with `<int name="solr.hdfs.buffer.size">262144</int>`. Buffer size is specified in bytes; by default it is 4096 bytes (4KB).

View File

@@ -200,7 +200,7 @@ Good query selection is key with this type of listener. It's best to choose your
There are two types of events that can trigger a listener. A `firstSearcher` event occurs when a new searcher is being prepared but there is no current registered searcher to handle requests or to gain auto-warming data from (i.e., on Solr startup). A `newSearcher` event is fired whenever a new searcher is being prepared and there is a current searcher handling requests.
The (commented out) examples below can be found in the `solrconfig.xml` file of the `sample_techproducts_configs` <<config-sets.adoc#config-sets,config set>>included with Solr, and demonstrate using the `solr.QuerySenderListener` class to warm a set of explicit queries:
The (commented out) examples below can be found in the `solrconfig.xml` file of the `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> included with Solr, and demonstrate using the `solr.QuerySenderListener` class to warm a set of explicit queries:
[source,xml]
----

View File

@@ -118,7 +118,7 @@ Solr ships with many out-of-the-box request handlers that may only be configured
=== Viewing Expanded Paramsets and Effective Parameters with RequestHandlers
To see the expanded paramset and the resulting effective parameters for a RequestHandler defined with `useParams`, use the `expandParams` request param. As an example, for the `/export` request handler:
To see the expanded paramset and the resulting effective parameters for a RequestHandler defined with `useParams`, use the `expandParams` request parameter. As an example, for the `/export` request handler:
[source,bash]
----
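A hedged sketch of such a request, assuming a collection named `techproducts` whose `/export` handler is defined with `useParams`:
[source,bash]
----
curl "http://localhost:8983/solr/techproducts/export?expandParams=true"
----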
@@ -155,7 +155,7 @@ It is possible to pass more than one parameter set in the same request. For exam
http://localhost/solr/techproducts/select?useParams=myFacets,myQueries
----
In the above example the param set 'myQueries' is applied on top of 'myFacets'. So, values in 'myQueries' take precedence over values in 'myFacets'. Additionally, any values passed in the request take precedence over `useParams` parameters. This acts like the "defaults" specified in the `<requestHandler>` definition in `solrconfig.xml`.
In the above example the parameter set 'myQueries' is applied on top of 'myFacets'. So, values in 'myQueries' take precedence over values in 'myFacets'. Additionally, any values passed in the request take precedence over `useParams` parameters. This acts like the "defaults" specified in the `<requestHandler>` definition in `solrconfig.xml`.
The parameter sets can be used directly in a request handler definition as follows. Please note that the `useParams` specified is always applied even if the request contains `useParams`.

View File

@@ -167,7 +167,7 @@ The Content-Type of the response is set according to the `<xsl:output>` statemen
=== XSLT Configuration
The example below, from the `sample_techproducts_configs` <<response-writers.adoc#response-writers,config set>> in the Solr distribution, shows how the XSLT Response Writer is configured.
The example below, from the `sample_techproducts_configs` <<response-writers.adoc#response-writers,configset>> in the Solr distribution, shows how the XSLT Response Writer is configured.
[source,xml]
----
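Once configured, the writer is typically invoked per request with `wt=xslt` plus a `tr` parameter naming a stylesheet under `conf/xslt`; the collection and stylesheet names below are assumptions for illustration:
[source,bash]
----
curl "http://localhost:8983/solr/techproducts/select?q=*:*&wt=xslt&tr=example.xsl"
----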

View File

@@ -26,7 +26,7 @@ These Solr features, all controlled via `solrconfig.xml`, are:
== Using the Schemaless Example
The three features of schemaless mode are pre-configured in the `_default` <<config-sets.adoc#config-sets,config set>> in the Solr distribution. To start an example instance of Solr using these configs, run the following command:
The three features of schemaless mode are pre-configured in the `_default` <<config-sets.adoc#config-sets,configset>> in the Solr distribution. To start an example instance of Solr using these configs, run the following command:
[source,bash]
----
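A sketch of one way to do this, assuming the bundled `schemaless` example (one of the choices accepted by `bin/solr start -e`) is what you want to run:
[source,bash]
----
bin/solr start -e schemaless
----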
@@ -66,7 +66,7 @@ You can use the `/schema/fields` <<schema-api.adoc#schema-api,Schema API>> to co
== Configuring Schemaless Mode
As described above, there are three configuration elements that need to be in place to use Solr in schemaless mode. In the `_default` config set included with Solr these are already configured. If, however, you would like to implement schemaless on your own, you should make the following changes.
As described above, there are three configuration elements that need to be in place to use Solr in schemaless mode. In the `_default` configset included with Solr these are already configured. If, however, you would like to implement schemaless on your own, you should make the following changes.
=== Enable Managed Schema

View File

@@ -698,7 +698,7 @@ Use the `zk downconfig` command to download a configuration set from ZooKeeper t
All parameters listed below are required.
`-n <name>`::
Name of config set in ZooKeeper to download. The Admin UI Cloud \-> Tree \-> configs node lists all available configuration sets.
Name of the configset in ZooKeeper to download. The Admin UI Cloud \-> Tree \-> configs node lists all available configuration sets.
+
*Example*: `-n myconfig`
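A hedged sketch of a complete `zk downconfig` invocation; the ZooKeeper address and target directory are placeholders:
[source,bash]
----
bin/solr zk downconfig -n myconfig -d /path/to/save/configset -z localhost:2181
----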

View File

@@ -322,7 +322,7 @@ The response indicates that there are 4 hits (`"numFound":4`). We've only includ
Note the `responseHeader` before the documents. This header will include the parameters you have set for the search. By default it shows only the parameters _you_ have set for this query, which in this case is only your query term.
The documents we got back include all the fields for each document that were indexed. This is, again, default behavior. If you want to restrict the fields in the response, you can use the `fl` param, which takes a comma-separated list of field names. This is one of the available fields on the query form in the Admin UI.
The documents we got back include all the fields for each document that were indexed. This is, again, default behavior. If you want to restrict the fields in the response, you can use the `fl` parameter, which takes a comma-separated list of field names. This is one of the available fields on the query form in the Admin UI.
Put "id" (without quotes) in the "fl" box and hit btn:[Execute Query] again. Or, to specify it with curl:
@@ -706,7 +706,7 @@ On the Admin UI Query tab, if you check the `facet` checkbox, you'll see a few f
.Facet options in the Query screen
image::images/solr-tutorial/tutorial-admin-ui-facet-options.png[Solr Quick Start: Query tab facet options]
To see facet counts from all documents (`q=\*:*`): turn on faceting (`facet=true`), and specify the field to facet on via the `facet.field` param. If you only want facets, and no document contents, specify `rows=0`. The `curl` command below will return facet counts for the `genre_str` field:
To see facet counts from all documents (`q=\*:*`): turn on faceting (`facet=true`), and specify the field to facet on via the `facet.field` parameter. If you only want facets, and no document contents, specify `rows=0`. The `curl` command below will return facet counts for the `genre_str` field:
`curl "http://localhost:8983/solr/films/select?q=\*:*&rows=0&facet=true&facet.field=genre_str"`

View File

@@ -216,5 +216,5 @@ The following properties are available in context and can be referenced from tem
----
This configuration specifies that each time one of the listed stages is reached, or before and after each of the listed
actions is executed, the listener will send the templated payload to a URL that also depends on the config and the current event,
actions is executed, the listener will send the templated payload to a URL that also depends on the configuration and the current event,
and with a custom header that indicates the trigger name.

View File

@@ -374,7 +374,7 @@ This trigger calculates node-level cumulative rates using per-replica rates repo
replicas that are part of monitored collections / shards on each node. This means that it may report
some nodes as "cold" (underutilized) because it ignores other, perhaps more active, replicas
belonging to other collections. Also, nodes that don't host any of the monitored replicas or
those that are explicitly excluded by `node` config property won't be reported at all.
those that are explicitly excluded by `node` configuration property won't be reported at all.
.Calculating `waitFor`
[CAUTION]

View File

@@ -1163,7 +1163,7 @@ rollup(
)
----
The example about shows the rollup function wrapping the search function. Notice that search function is using the `/export` handler to provide the entire result set to the rollup stream. Also notice that the search function's *sort param* matches up with the rollup's `over` parameter. This allows the rollup function to rollup the over the `a_s` field, one group at a time.
The example above shows the rollup function wrapping the search function. Notice that the search function is using the `/export` handler to provide the entire result set to the rollup stream. Also notice that the search function's `sort` parameter matches up with the rollup's `over` parameter. This allows the rollup function to roll up over the `a_s` field, one group at a time.
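A hedged sketch of sending such an expression to the `/stream` handler with curl; the `a_s` sort/group field follows the text above, while the collection name, the `a_i` field, and the metrics are invented for illustration. The key point is that the search `sort` and the rollup `over` both use `a_s`:
[source,bash]
----
curl --data-urlencode 'expr=rollup(
  search(collection1, q="*:*", qt="/export", fl="a_s,a_i", sort="a_s asc"),
  over="a_s",
  count(*),
  sum(a_i))' \
  "http://localhost:8983/solr/collection1/stream"
----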
== scoreNodes

View File

@@ -183,8 +183,8 @@ facet(collection1,
count(*))
----
The example above shows a facet function with rollups over three buckets, where the buckets are returned in descending order by bucket value.
The rows param returns 10 rows and the offset param starts returning rows from the 20th row.
The example above shows a `facet` function with rollups over three buckets, where the buckets are returned in descending order by bucket value.
The `rows` parameter returns 10 rows and the `offset` parameter starts returning rows from the 20th row.
== features

View File

@@ -131,7 +131,7 @@ You can boost results that have a field that matches a specific value:
http://localhost:8983/solr/techproducts/select?q=video&defType=edismax&qf=features^20.0+text^0.3&bq=cat:electronics^5.0
----
Using the "mm" param, 1 and 2 word queries require that all of the optional clauses match, but for queries with three or more clauses one missing clause is allowed:
Using the `mm` parameter, 1 and 2 word queries require that all of the optional clauses match, but for queries with three or more clauses one missing clause is allowed:
[source,text]
----
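A hedged sketch of such a request: `mm=2<-1` means one- and two-clause queries require every clause to match, while longer queries may drop one clause; the collection, query terms, and `qf` field are placeholders.
[source,bash]
----
curl -G "http://localhost:8983/solr/techproducts/select" \
  --data-urlencode "q=memory ipod gibberish" \
  --data-urlencode "defType=edismax" \
  --data-urlencode "qf=text" \
  --data-urlencode "mm=2<-1"
----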

View File

@@ -215,7 +215,7 @@ The status field will be non-zero in case of failure.
=== Using XSLT to Transform XML Index Updates
The UpdateRequestHandler allows you to index any arbitrary XML using the `<tr>` parameter to apply an https://en.wikipedia.org/wiki/XSLT[XSL transformation]. You must have an XSLT stylesheet in the `conf/xslt` directory of your <<config-sets.adoc#config-sets,config set>> that can transform the incoming data to the expected `<add><doc/></add>` format, and use the `tr` parameter to specify the name of that stylesheet.
The UpdateRequestHandler allows you to index any arbitrary XML using the `<tr>` parameter to apply an https://en.wikipedia.org/wiki/XSLT[XSL transformation]. You must have an XSLT stylesheet in the `conf/xslt` directory of your <<config-sets.adoc#config-sets,configset>> that can transform the incoming data to the expected `<add><doc/></add>` format, and use the `tr` parameter to specify the name of that stylesheet.
Here is an example XSLT stylesheet:

View File

@@ -55,11 +55,11 @@ Append `/_introspect` to any valid v2 API path and the API specification will be
`\http://localhost:8983/api/c/_introspect`
To limit the introspect output to include just one particular HTTP method, add request param `method` with value `GET`, `POST`, or `DELETE`.
To limit the introspect output to include just one particular HTTP method, add the request parameter `method` with value `GET`, `POST`, or `DELETE`.
`\http://localhost:8983/api/c/_introspect?method=POST`
Most endpoints support commands provided in a body sent via POST. To limit the introspect output to only one command, add request param `command=_command-name_`.
Most endpoints support commands provided in a body sent via POST. To limit the introspect output to only one command, add the request parameter `command=_command-name_`.
`\http://localhost:8983/api/c/gettingstarted/_introspect?method=POST&command=modify`

View File

@@ -18,7 +18,7 @@
Solr includes a sample search UI based on the <<response-writers.adoc#velocity-writer,VelocityResponseWriter>> (also known as Solritas) that demonstrates several useful features, such as searching, faceting, highlighting, autocomplete, and geospatial searching.
When using the `sample_techproducts_configs` config set, you can access the Velocity sample Search UI: `\http://localhost:8983/solr/techproducts/browse`
When using the `sample_techproducts_configs` configset, you can access the Velocity sample Search UI: `\http://localhost:8983/solr/techproducts/browse`
.The Velocity Search UI
image::images/velocity-search-ui/techproducts_browse.png[image,width=500]

View File

@@ -97,7 +97,7 @@ There are two scripts that impact ZooKeeper ACLs:
* For *nix systems: `bin/solr` & `server/scripts/cloud-scripts/zkcli.sh`
* For Windows systems: `bin/solr.cmd` & `server/scripts/cloud-scripts/zkcli.bat`
These Solr scripts can enable use of ZK ACLs by setting the appropriate system properties: uncomment the following and replace the passwords with ones you choose to enable the above-described VM parameters ACL and credentials providers in the following files:
These Solr scripts can enable use of ZooKeeper ACLs by setting the appropriate system properties: uncomment the following and replace the passwords with ones you choose to enable the above-described VM parameters ACL and credentials providers in the following files:
.solr.in.sh
[source,bash]