mirror of https://github.com/apache/lucene.git
SOLR-12927: copy edits (i.e., e.g., capitalized titles, etc.)
This commit is contained in:
parent 8791a38d75
commit 92c83264c8
@@ -278,7 +278,7 @@ All parameters must be the same type after implicit casting is done.
 === Fill Missing
 
 If the 1^st^ expression does not have values, fill it with the values for the 2^nd^ expression.
-Both expressions must be of the same type and cardinality after implicit casting is done
+Both expressions must be of the same type and cardinality after implicit casting is done.
 
 `fill_missing(< T >, < T >)` \=> `< T >`::
 * `fill_missing([], 3)` \=> `[3]`

@@ -287,7 +287,7 @@ Both expressions must be of the same type and cardinality after implicit casting
 === Remove
 
 Remove all occurrences of the 2^nd^ expression's value from the values of the 1^st^ expression.
-Both expressions must be of the same type after implicit casting is done
+Both expressions must be of the same type after implicit casting is done.
 
 `remove(< T >, < _Single_ T >)` \=> `< T >`::
 * `remove([1,2,3,2], 2)` \=> `[1, 3]`

@@ -43,7 +43,7 @@ image::images/cloud-screens/cloud-tree.png[image,width=487,height=250]
 As an aid to debugging, the data shown in the "Tree" view can be exported locally using the following command `bin/solr zk ls -r /`
 
 == ZK Status View
 
-The "ZK Status" view gives an overview over the Zookeepers used by Solr. It lists whether running in `standalone` or `ensemble` mode, shows how many zookeepers are configured, and then displays a table listing detailed monitoring status for each of the zookeepers, including who is the leader, configuration parameters and more.
+The "ZK Status" view gives an overview over the ZooKeeper servers or ensemble used by Solr. It lists whether running in `standalone` or `ensemble` mode, shows how many zookeepers are configured, and then displays a table listing detailed monitoring status for each of the zookeepers, including who is the leader, configuration parameters and more.
 
 image::images/cloud-screens/cloud-zkstatus.png[image,width=512,height=509]

@@ -17,8 +17,7 @@
 // under the License.
 
 
-This section of the math expressions user guide covers computational geometry
-functions.
+This section of the math expressions user guide covers computational geometry functions.
 
 == Convex Hull

@@ -235,7 +235,7 @@ Multi-valued, directories that would be merged.
 Multi-valued, source cores that would be merged.
 
 `async`::
-Request ID to track this action which will be processed asynchronously
+Request ID to track this action which will be processed asynchronously.
 
 [[coreadmin-split]]

@@ -72,7 +72,7 @@ The properties that can be specified for a given field type fall into three majo
 
 === General Properties
 
-These are the general properties for fields
+These are the general properties for fields:
 
 `name`::
 The name of the fieldType. This value gets used in field definitions, in the "type" attribute. It is strongly recommended that names consist of alphanumeric or underscore characters only and not start with a digit. This is not currently strictly enforced.

@@ -40,7 +40,7 @@ For example, here is one of the `<initParams>` sections defined by default in th
 
 This sets the default search field ("df") to be "_text_" for all of the request handlers named in the path section. If we later want to change the `/query` request handler to search a different field by default, we could override the `<initParams>` by defining the parameter in the `<requestHandler>` section for `/query`.
 
-The syntax and semantics are similar to that of a `<requestHandler>`. The following are the attributes
+The syntax and semantics are similar to that of a `<requestHandler>`. The following are the attributes:
 
 `path`::
 A comma-separated list of paths which will use the parameters. Wildcards can be used in paths to define nested paths, as described below.

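For reference, an `<initParams>` section of the kind this hunk describes looks roughly like the following sketch; the path list and `_text_` default mirror Solr's example configsets but are illustrative here:

[source,xml]
----
<initParams path="/update/**,/query,/select,/spell">
  <!-- defaults applied to every handler matched by the path list -->
  <lst name="defaults">
    <str name="df">_text_</str>
  </lst>
</initParams>
----
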
@@ -183,7 +183,7 @@ include::{example-source-dir}JsonRequestApiTest.java[tag=solrj-json-terms-facet2
 
 === JSON Extensions
 
-The *Noggit* JSON parser that is used by Solr accepts a number of JSON extensions such as
+The *Noggit* JSON parser that is used by Solr accepts a number of JSON extensions such as,
 
 * bare words can be left unquoted
 * single line comments using either `//` or `#`

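As a rough illustration of those extensions (the field names are assumptions, not from this commit), a request body the parser would accept might look like:

[source,text]
----
{
  query : "memory",        // bare-word key, single line comment
  limit : 10,              # hash comments work too
  filter : ["inStock:true"]
}
----
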
@@ -153,7 +153,7 @@ include::{example-source-dir}JsonRequestApiTest.java[tag=solrj-json-query-params
 ====
 --
 
-Which is equivalent to
+Which is equivalent to:
 
 [source,bash]
 curl "http://localhost:8983/solr/techproducts/query?fl=name,price&q=memory&rows=1"

@@ -78,11 +78,11 @@ Please review the <<schema-factory-definition-in-solrconfig.adoc#schema-factory-
 
 Solr's default behavior when a Schema does not explicitly define a global <<other-schema-elements.adoc#other-schema-elements,`<similarity/>`>> is now dependent on the `luceneMatchVersion` specified in the `solrconfig.xml`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarityFactory` will be used, otherwise an instance of `SchemaSimilarityFactory` will be used. Most notably this change means that users can take advantage of per Field Type similarity declarations, without needing to also explicitly declare a global usage of `SchemaSimilarityFactory`.
 
-Regardless of whether it is explicitly declared, or used as an implicit global default, `SchemaSimilarityFactory` 's implicit behavior when a Field Types do not declare an explicit `<similarity />` has also been changed to depend on the the `luceneMatchVersion`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarity` will be used, otherwise an instance of `BM25Similarity` will be used. A `defaultSimFromFieldType` init option may be specified on the `SchemaSimilarityFactory` declaration to change this behavior. Please review the `SchemaSimilarityFactory` javadocs for more details
+Regardless of whether it is explicitly declared, or used as an implicit global default, `SchemaSimilarityFactory` 's implicit behavior when a Field Types do not declare an explicit `<similarity />` has also been changed to depend on the the `luceneMatchVersion`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarity` will be used, otherwise an instance of `BM25Similarity` will be used. A `defaultSimFromFieldType` init option may be specified on the `SchemaSimilarityFactory` declaration to change this behavior. Please review the `SchemaSimilarityFactory` javadocs for more details.
 
 == Replica & Shard Delete Command Changes
 
-DELETESHARD and DELETEREPLICA now default to deleting the instance directory, data directory, and index directory for any replica they delete. Please review the <<collections-api.adoc#collections-api,Collection API>> documentation for details on new request parameters to prevent this behavior if you wish to keep all data on disk when using these commands
+DELETESHARD and DELETEREPLICA now default to deleting the instance directory, data directory, and index directory for any replica they delete. Please review the <<collections-api.adoc#collections-api,Collection API>> documentation for details on new request parameters to prevent this behavior if you wish to keep all data on disk when using these commands.
 
 == facet.date.* Parameters Removed

@@ -202,13 +202,13 @@ http://localhost:8983/solr/admin/cores?action=DELETESNAPSHOT&core=techproducts&c
 The delete snapshot request parameters are:
 
 `commitName`::
-Specify the commit name to be deleted
+Specify the commit name to be deleted.
 
 `core`::
-The name of the core whose snapshot we want to delete
+The name of the core whose snapshot we want to delete.
 
 `async`::
-Request ID to track this action which will be processed asynchronously
+Request ID to track this action which will be processed asynchronously.
 
 == Backup/Restore Storage Repositories

@@ -83,7 +83,7 @@ $ ./bin/solr-exporter -p 9854 -z localhost:2181/solr -f ./conf/solr-exporter-con
 
 === Command Line Parameters
 
-The parameters in the example start commands shown above
+The parameters in the example start commands shown above:
 
 `h`, `--help`::
 Displays command line help and usage.

@@ -93,7 +93,7 @@ In addition to returning the top N sorted results (where you can control N using
 
 === Constraints when using Cursors
 
-There are a few important constraints to be aware of when using `cursorMark` parameter in a Solr request
+There are a few important constraints to be aware of when using `cursorMark` parameter in a Solr request.
 
 . `cursorMark` and `start` are mutually exclusive parameters.
 * Your requests must either not include a `start` parameter, or it must be specified with a value of "```0```".

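For illustration, a first cursor request (collection name assumed) sets `cursorMark=*`, omits `start`, and includes the uniqueKey field as a sort tie-breaker:

[source,bash]
curl "http://localhost:8983/solr/techproducts/select?q=*:*&rows=10&sort=id+asc&cursorMark=*"
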
@@ -45,7 +45,7 @@ This command will ping the core name for a response.
 http://localhost:8983/solr/<collection-name>/admin/ping?distrib=true&wt=xml
 ----
 
-This command will ping all replicas of the given collection name for a response
+This command will ping all replicas of the given collection name for a response:
 
 *Sample Output*

@@ -16,8 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-This section of the user guide covers the
-probability distribution
+This section of the user guide covers the probability distribution
 framework included in the math expressions library.
 
 == Probability Distribution Framework

@@ -313,7 +313,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 
 The `add-field-type` command adds a new field type to your schema.
 
-All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request. The structure of the command is a json mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc. Details of all of the available options are described in the section <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
+All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request. The structure of the command is a JSON mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc. Details of all of the available options are described in the section <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
 
 For example, to create a new field type named "myNewTxtField", you can POST a request as follows:

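A sketch of such a request, assuming a collection named `gettingstarted` and a minimal whitespace-tokenized text type:

[source,bash]
----
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field-type" : {
    "name":"myNewTxtField",
    "class":"solr.TextField",
    "positionIncrementGap":"100",
    "analyzer" : {
      "tokenizer":{ "class":"solr.WhitespaceTokenizerFactory" }}}
}' http://localhost:8983/solr/gettingstarted/schema
----
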
@@ -426,7 +426,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 
 The `replace-field-type` command replaces a field type in your schema. Note that you must supply the full definition for a field type - this command will *not* partially modify a field type's definition. If the field type does not exist in the schema an error is thrown.
 
-All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request. The structure of the command is a json mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc. Details of all of the available options are described in the section <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
+All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request. The structure of the command is a JSON mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc. Details of all of the available options are described in the section <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
 
 For example, to replace the definition of a field type named "myNewTxtField", you can make a POST request as follows:

@@ -1187,7 +1187,7 @@ The output will simply be the schema version in use.
 
 ==== Show Schema Version Example
 
-Get the schema version
+Get the schema version:
 
 [source,bash]
 ----

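For illustration (collection name assumed), the version request is a simple GET against the Schema API:

[source,bash]
curl http://localhost:8983/solr/gettingstarted/schema/version
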
@@ -770,7 +770,7 @@ Copy a single file from ZooKeeper to local.
 
 === Remove a znode from ZooKeeper
 
-Use the `zk rm` command to remove a znode (and optionally all child nodes) from ZooKeeper
+Use the `zk rm` command to remove a znode (and optionally all child nodes) from ZooKeeper.
 
 ==== ZK Remove Parameters

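An illustrative invocation, assuming a ZooKeeper ensemble at `localhost:2181` and a hypothetical znode path:

[source,bash]
bin/solr zk rm -r /configs/myconfig -z localhost:2181
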
@@ -806,7 +806,7 @@ Examples of this command with the parameters are:
 
 === Move One ZooKeeper znode to Another (Rename)
 
-Use the `zk mv` command to move (rename) a ZooKeeper znode
+Use the `zk mv` command to move (rename) a ZooKeeper znode.
 
 ==== ZK Move Parameters

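An illustrative invocation, with assumed source and destination paths and ensemble address:

[source,bash]
bin/solr zk mv /configs/current /configs/old -z localhost:2181
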
@@ -585,7 +585,7 @@ First, we are using a "managed schema", which is configured to only be modified
 
 Second, we are using "field guessing", which is configured in the `solrconfig.xml` file (and includes most of Solr's various configuration settings). Field guessing is designed to allow us to start using Solr without having to define all the fields we think will be in our documents before trying to index them. This is why we call it "schemaless", because you can start quickly and let Solr create fields for you as it encounters them in documents.
 
-Sounds great! Well, not really, there are limitations. It's a bit brute force, and if it guesses wrong, you can't change much about a field after data has been indexed without having to reindex. If we only have a few thousand documents that might not be bad, but if you have millions and millions of documents, or, worse, don't have access to the original data anymore, this can be a real problem.
+Sounds great! Well, not really, there are limitations. It's a bit brute force, and if it guesses wrong, you can't change much about a field after data has been indexed without having to re-index. If we only have a few thousand documents that might not be bad, but if you have millions and millions of documents, or, worse, don't have access to the original data anymore, this can be a real problem.
 
 For these reasons, the Solr community does not recommend going to production without a schema that you have defined yourself. By this we mean that the schemaless features are fine to start with, but you should still always make sure your schema matches your expectations for how you want your data indexed and how users are going to query it.

@@ -936,7 +936,7 @@ Go ahead and edit any of the existing example data files, change some of the dat
 
 === Deleting Data
 
-If you need to iterate a few times to get your schema right, you may want to delete documents to clear out the collection and try again. Note, however, that merely removing documents doesn't change the underlying field definitions. Essentially, this will allow you to reindex your data after making changes to fields for your needs.
+If you need to iterate a few times to get your schema right, you may want to delete documents to clear out the collection and try again. Note, however, that merely removing documents doesn't change the underlying field definitions. Essentially, this will allow you to re-index your data after making changes to fields for your needs.
 
 You can delete data by POSTing a delete command to the update URL and specifying the value of the document's unique key field, or a query that matches multiple documents (be careful with that one!). We can use `bin/post` to delete documents also if we structure the request properly.

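A sketch of the `bin/post` approach mentioned above, with an assumed collection name; the query shown deletes every document, so use it carefully:

[source,bash]
bin/post -c gettingstarted -d "<delete><query>*:*</query></delete>"
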
@@ -123,7 +123,7 @@ When upgrading to Solr 7.4, users should be aware of the following major changes
 
 *Logging*
 
-* Solr now uses Log4j v2.11. The Log4j configuration is now in `log4j2.xml` rather than `log4j.properties` files. This is a server side change only and clients using SolrJ won't need any changes. Clients can still use any logging implementation which is compatible with SLF4J. We now let Log4j handle rotation of solr logs at startup, and `bin/solr` start scripts will no longer attempt this nor move existing console or garbage collection logs into `logs/archived` either. See <<configuring-logging.adoc#configuring-logging,Configuring Logging>> for more details about Solr logging.
+* Solr now uses Log4j v2.11. The Log4j configuration is now in `log4j2.xml` rather than `log4j.properties` files. This is a server side change only and clients using SolrJ won't need any changes. Clients can still use any logging implementation which is compatible with SLF4J. We now let Log4j handle rotation of Solr logs at startup, and `bin/solr` start scripts will no longer attempt this nor move existing console or garbage collection logs into `logs/archived` either. See <<configuring-logging.adoc#configuring-logging,Configuring Logging>> for more details about Solr logging.
 
 * Configuring `slowQueryThresholdMillis` now logs slow requests to a separate file named `solr_slow_requests.log`. Previously they would get logged in the `solr.log` file.

@@ -155,12 +155,12 @@ The `replica` attribute value can be specified in one of the following forms:
 
 You can specify one of the following as the value of a `replica` and `cores` policy rule attribute:
 
-* an exact integer (e.g. `2`)
-* an exclusive lower integer bound (e.g. `>0`)
-* an exclusive upper integer bound (e.g. `<3`)
+* an exact integer (e.g., `2`)
+* an exclusive lower integer bound (e.g., `>0`)
+* an exclusive upper integer bound (e.g., `<3`)
 * a decimal value, interpreted as an acceptable range of core counts, from the floor of the value to the ceiling of the value, with the system preferring the rounded value (e.g., `1.6`: `1` or `2` is acceptable, and `2` is preferred)
-* a <<range-operator,range>> of acceptable replica/core counts, as inclusive lower and upper integer bounds separated by a hyphen (e.g. `3-5`)
-* a percentage (e.g. `33%`), which is multiplied at runtime either by the number of <<Replica Selector and Rule Evaluation Context,selected replicas>> (for a `replica` constraint) or the number of cores in the cluster (for a `cores` constraint). This value is then interpreted as described above for a literal decimal value.
+* a <<range-operator,range>> of acceptable replica/core counts, as inclusive lower and upper integer bounds separated by a hyphen (e.g., `3-5`)
+* a percentage (e.g., `33%`), which is multiplied at runtime either by the number of <<Replica Selector and Rule Evaluation Context,selected replicas>> (for a `replica` constraint) or the number of cores in the cluster (for a `cores` constraint). This value is then interpreted as described above for a literal decimal value.
 
 NOTE: Using an exact integer value for count constraints is of limited utility, since collection or cluster changes could quickly invalidate them. For example, attempting to add a third replica to each shard of a collection on a two-node cluster with policy rule `{"replica":1, "shard":"#EACH", "node":"#ANY"}` would cause a violation, since at least one node would have to host more than one replica. Percentage rules are less brittle. Rewriting the rule as `{"replica":"50%", "shard":"#EACH", "node":"#ANY"}` eliminates the violation: `50% of 3 replicas = 1.5 replicas per node`, meaning that it's acceptable for a node to host either one or two replicas of each shard.

@@ -213,7 +213,7 @@ The port of the node to which the rule should apply. The <<not-operator,`!` (no
 
 [[freedisk-attribute]]
 `freedisk`::
-The free disk space in gigabytes of the node. This must be a positive 64-bit integer value, or a <<percentage-function,percentage>>. If a percentage is specified, either an upper or lower bound may also be specified using the `<` or `>` operators, respectively, e.g. `>50%`, `<25%`.
+The free disk space in gigabytes of the node. This must be a positive 64-bit integer value, or a <<percentage-function,percentage>>. If a percentage is specified, either an upper or lower bound may also be specified using the `<` or `>` operators, respectively, e.g., `>50%`, `<25%`.
 
 [[host-attribute]]
 `host`::

@@ -277,7 +277,7 @@ This supports values calculated at the time of execution.
 * [[all-function]]`#ALL`: Applies to the <<replica-attribute,`replica` attribute>> only. This means all replicas that meet the rule condition.
 * [[each-function]]`#EACH`: Applies to the <<shard-attribute,`shard` attribute>> (meaning the rule should be evaluated separately for each shard), and to the attributes used to define the buckets for the <<equal-function,#EQUAL function>> (meaning all possible values for the bucket-defining attribute).
 * [[equal-function]]`#EQUAL`: Applies to the <<replica-attribute,`replica`>> and <<cores-attribute,`cores`>> attributes only. This means an equal number of replicas/cores in each bucket. The buckets can be defined using the below attributes with a value that can either be <<each-function,`#EACH`>> or a list specified with the <<array-operator,array operator (`[]`)>>:
-** <<node-attribute,`node`>> \<- <<Rule Types,global rules>>, i.e. those with the <<cores-attribute,`cores` attribute>>, may only specify this attribute
+** <<node-attribute,`node`>> \<- <<Rule Types,global rules>>, i.e., those with the <<cores-attribute,`cores` attribute>>, may only specify this attribute
 ** <<sysprop-attribute,`sysprop.*`>>
 ** <<port-attribute,`port`>>
 ** <<diskType-attribute,`diskType`>>

@@ -22,7 +22,8 @@
 
 The `cartesianProduct` function turns a single tuple with a multi-valued field (i.e., an array) into multiple tuples, one for each value in the array field. That is, given a single tuple containing an array of N values for fieldA, the `cartesianProduct` function will output N tuples, each with one value from the original tuple's array. In essence, you can flatten arrays for further processing.
 
-For example, using `cartesianProduct` you can turn this tuple
+For example, using `cartesianProduct` you can turn this tuple:
 
 [source,text]
 ----
 {

@@ -31,7 +32,8 @@ For example, using `cartesianProduct` you can turn this tuple
 }
 ----
 
-into the following 3 tuples
+into the following 3 tuples:
 
 [source,text]
 ----
 {

@@ -67,7 +69,7 @@ cartesianProduct(
 
 === cartesianProduct Examples
 
-The following examples show different outputs for this source tuple
+The following examples show different outputs for this source tuple:
 
 [source,text]
 ----

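For orientation, a minimal `cartesianProduct` expression of the kind these examples build on, with assumed collection and field names:

[source,text]
----
cartesianProduct(
  search(collection1, q="*:*", fl="id,fieldA", sort="id asc"),
  fieldA,
  productSort="fieldA asc"
)
----
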
@@ -212,9 +212,9 @@ A document that contains "Hans Anderson" will match, but a document that contain
 
 Finally, in addition to the phrase fields (`pf`) parameter, `edismax` also supports the `pf2` and `pf3` parameters, for fields over which to create bigram and trigram phrase queries. The phrase slop for these parameters' queries can be specified using the `ps2` and `ps3` parameters, respectively. If you use `pf2`/`pf3` but not `ps2`/`ps3`, then the phrase slop for these parameters' queries will be taken from the `ps` parameter, if any.
 
-=== Synonyms expansion in phrase queries with slop
+=== Synonyms Expansion in Phrase Queries with Slop
 
-When a phrase query with slop (e.g. `pf` with `ps`) triggers synonym expansions, a separate clause will be generated for each combination of synonyms. For example, with configured synonyms `dog,canine` and `cat,feline`, the query `"dog chased cat"` will generate the following phrase query clauses:
+When a phrase query with slop (e.g., `pf` with `ps`) triggers synonym expansions, a separate clause will be generated for each combination of synonyms. For example, with configured synonyms `dog,canine` and `cat,feline`, the query `"dog chased cat"` will generate the following phrase query clauses:
 
 * `"dog chased cat"`
 * `"canine chased cat"`

@@ -416,8 +416,8 @@ There are two restrictions: wildcards can only be used at the end of the `json-p
 A single asterisk `\*` maps only to direct children, and a double asterisk `**` maps recursively to all descendants. The following are example wildcard path mappings:
 
 * `f=$FQN:/**`: maps all fields to the fully qualified name (`$FQN`) of the JSON field. The fully qualified name is obtained by concatenating all the keys in the hierarchy with a period (`.`) as a delimiter. This is the default behavior if no `f` path mappings are specified.
-* `f=/docs/*`: maps all the fields under docs and in the name as given in json
-* `f=/docs/**`: maps all the fields under docs and its children in the name as given in json
+* `f=/docs/*`: maps all the fields under docs and in the name as given in JSON
+* `f=/docs/**`: maps all the fields under docs and its children in the name as given in JSON
 * `f=searchField:/docs/*`: maps all fields under /docs to a single field called ‘searchField’
 * `f=searchField:/docs/**`: maps all fields under /docs and its children to searchField

@@ -296,7 +296,7 @@ To log substituted subquery request parameters, add the corresponding parameter
 
 ==== Cores and Collections in SolrCloud
 
-Use `foo:[subquery fromIndex=departments]` to invoke subquery on another core on the same node. This is what `{!join}` does for non-SolrCloud mode. But with SolrCloud, just (and only) explicitly specify its native parameters like `collection, shards` for subquery, e.g.:
+Use `foo:[subquery fromIndex=departments]` to invoke subquery on another core on the same node. This is what `{!join}` does for non-SolrCloud mode. But with SolrCloud, just (and only) explicitly specify its native parameters like `collection, shards` for subquery, for example:
 
 [source,plain,subs="quotes"]
 q=\*:*&fl=\*,foo:[subquery]&foo.q=cloud&**foo.collection**=departments

@@ -568,7 +568,7 @@ Nested documents may be indexed via either the XML or JSON data syntax, and is a
 ** it may be infeasible to use `required`
 ** even child documents need a unique `id`
 * You must include a field that identifies the parent document as a parent; it can be any field that suits this purpose, and it will be used as input for the <<other-parsers.adoc#block-join-query-parsers,block join query parsers>>.
-* If you associate a child document as a field (e.g. comment), that field need not be defined in the schema, and probably
+* If you associate a child document as a field (e.g., comment), that field need not be defined in the schema, and probably
 shouldn't be as it would be confusing. There is no child document field type.
 
 === XML Examples

@@ -640,4 +640,3 @@ For the anonymous relationship, note the special `\_childDocuments_` key whose c
 }
 ]
 ----
-

@@ -26,7 +26,7 @@ If you want to supply your own `ContentHandler` for Solr to use, you can extend
 
 When using the Solr Cell framework, it is helpful to keep the following in mind:
 
-* Tika will automatically attempt to determine the input document type (e.g. Word, PDF, HTML) and extract the content appropriately.
+* Tika will automatically attempt to determine the input document type (e.g., Word, PDF, HTML) and extract the content appropriately.
 If you like, you can explicitly specify a MIME type for Tika with the `stream.type` parameter.
 See http://tika.apache.org/{ivy-tika-version}/formats.html for the file types supported.
 * Briefly, Tika internally works by synthesizing an XHTML document from the core content of the parsed document which is passed to a configured http://www.saxproject.org/quickstart.html[SAX] ContentHandler provided by Solr Cell.

@@ -155,7 +155,7 @@ Defines a file path and name for a file of file name to password mappings.
 Specifies the optional name of the file. Tika can use it as a hint for detecting a file's MIME type.
 
 `resource.password`::
-Defines a password to use for a password-protected PDF or OOXML file
+Defines a password to use for a password-protected PDF or OOXML file.
 
 `tika.config`::
 Defines a file path and name to a customized Tika configuration file. This is only required if you have customized your Tika implementation.

@@ -820,7 +820,7 @@ timeout::
 The query timeout in seconds. The default is 5 minutes (300 seconds).
 
 cursorMark="true"::
-Use this to enable cursor for efficient result set scrolling
+Use this to enable cursor for efficient result set scrolling.
 
 sort="id asc"::
 This should be used to specify a sort parameter referencing the uniqueKey field of the source Solr instance. See <<pagination-of-results.adoc#pagination-of-results,Pagination of Results>> for details.

@@ -146,7 +146,7 @@ Example of introspect for a POST API: `\http://localhost:8983/api/c/gettingstart
 }
 ----
 
-The `"commands"` section in the above example has one entry for each command supported at this endpoint. The key is the command name and the value is a json object describing the command structure using JSON schema (see http://json-schema.org/ for a description).
+The `"commands"` section in the above example has one entry for each command supported at this endpoint. The key is the command name and the value is a JSON object describing the command structure using JSON schema (see http://json-schema.org/ for a description).
 
 == Invocation Examples

@@ -60,7 +60,7 @@ During query processing, range and point queries are both supported.
 
 === Sub-field Suffixes
 
-You must specify parameters `amountLongSuffix` and `codeStrSuffix`, corresponding to dynamic fields to be used for the raw amount and the currency dynamic sub-fields, e.g.:
+You must specify parameters `amountLongSuffix` and `codeStrSuffix`, corresponding to dynamic fields to be used for the raw amount and the currency dynamic sub-fields, for example:
 
 [source,xml]
 ----

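A sketch of such a declaration, with illustrative suffix values:

[source,xml]
----
<fieldType name="currency" class="solr.CurrencyFieldType"
           amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
           defaultCurrency="USD" currencyConfig="currency.xml"/>
----
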
@@ -102,7 +102,7 @@ Note that while date math is most commonly used relative to `NOW` it can be appl
 
 The `NOW` parameter is used internally by Solr to ensure consistent date math expression parsing across multiple nodes in a distributed request. But it can be specified to instruct Solr to use an arbitrary moment in time (past or future) to override for all situations where the the special value of "```NOW```" would impact date math expressions.
 
-It must be specified as a (long valued) milliseconds since epoch
+It must be specified as a (long valued) milliseconds since epoch.
 
 Example:

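For illustration, fixing `NOW` to an epoch-milliseconds value in a request; the collection and date field names here are assumptions:

[source,bash]
curl "http://localhost:8983/solr/techproducts/select?q=timestamp_dt:%5BNOW-1MONTH%20TO%20NOW%5D&NOW=1384387200000"
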
@@ -30,7 +30,7 @@ Content stored in ZooKeeper is critical to the operation of a SolrCloud cluster.
 * Changing cluster state information into something wrong or inconsistent might very well make a SolrCloud cluster behave strangely.
 * Adding a delete-collection job to be carried out by the Overseer will cause data to be deleted from the cluster.
 
-You may want to enable ZooKeeper ACLs with Solr if you grant access to your ZooKeeper ensemble to entities you do not trust, or if you want to reduce risk of bad actions resulting from, e.g.:
+You may want to enable ZooKeeper ACLs with Solr if you grant access to your ZooKeeper ensemble to entities you do not trust, or if you want to reduce risk of bad actions resulting from, for example:
 
 * Malware that found its way into your system.
 * Other systems using the same ZooKeeper ensemble (a "bad thing" might be done by accident).