mirror of https://github.com/apache/lucene.git
Ref Guide: fix headline case, e.g & i.e, random spaces
This commit is contained in:
parent 96cf2d1762
commit 8dd2ab52b4
@@ -160,7 +160,7 @@ Step 5: Add the jar to your collection `gettingstarted`
 }'
 ----
 
-Step 6 : Create a new request handler '/test' for the collection 'gettingstarted' from the jar we just added
+Step 6: Create a new request handler '/test' for the collection 'gettingstarted' from the jar we just added
 
 [source,bash]
 ----
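For context, the step this hunk retitles is performed with a Config API call. The following is a minimal sketch, not the page's own snippet: the collection and handler class are illustrative assumptions, and `runtimeLib: true` tells Solr to load the class from the uploaded runtime jar.

[source,bash]
----
# Sketch only: register a '/test' handler backed by the runtime jar
# added in Step 5. The class name here is an illustrative assumption.
curl http://localhost:8983/solr/gettingstarted/config \
  -H 'Content-Type: application/json' -d '{
  "create-requesthandler": {
    "name": "/test",
    "class": "org.apache.solr.core.RuntimeLibReqHandler",
    "runtimeLib": true
  }
}'
----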
@@ -193,13 +193,13 @@ output:
   "loader":"org.apache.solr.core.MemClassLoader"}
 ----
 
-=== Updating remote jars
+=== Updating Remote Jars
 
 Example:
 
-* Host the new jar to a new url. eg: http://localhost:8000/runtimelibs_v2.jar
-* get the `sha512` hash of the new jar
-* run the update-runtime lib command
+* Host the new jar to a new url, e.g., http://localhost:8000/runtimelibs_v2.jar
+* Get the `sha512` hash of the new jar.
+* Run the `update-runtimelib` command.
 
 [source,bash]
 ----
@@ -209,7 +209,8 @@ Example:
   "sha512" : "<replace-the-new-sha512-digest-here>"}
 }'
 ----
-NOTE: Always upload your jar to a new url as the Solr cluster is still referring to the old jar. If the existing jar is modified it can cause errors as the hash may not match
+NOTE: Always upload your jar to a new url as the Solr cluster is still referring to the old jar. If the existing jar is modified it can cause errors as the hash may not match.
+
 == Securing Runtime Libraries
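The command shown only partially across these two hunks follows the cluster API's `update-runtimelib` shape. A hedged reconstruction of the whole flow (library name, port, and jar path are assumptions):

[source,bash]
----
# 1. Compute the sha512 digest of the new jar (path is an assumption):
openssl dgst -sha512 runtimelibs_v2.jar

# 2. Point the cluster at the new url and hash:
curl http://localhost:8983/api/cluster -H 'Content-Type: application/json' -d '{
  "update-runtimelib": {
    "name": "testlib",
    "url": "http://localhost:8000/runtimelibs_v2.jar",
    "sha512": "<replace-the-new-sha512-digest-here>"}
}'
----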
@@ -254,7 +254,7 @@ field during indexing is impractical, or the TRA behavior is desired across mult
 Dimensional Routed aliases may be used. This feature has been designed to handle an arbitrary number
 and combination of category and time dimensions in any order, but users are cautioned to carefully
 consider the total number of collections that will result from such configurations. Collection counts
-in the high hundreds or low 1000's begin to pose significant challenges with zookeeper.
+in the high hundreds or low 1000's begin to pose significant challenges with ZooKeeper.
 
 NOTE: DRA's are a new feature and presently only 2 dimensions are supported. More dimensions will
 be supported in the future (see https://issues.apache.org/jira/browse/SOLR-13628 for progress)
@@ -270,9 +270,9 @@ with 30 minute intervals):
 
 Note that the initial collection will be a throw away place holder for any DRA containing a category based dimension.
 Name generation for each sub-part of a collection name is identical to the corresponding portion of the component
-dimension type. (e.g. a category value generating __CRA__ or __TRA__ would still produce an error)
+dimension type. (e.g., a category value generating __CRA__ or __TRA__ would still produce an error)
 
-WARNING: The prior warning about reindexing documents with different route value applies to every dimensio of
+WARNING: The prior warning about reindexing documents with different route value applies to every dimension of
 a DRA. DRA's are inappropriate for documents where categories or timestamps used in routing will change (this of
 course applies to other route values in future RA types too).
@@ -159,7 +159,9 @@ include::securing-solr.adoc[tag=list-of-authorization-plugins]
 [#configuring-audit-logging]
 == Audit Logging
 
-<<audit-logging.adoc#audit-logging,Audit logging>> plugins helps you keep an audit trail of events happening in your Solr cluster. Audit logging may e.g. ship data to an external audit service. A custom plugin can be implemented by extending the `AuditLoggerPlugin` class.
+<<audit-logging.adoc#audit-logging,Audit logging>> plugins help you keep an audit trail of events happening in your Solr cluster.
+Audit logging may, e.g., ship data to an external audit service.
+A custom plugin can be implemented by extending the `AuditLoggerPlugin` class.
 
 == Authenticating in the Admin UI
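As a hedged illustration of the paragraph above: audit logging is configured in `security.json`, so a minimal setup using the built-in `SolrLogAuditLoggerPlugin` might look like the sketch below. The file name and ZooKeeper address are assumptions.

[source,bash]
----
# Sketch: enable the stock audit logger by adding an "auditlogging"
# section to security.json and uploading it to ZooKeeper.
cat > security.json <<'EOF'
{
  "auditlogging": {
    "class": "solr.SolrLogAuditLoggerPlugin"
  }
}
EOF
bin/solr zk cp file:security.json zk:/security.json -z localhost:9983
----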
@@ -169,14 +171,14 @@ When authentication is required the Admin UI will presented you with a login dia
 
 * <<basic-authentication-plugin.adoc#basic-authentication-plugin,Basic Authentication Plugin>>
 * <<jwt-authentication-plugin.adoc#jwt-authentication-plugin,JWT Authentication Plugin>>
 
 If your plugin of choice is not supported, the Admin UI will still let you perform unrestricted operations, while for restricted operations you will need to interact with Solr by sending HTTP requests instead of through the graphical user interface of the Admin UI. All operations supported by Admin UI can be performed through Solr's RESTful APIs.
 
 == Securing Inter-Node Requests
 
-There are a lot of requests that originate from the Solr nodes itself. For example, requests from overseer to nodes, recovery threads, etc. We call these 'inter-node' request. Solr has a special built-in `PKIAuthenticationPlugin` (see below) that will always be available to secure inter-node traffic.
+There are a lot of requests that originate from the Solr nodes itself. For example, requests from overseer to nodes, recovery threads, etc. We call these 'inter-node' request. Solr has a special built-in `PKIAuthenticationPlugin` (see below) that will always be available to secure inter-node traffic.
 
-Each Authentication plugin may also decide to secure inter-node requests on its own. They may do this through the so-called `HttpClientBuilder` mechanism, or they may alternatively choose on a per-request basis whether to delegate to PKI or not by overriding a `interceptInternodeRequest()` method from the base class, where any HTTP headers can be set.
+Each Authentication plugin may also decide to secure inter-node requests on its own. They may do this through the so-called `HttpClientBuilder` mechanism, or they may alternatively choose on a per-request basis whether to delegate to PKI or not by overriding a `interceptInternodeRequest()` method from the base class, where any HTTP headers can be set.
 
 === PKIAuthenticationPlugin
@@ -132,7 +132,7 @@ Add, edit or delete a cluster-wide property.
 === CLUSTERPROP Parameters
 
 `name`::
-The name of the property. Supported properties names are `autoAddReplicas`, `legacyCloud` , `location`, `maxCoresPerNode` and `urlScheme`. Other properties can be set
+The name of the property. Supported property names are `autoAddReplicas`, `legacyCloud`, `location`, `maxCoresPerNode` and `urlScheme`. Other properties can be set
 (for example, if you need them for custom plugins) but they must begin with the prefix `ext.`. Unknown properties that don't begin with `ext.` will be rejected.
 
 `val`::
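For context on the `name`/`val` parameters in this hunk, a typical CLUSTERPROP call looks like the sketch below; the host and property choice are assumptions, and sending an empty `val` unsets a property.

[source,bash]
----
# Set a supported cluster-wide property...
curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https'
# ...and unset it again by sending an empty value.
curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val='
----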
@@ -56,7 +56,7 @@ redirectUris ; Valid location(s) for redirect after external authenticat
 issuers ; List of issuers (Identity providers) to support. See section <<issuer-configuration,Issuer configuration>> for configuration syntax ;
 |===
 
-=== Issuer configuration
+=== Issuer Configuration
 
 This plugin supports one or more token issuers (IdPs). Issuers are configured as a list of JSON objects under the `issuers` configuration key. The first issuer in the list is the "Primary Issuer", which is the one used for logging in to the Admin UI.
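To make the retitled section concrete, an `issuers` list might look like the following sketch. Every name and URL is a placeholder assumption, and the exact property set should be checked against the surrounding page.

[source,bash]
----
# Sketch of a JWTAuthPlugin fragment for security.json (placeholders only).
cat > jwt-issuers.json <<'EOF'
{
  "authentication": {
    "class": "solr.JWTAuthPlugin",
    "issuers": [
      {"name": "primary",
       "wellKnownUrl": "https://idp.example.com/.well-known/openid-configuration",
       "clientId": "solr-client"},
      {"name": "secondary",
       "jwksUrl": "https://other-idp.example.com/keys.jwks"}
    ]
  }
}
EOF
----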
@@ -116,7 +116,7 @@ The start commands provided with the Prometheus Exporter support the use of cust
 Sets the initial (`Xms`) and max (`Xmx`) Java heap size. The default is `512m`.
 
 `JAVA_MEM`::
-Custom java memory settings (e.g. `-Xms1g -Xmx2g`). This is ignored if `JAVA_HEAP` is provided.
+Custom java memory settings (e.g., `-Xms1g -Xmx2g`). This is ignored if `JAVA_HEAP` is provided.
 
 `GC_TUNE`::
 Custom Java garbage collection settings. The default is `-XX:+UseG1GC`.
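A hedged usage sketch for the variables in this hunk; the script location and flags follow the exporter's default layout, and the base URL is an assumption.

[source,bash]
----
# JAVA_MEM is ignored if JAVA_HEAP is set, so only one is given here.
JAVA_MEM="-Xms1g -Xmx2g" GC_TUNE="-XX:+UseG1GC" \
  ./contrib/prometheus-exporter/bin/solr-exporter -p 9854 -b http://localhost:8983/solr
----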
@@ -33,7 +33,8 @@ Solr caches are associated with a specific instance of an Index Searcher, a spec
 
 When a new searcher is opened, the current searcher continues servicing requests while the new one auto-warms its cache. The new searcher uses the current searcher's cache to pre-populate its own. When the new searcher is ready, it is registered as the current searcher and begins handling all new search requests. The old searcher will be closed once it has finished servicing all its requests.
 
-=== Cache implementations
+=== Cache Implementations
 
 In Solr, the following cache implementations are available: recommended `solr.search.CaffeineCache`, and legacy implementations: `solr.search.LRUCache`, `solr.search.FastLRUCache,` and `solr.search.LFUCache`.
 
 The `CaffeineCache` is an implementation backed by the https://github.com/ben-manes/caffeine[Caffeine caching library]. By default it uses a Window TinyLFU (W-TinyLFU) eviction policy, which allows the eviction based on both frequency and recency of use in O(1) time with a small footprint. Generally this cache implementation is recommended over other legacy caches as it usually offers lower memory footprint, higher hit ratio and better multi-threaded performance over legacy caches.
@@ -129,7 +129,7 @@ collection:: An optional property identifying which collection(s) this permissio
 ====
 The collection property can only be used to match _collections_. It currently cannot be used to match aliases. Aliases are resolved before Solr's security plugins are invoked; a `collection` property given an alias will never match because RBAP will be comparing an alias name to already-resolved collection names. Instead, set a `collection` property that contains all collections in the alias concerned (or the `*` wildcard).
 ====
-path:: An optional property identifying which paths this permission applies to. The value can either be a single path string, or a JSON array containing multiple strings. For APIs accessing collections, path values should start after the collection name, and often just look like the request handler (e.g. `"/select"`). For collection-agnostic ("admin") APIs, path values should start at the `"/admin` path segment. The wildcard `\*` can be used to indicate that this permission applies to all paths. If not specified, this property defaults to `null`.
+path:: An optional property identifying which paths this permission applies to. The value can either be a single path string, or a JSON array containing multiple strings. For APIs accessing collections, path values should start after the collection name, and often just look like the request handler (e.g., `"/select"`). For collection-agnostic ("admin") APIs, path values should start at the `"/admin` path segment. The wildcard `\*` can be used to indicate that this permission applies to all paths. If not specified, this property defaults to `null`.
 method:: An optional property identifying which HTTP methods this permission applies to. Options include `HEAD`, `POST`, `PUT`, `GET`, `DELETE`, and the wildcard `\*`. Multiple values can also be specified using a JSON array. If not specified, this property defaults to `*`.
 params:: An optional property identifying which query parameters this permission applies to. The value is a JSON object containing the names and values of request parameters that must be matched for this permission to apply.
 +
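Putting the properties from this hunk together, a permission definition might be posted as in the sketch below; the credentials, role, and collection names are illustrative assumptions.

[source,bash]
----
# Sketch: a read-only permission scoped by collection, path and method.
curl -u admin:password http://localhost:8983/solr/admin/authorization \
  -H 'Content-Type: application/json' -d '{
  "set-permission": {
    "name": "read-techproducts",
    "collection": "techproducts",
    "path": "/select",
    "method": "GET",
    "role": "reader"
  }
}'
----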
@@ -891,11 +891,11 @@ Export all documents from a collection `gettingstarted` to a file called `gettin
 
 *Arguments*
 
-* `format` : `jsonl` (default) or `javabin`. `format=javabin` exports to a file with extension `.javabin` which is the native Solr format. This is compact & faster to import
+* `format` : `jsonl` (default) or `javabin`. `format=javabin` exports to a file with extension `.javabin` which is the native Solr format. This is compact & faster to import.
 * `out` : export file name
-* `query` : a custom query , default is *:*
-* `fields`: a comma separated list of fields to be exported
-* `limit` : no:of docs. default is 100 , send -1 to import all the docs
+* `query` : a custom query, default is `*:*`.
+* `fields`: a comma separated list of fields to be exported.
+* `limit` : number of documents, default is 100, send `-1` to import all the documents.
 
 === Importing the data to a collection
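Combining the arguments listed above, an export invocation might look like this sketch (the URL is an assumption):

[source,bash]
----
# Export every document in javabin format; -limit -1 means "all docs".
bin/solr export -url http://localhost:8983/solr/gettingstarted \
  -format javabin -out gettingstarted -limit -1
----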
@@ -905,4 +905,4 @@ Export all documents from a collection `gettingstarted` to a file called `gettin
 
 *Example: importing the `javabin` files*
 
-`curl -X POST --header "Content-Type: application/javabin" --data-binary @gettingstarted.javabin http://localhost:8983/solr/gettingstarted/update?commit=true`
+`curl -X POST --header "Content-Type: application/javabin" --data-binary @gettingstarted.javabin http://localhost:8983/solr/gettingstarted/update?commit=true`
@@ -49,7 +49,7 @@ in `io.opentracing.util.GlobalTracer`. By doing this some backend like DataDog i
 https://docs.datadoghq.com/tracing/setup/java/[datadog-java-agent] use Javaagent to register a `Tracer` in
 `io.opentracing.util.GlobalTracer`.
 
-=== Configuring sample rate
+=== Configuring Sample Rate
 
 By default only 0.1% of requests are sampled; this ensures that tracing activity does not affect the performance of a node.
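As a hedged illustration of changing the sample rate mentioned above: it can be adjusted cluster-wide via a cluster property. The property name `samplePercentage` is an assumption here, so verify it against the tracing page itself.

[source,bash]
----
# Sketch: sample 100% of requests instead of the 0.1% default.
curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=samplePercentage&val=100'
----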
@@ -320,11 +320,11 @@ The suggested `operation` is an API call that can be invoked to remedy the curre
 
 The types of suggestions available are
 
-* `violation` : Fixes a violation to one or more policy rules
-* `repair` : Add missing replicas
-* `improvement` : move replicas around so that the load is more evenly balanced according to the autoscaling preferences
+* `violation`: Fixes a violation to one or more policy rules
+* `repair`: Add missing replicas
+* `improvement`: move replicas around so that the load is more evenly balanced according to the autoscaling preferences
 
-By default, the suggestions API return all of the above , in that order. However it is possible to fetch only certain types by adding a request parameter `type`. e.g: `type=violation&type=repair`
+By default, the suggestions API returns all of the above, in that order. However, it is possible to fetch only certain types by adding a request parameter `type`, e.g., `type=violation&type=repair`.
 
 === Inline Policy Configuration
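A usage sketch for the `type` parameter the reworded paragraph describes; the host is an assumption, and the older `/solr/admin/autoscaling/suggestions` path should behave the same.

[source,bash]
----
# Fetch only violation and repair suggestions, skipping improvements.
curl 'http://localhost:8983/api/cluster/autoscaling/suggestions?type=violation&type=repair'
----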
@@ -126,7 +126,6 @@ examples
 [source,json]
 { "replica" : "<2", "node":"#ANY"}
 
-
 [source,json]
 //place 3 replicas in the group of nodes node-name1, node-name2
 { "replica" : "3", "nodeset":["node-name1","node-name2"]}
@@ -134,7 +133,7 @@ examples
 [source,json]
 { "nodeset":{"<property-name>":"<property-value>"}}
 
-The property names can be one of `node` , `host` , `sysprop.*` , `freedisk` , `ip_*` , `nodeRole` , `heapUsage` , `metrics.*`
+The property names can be one of: `node`, `host`, `sysprop.*`, `freedisk`, `ip_*`, `nodeRole`, `heapUsage`, `metrics.*`.
 
 when using the `nodeset` attribute, an optional attribute `put` can be used to specify how to distribute the replicas in that node set.
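To ground the `nodeset`-by-property form this hunk polishes, a policy rule keyed on a system property might be installed like the sketch below; the `sysprop.zone` value and the rule itself are illustrative assumptions.

[source,bash]
----
# Sketch: place a replica on each node started with -Dzone=east.
curl -X POST http://localhost:8983/api/cluster/autoscaling \
  -H 'Content-Type: application/json' -d '{
  "set-cluster-policy": [
    {"replica": "#ALL", "nodeset": {"sysprop.zone": "east"}, "put": "on-each"}
  ]
}'
----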
@@ -241,7 +241,7 @@ Setting the hostname of the Solr server is recommended, especially when running
 
 === Environment banner in Admin UI
 
-To guard against accidentally doing changes to the wrong cluster, you may configure a visual indication in the Admin UI of whether you currently work with a production environment or not. To do this, edit your `solr.in.sh` or `solr.in.cmd` file with a `-Dsolr.environment=prod` setting, or set the cluster property named `environment`. To specify label and/or color, use a comma delimited format as below. The `+` character can be used instead of space to avoid quoting. Colors may be valid CSS colors or numeric e.g. `#ff0000` for bright red. Examples of valid environment configs:
+To guard against accidentally doing changes to the wrong cluster, you may configure a visual indication in the Admin UI of whether you currently work with a production environment or not. To do this, edit your `solr.in.sh` or `solr.in.cmd` file with a `-Dsolr.environment=prod` setting, or set the cluster property named `environment`. To specify label and/or color, use a comma delimited format as below. The `+` character can be used instead of space to avoid quoting. Colors may be valid CSS colors or numeric, e.g., `#ff0000` for bright red. Examples of valid environment configs:
 
 * `prod`
 * `test,label=Functional+test`
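Assembled from the format described in this hunk, a full banner setting in `solr.in.sh` could read as follows (the label and color are chosen purely for illustration):

[source,bash]
----
# Red "Functional test" banner in the Admin UI.
SOLR_OPTS="$SOLR_OPTS -Dsolr.environment=test,label=Functional+test,color=#ff0000"
----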
@@ -273,7 +273,7 @@ The `bin/solr` script simply passes options starting with `-D` on to the JVM dur
 SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
 ----
 
-=== Ulimit settings (*nix operating systems)
+=== Ulimit Settings (*nix Operating Systems)
 
 There are several settings that should be monitored and set as high as possible, "unlimited" by preference. On most "*nix" operating systems, you can see the current values by typing the following at a command prompt.
@@ -282,15 +282,16 @@ There are several settings that should be monitored and set as high as possible,
 ulimit -a
 ----
 
-These four settings in particular are important to have set very high, unlimited by preference.
+These four settings in particular are important to have set very high, unlimited if possible.
 
-* max processes (ulimit -u): 65,000 is the recommended _minimum_
-* file handles (ulimit -n): 65,000 is the recommended _minimum_. All the files used by all replicas have their file handles open at once so this can grow quite large.
-* virtual memory (ulimit -v): Set to unlimited. This is used to by MMapping the indexes.
-* max memory size (ulimit -m): Also used by MMap, set to unlimited.
-* If your system supports it, `sysctl vm.max_map_count`, should be set to unlimited as well.
+* max processes (`ulimit -u`): 65,000 is the recommended _minimum_.
+* file handles (`ulimit -n`): 65,000 is the recommended _minimum_. All the files used by all replicas have their file handles open at once so this can grow quite large.
+* virtual memory (`ulimit -v`): Set to unlimited. This is used for MMapping the indexes.
+* max memory size (`ulimit -m`): Also used by MMap, set to unlimited.
+* If your system supports it, `sysctl vm.max_map_count` should be set to unlimited as well.
 
 We strongly recommend that these settings be permanently raised. The exact process to permanently raise them will vary per operating system. Some systems require editing configuration files and restarting your server. Consult your system administrators for guidance in your particular environment.
 
 [WARNING]
 ====
 Check these limits every time you upgrade your kernel or operating system. These operations can reset these to their defaults.
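A hedged sketch of raising these limits permanently on a typical Linux host; the paths, the `solr` user name, and the `vm.max_map_count` value are assumptions, and the exact mechanism varies by distribution.

[source,bash]
----
# Persist higher file-handle and process limits for the solr user.
cat >> /etc/security/limits.conf <<'EOF'
solr soft nofile 65000
solr hard nofile 65000
solr soft nproc  65000
solr hard nproc  65000
EOF
# Raise the mmap count now (example value; persist it in /etc/sysctl.conf).
sysctl -w vm.max_map_count=262144
----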
@@ -299,7 +300,6 @@ Check these limits every time you upgrade your kernel or operating system. These
 [WARNING]
 ====
 If these limits are exceeded, the problems reported by Solr vary depending on the specific operation responsible for exceeding the limit. Errors such as "too many open files", "connection error", and "max processes exceeded" have been reported, as well as SolrCloud recovery failures.
 
 ====
-
 == Running Multiple Solr Nodes per Host
@@ -415,7 +415,7 @@ $ curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/t
 "ccc",1632740949182382080]}
 ----
 
-In this example, we have added 2 documents "aaa" and "ccc". As we have specified the parameter `\_version_=-1` , this request should not add the document with the id `aaa` because it already exists. The request succeeds & does not throw any error because the `failOnVersionConflicts=false` parameter is specified. The response shows that only document `ccc` is added and `aaa` is silently ignored.
+In this example, we have added 2 documents "aaa" and "ccc". As we have specified the parameter `\_version_=-1`, this request should not add the document with the id `aaa` because it already exists. The request succeeds & does not throw any error because the `failOnVersionConflicts=false` parameter is specified. The response shows that only document `ccc` is added and `aaa` is silently ignored.
 
 For more information, please also see Yonik Seeley's presentation on https://www.youtube.com/watch?v=WYVM6Wz-XTw[NoSQL features in Solr 4] from Apache Lucene EuroCon 2012.
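A sketch of the request the corrected paragraph describes; the collection name and port are assumptions. `_version_=-1` gives the "only add if absent" semantics discussed above, and `failOnVersionConflicts=false` turns conflicts into silent skips.

[source,bash]
----
# Conflicting doc "aaa" is silently skipped; "ccc" is added.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/techproducts/update?_version_=-1&failOnVersionConflicts=false&commit=true' \
  -d '[{"id": "aaa"}, {"id": "ccc"}]'
----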