mirror of https://github.com/apache/lucene.git
Ref Guide: fix typos and clarify some language for autoscaling docs
parent 73d1b07f8e
commit f140971bdf
@@ -49,13 +49,13 @@ The selection of the node that will host the new replica is made according to th

 == Cluster Preferences

 Cluster preferences allow you to tell Solr how to assess system load on each node. This information is used to guide selection of the node(s) on which cluster management operations will be performed.

 In general, when an operation increases replica counts, the *least loaded* <<solrcloud-autoscaling-policy-preferences.adoc#node-selector,qualified node>> will be chosen, and when the operation reduces replica counts, the *most loaded* qualified node will be chosen.

 The default cluster preferences are `[{minimize:cores},{maximize:freedisk}]`, which tells Solr to minimize the number of cores on all nodes and, if the number of cores is equal, maximize the free disk space available. In this case, the least loaded node is the one with the fewest cores or, if two nodes have an equal number of cores, the node with the most free disk space.
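As an illustration (not part of this commit), the default preferences above correspond to a `set-cluster-preferences` payload sent to Solr's autoscaling API; the command name follows the autoscaling documentation, so treat this as a sketch:

```json
{
  "set-cluster-preferences": [
    {"minimize": "cores"},
    {"maximize": "freedisk"}
  ]
}
```

Sorting nodes by fewest cores first, breaking ties by most free disk space, reproduces the default behavior described above.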
-You can learn more about preferences in the <<solrcloud-autoscaling-policy-preferences.adoc#solrcloud-autoscaling-policy-preferences,Autoscaling Cluster Preferences>> section.
+You can learn more about preferences in the section on <<solrcloud-autoscaling-policy-preferences.adoc#cluster-preferences-specification,Cluster Preferences Specification>>.

 == Cluster Policy
@@ -63,7 +63,7 @@ A cluster policy is a set of rules that a node, shard, or collection must satisf

 There are many metrics on which the rule can be based, e.g., system load average, heap usage, free disk space, etc. The full list of supported metrics can be found in the section describing <<solrcloud-autoscaling-policy-preferences.adoc#policy-rule-attributes,Autoscaling Policy Rule Attributes>>.

 When a node, shard, or collection does not satisfy a policy rule, we call it a *violation*. By default, cluster management operations will fail if there is even one violation. You can allow operations to succeed in the face of a violation by marking the corresponding rule with <<solrcloud-autoscaling-policy-preferences.adoc#rule-strictness,`"strict":false`>>. When you do this, Solr ensures that cluster management operations minimize the number of violations.
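A rule using `"strict": false` might look like the following sketch; the attribute names (`replica`, `shard`, `node`, `strict`) come from the policy syntax referenced above, while the specific values are illustrative:

```json
{
  "set-cluster-policy": [
    {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict": false}
  ]
}
```

Because the rule is non-strict, an operation that cannot keep every node under two replicas per shard still succeeds, with Solr minimizing the resulting violations.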

 Solr also supports <<solrcloud-autoscaling-policy-preferences.adoc#collection-specific-policy,collection-specific policies>>, which operate in tandem with the cluster policy.

@@ -26,7 +26,11 @@ See the section <<Example: Manual Collection Creation with a Policy>> for an exa

 == Cluster Preferences Specification

-A preference is a hint to Solr on how to sort nodes based on their utilization. The default cluster preference is to sort by the total number of Solr cores (or replicas) hosted by a node. Therefore, by default, when selecting a node to which to add a replica, Solr can apply the preferences and choose the node with the fewest cores.
+A preference is a hint to Solr on how to sort nodes based on their utilization.
+
+The default cluster preference is to sort by the total number of Solr cores (or replicas) hosted by a node, with a precision of 1.
+Therefore, by default, when selecting a node to which to add a replica, Solr can apply the preferences and choose the node with the fewest cores.
+In the case of a tie in the number of cores, available freedisk will be used to further sort nodes.

 More than one preference can be added to break ties. For example, we may choose to use free disk space to break ties if the number of cores on two nodes is the same. The node with the higher free disk space can be chosen as the target of the cluster operation.

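Tie-breaking with multiple preferences can be written out explicitly; this sketch assumes the `precision` attribute described in the preferences specification (values closer together than the precision are treated as equal):

```json
{
  "set-cluster-preferences": [
    {"minimize": "cores", "precision": 1},
    {"maximize": "freedisk", "precision": 10}
  ]
}
```

Here nodes are first ordered by core count; only when core counts tie (within 1) does free disk space, compared at a coarser precision, decide the order.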
@@ -318,7 +318,6 @@ Non-zero values are useful for large indexes with aggressively growing size, as
 avalanches of split shard requests when the total size of the index
 reaches even multiples of the maximum shard size thresholds.

 Events generated by this trigger contain additional details about the shards
 that exceeded thresholds and the types of violations (upper / lower bounds, bytes / docs metrics).
@@ -546,4 +545,4 @@ ever executing if a new scheduled event is ready as soon as the cooldown period

 Solr randomizes the order in which the triggers are resumed after the cooldown period to mitigate this problem. However, it is recommended that scheduled triggers
 not be used with low `every` values and that an external scheduling process such as cron be used for such cases instead.
 ====
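Following the cron recommendation above, a scheduled trigger with a comfortably large `every` value might be sketched as follows; the property names (`event`, `startTime`, `every`) are taken from the scheduled trigger documentation, and the values are illustrative:

```json
{
  "set-trigger": {
    "name": "scheduled_trigger_example",
    "event": "scheduled",
    "startTime": "NOW",
    "every": "+1DAY"
  }
}
```

Schedules finer-grained than this are better delegated to an external scheduler such as cron, per the note above.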