mirror of https://github.com/apache/lucene.git
SOLR-10821: resolve TODOs; copy edits & cleanups; reorder section flow
This commit is contained in:
parent 53db72c598
commit 80530c14a3
@@ -1,4 +1,4 @@
-= SolrCloud Autoscaling API
+= Autoscaling API
 :page-shortname: solrcloud-autoscaling-api
 :page-permalink: solrcloud-autoscaling-api.html
 :page-toclevels: 2
 
@@ -20,7 +20,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-The Autoscaling API can be used to manage autoscaling policies and preferences, and to get diagnostics on the state of the cluster.
+The Autoscaling API is used to manage autoscaling policies and preferences, and to get diagnostics on the state of the cluster.
 
 == Read API
 
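The read API described above is a plain HTTP GET. A minimal sketch of invoking it, assuming a Solr node running locally on port 8983 (host and port are illustrative, not taken from the text):

[source,bash]
----
# Fetch the current autoscaling configuration: cluster preferences,
# cluster policy, and any collection-specific policies.
curl 'http://localhost:8983/solr/admin/autoscaling'
----
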
@@ -56,7 +56,7 @@ The output will contain cluster preferences, cluster policy and collection speci
 
 == Diagnostics API
 
-The diagnostics API shows the violations, if any, of all conditions in the cluster or collection-specific policy. It is available at the `/admin/autoscaling/diagnostics` path.
+The diagnostics API shows the violations, if any, of all conditions in the cluster and, if applicable, the collection-specific policy. It is available at the `/admin/autoscaling/diagnostics` path.
 
 This API does not take any parameters.
 
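A minimal sketch of calling the diagnostics endpoint described above, again assuming a locally running node on port 8983:

[source,bash]
----
# Request a diagnostics report; the response describes how nodes sort under
# the current preferences and reports any violations of the configured policies.
curl 'http://localhost:8983/solr/admin/autoscaling/diagnostics'
----
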
@@ -150,15 +150,17 @@ The Write API is available at the same `/admin/autoscaling` and `/v2/cluster/aut
 
 The payload of the POST request is a JSON message with commands to set and remove components. Multiple commands can be specified together in the payload. The commands are executed in the order specified and the changes are atomic, i.e., either all succeed or none.
 
-=== set-cluster-preferences: Create and Modify Cluster Preferences
+=== Create and Modify Cluster Preferences
 
-The cluster preferences are specified as a list of sort preferences. Multiple sorting preferences can be specified and they are applied in order.
+Cluster preferences are specified as a list of sort preferences. Multiple sorting preferences can be specified and they are applied in order.
+
+They are defined using the `set-cluster-preferences` command.
 
 Each preference is a JSON map having the following syntax:
 
-`{'<sort_order>': '<sort_param>', 'precision' : '<precision_val>'}`
+`{'<sort_order>':'<sort_param>', 'precision':'<precision_val>'}`
 
-You can see the __TODO__ section to know more about the allowed values for the `sort_order`, `sort_param` and `precision` parameters.
+See the section <<solrcloud-autoscaling-policy-preferences.adoc#cluster-preferences-specification,Cluster Preferences Specification>> for details about the allowed values for the `sort_order`, `sort_param` and `precision` parameters.
 
 Changing the cluster preferences after the cluster is already built doesn't automatically reconfigure the cluster. However, all future cluster management operations will use the changed preferences.
 
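As an illustrative sketch of the atomic, multi-command payload described above (host, port, and the particular preference and policy values are assumptions based on the surrounding examples):

[source,bash]
----
# Two commands in a single POST body: they are executed in the order given,
# and either both take effect or neither does.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/admin/autoscaling' -d '{
    "set-cluster-preferences": [
      {"minimize": "cores"},
      {"maximize": "freedisk", "precision": 10}
    ],
    "set-cluster-policy": [
      {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
    ]
  }'
----
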
@@ -167,9 +169,9 @@ Changing the cluster preferences after the cluster is already built doesn't auto
 [source,json]
 ----
 {
-"set-cluster-preferences" : [
-{"minimize": "cores"}
-]
+"set-cluster-preferences" : [
+{"minimize": "cores"}
+]
 }
 ----
 
@@ -221,17 +223,21 @@ We can remove all cluster preferences by setting preferences to an empty list.
 }
 ----
 
-=== set-cluster-policy: Create and Modify Cluster Policies
+=== Create and Modify Cluster Policies
 
-You can see the __TODO__ section to know more about the allowed values for each condition in the policy.
+Cluster policies are set using the `set-cluster-policy` command.
+
+Like `set-cluster-preferences`, the policy definition is a JSON map defining the desired attributes and values.
+
+Refer to the <<solrcloud-autoscaling-policy-preferences.adoc#policy-specification,Policy Specification>> section for details of the allowed values for each condition in the policy.
 
 *Input*:
 [source,json]
 ----
 {
-"set-cluster-policy": [
-{"replica": "<2", "shard": "#EACH", "node": "#ANY"}
-]
+"set-cluster-policy": [
+{"replica": "<2", "shard": "#EACH", "node": "#ANY"}
+]
 }
 ----
 
@@ -249,6 +255,7 @@ Output:
 ----
 
 We can remove all cluster policy conditions by setting policy to an empty list.
 
 [source,json]
 ----
 {
@@ -258,21 +265,21 @@ We can remove all cluster policy conditions by setting policy to an empty list.
 
 Changing the cluster policy after the cluster is already built doesn't automatically reconfigure the cluster. However, all future cluster management operations will use the changed cluster policy.
 
-=== set-policy: Create and Modify Collection-Specific Policy
+=== Create and Modify Collection-Specific Policy
 
-This command accepts a map of policy name to the list of conditions for that policy. Multiple named policies can be specified together. A named policy that does not exist already is created and if the named policy accepts already then it is replaced.
+The `set-policy` command accepts a map of policy names to the list of conditions for that policy. Multiple named policies can be specified together. A named policy that does not exist already is created and if the named policy exists already then it is replaced.
 
-You can see the __TODO__ section to know more about the allowed values for each condition in the policy.
+Refer to the <<solrcloud-autoscaling-policy-preferences.adoc#policy-specification,Policy Specification>> section for details of the allowed values for each condition in the policy.
 
 *Input*
 
 [source,json]
 ----
 {
-"set-policy": {
-"policy1": [
-{"replica": "1", "shard": "#EACH", "port": "8983"}
-]
+"set-policy": {
+"policy1": [
+{"replica": "1", "shard": "#EACH", "port": "8983"}
+]
 }
 }
 ----
 
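A rough end-to-end sketch of the `set-policy` flow above, followed by attaching the named policy when the collection is created (collection name, shard and replica counts, and host are illustrative):

[source,bash]
----
# 1. Define a named policy via the write API.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/admin/autoscaling' -d '{
    "set-policy": {
      "policy1": [
        {"replica": "1", "shard": "#EACH", "port": "8983"}
      ]
    }
  }'

# 2. Reference the policy when creating a collection.
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=coll1&numShards=1&replicationFactor=2&policy=policy1'
----
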
@@ -293,9 +300,9 @@ You can see the __TODO__ section to know more about the allowed values for each
 
 Changing the policy after the collection is already built doesn't automatically reconfigure the collection. However, all future cluster management operations will use the changed policy.
 
-=== remove-policy: Remove a Collection-Specific Policy
+=== Remove a Collection-Specific Policy
 
-This command accepts a policy name to be removed from Solr. The policy being removed must not be attached to any collection otherwise the command will fail.
+The `remove-policy` command accepts a policy name to be removed from Solr. The policy being removed must not be attached to any collection otherwise the command will fail.
 
 *Input*
 [source,json]
@@ -316,4 +323,4 @@ This command accepts a policy name to be removed from Solr. The policy being rem
 }
 ----
 
-If you attempt to remove a policy that is being used by a collection then this command will fail to delete the policy until the collection itself is deleted.
+If you attempt to remove a policy that is being used by a collection, this command will fail to delete the policy until the collection itself is deleted.
 
@@ -1,4 +1,4 @@
-= Overview of Autoscaling in SolrCloud
+= Overview of SolrCloud Autoscaling
 :page-shortname: solrcloud-autoscaling-overview
 :page-permalink: solrcloud-autoscaling-overview.html
 :page-toclevels: 1
 
@@ -20,40 +20,40 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Autoscaling in Solr aims to provide good defaults such that the cluster remains balanced and stable in the face of various events such as a node joining the cluster or leaving the cluster. This is achieved by satisfying a set of rules and sorting preferences that help Solr select the target of cluster management operations.
+Autoscaling in Solr aims to provide good defaults so a SolrCloud cluster remains balanced and stable in the face of various cluster change events. This balance is achieved by satisfying a set of rules and sorting preferences to select the target of cluster management operations.
 
 == Cluster Preferences
 
 Cluster preferences, as the name suggests, apply to all cluster management operations regardless of which collection they affect.
 
-A preference is a set of conditions that help Solr select nodes that either maximize or minimize given metrics. For example, a preference `{minimize : cores}` will help Solr select nodes such that the number of cores on each node is minimized. We write cluster preference in a way that reduces the overall load on the system. You can add more than one preferences to break ties.
+A preference is a set of conditions that help Solr select nodes that either maximize or minimize given metrics. For example, a preference such as `{minimize:cores}` will help Solr select nodes such that the number of cores on each node is minimized. We write cluster preferences in a way that reduces the overall load on the system. You can add more than one preference to break ties.
 
 The default cluster preferences consist of the above example (`{minimize : cores}`) which is to minimize the number of cores on all nodes.
 
-You can learn more about preferences in the __TODO__ section.
+You can learn more about preferences in the <<solrcloud-autoscaling-policy-preferences.adoc#solrcloud-autoscaling-policy-preferences,Autoscaling Cluster Preferences>> section.
 
 == Cluster Policy
 
-A cluster policy is a set of conditions that a node, shard, or collection must satisfy before it can be chosen as the target of a cluster management operation. These conditions are applied across the cluster regardless of the collection being managed. For example, the condition `{"cores":"<10", "node":"#ANY"}` means that any node must have less than ten Solr cores in total regardless of which collection they belong to.
+A cluster policy is a set of conditions that a node, shard, or collection must satisfy before it can be chosen as the target of a cluster management operation. These conditions are applied across the cluster regardless of the collection being managed. For example, the condition `{"cores":"<10", "node":"#ANY"}` means that any node must have less than 10 Solr cores in total regardless of which collection they belong to.
 
-There are many metrics on which the condition can be based e.g., system load average, heap usage, free disk space etc. The full list of supported metrics can be found at __TODO__ section.
+There are many metrics on which the condition can be based, e.g., system load average, heap usage, free disk space, etc. The full list of supported metrics can be found in the section describing <<solrcloud-autoscaling-policy-preferences.adoc#policy-attributes,Policy Attributes>>.
 
-When a node, shard or collection does not satisfy the policy, we call it a *violation*. Solr ensures that cluster management operations minimize the number of violations. The cluster management operations are either invoked manually by us. In future, these cluster management operations may be invoked automatically in response to cluster events such as node being added or lost.
+When a node, shard, or collection does not satisfy the policy, we call it a *violation*. Solr ensures that cluster management operations minimize the number of violations. Cluster management operations are currently invoked manually. In the future, these cluster management operations may be invoked automatically in response to cluster events such as a node being added or lost.
 
 == Collection-Specific Policies
 
-Sometimes a collection may need conditions in addition to those specified in the cluster policy. In such cases, we can create named policies that can be used for specific collections. Firstly, we can use the `set-policy` API to create a new policy and then specify the `policy=<policy_name>` parameter to the CREATE command of the Collection API.
+A collection may need conditions in addition to those specified in the cluster policy. In such cases, we can create named policies that can be used for specific collections. Firstly, we can use the `set-policy` API to create a new policy and then specify the `policy=<policy_name>` parameter to the CREATE command of the Collection API.
 
 `/admin/collections?action=CREATE&name=coll1&numShards=1&replicationFactor=2&policy=policy1`
 
 The above create collection command will associate a policy named `policy1` with the collection named `coll1`. Only a single policy may be associated with a collection.
 
-Note that the collection-specific policy is applied *in addition* to the cluster policy, i.e., it is not an override but an augmentation. Therefore the collection will follow all conditions laid out in the cluster preferences, cluster policy, and the policy named `policy1`.
+Note that the collection-specific policy is applied *in addition to* the cluster policy, i.e., it is not an override but an augmentation. Therefore the collection will follow all conditions laid out in the cluster preferences, cluster policy, and the policy named `policy1`.
 
-You can learn more about collection specific policies in the __TODO__ section.
+You can learn more about collection-specific policies in the section <<solrcloud-autoscaling-policy-preferences.adoc#defining-collection-specific-policies,Defining Collection-Specific Policies>>.
 
 == Autoscaling APIs
 
 The autoscaling APIs available at `/admin/autoscaling` can be used to read and modify each of the components discussed above.
 
-You can learn more about these APIs in the __TODO__ section.
+You can learn more about these APIs in the section <<solrcloud-autoscaling-api.adoc#solrcloud-autoscaling-api,Autoscaling API>>.
 
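As a minimal sketch of how the `{"cores":"<10", "node":"#ANY"}` condition mentioned above could be installed through the autoscaling API (host and port are assumptions):

[source,bash]
----
# Set a cluster-wide policy: every node must hold fewer than 10 cores.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/admin/autoscaling' -d '{
    "set-cluster-policy": [
      {"cores": "<10", "node": "#ANY"}
    ]
  }'
----
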
@@ -1,4 +1,4 @@
-= SolrCloud Autoscaling Policy and Preferences
+= Autoscaling Policy and Preferences
 :page-shortname: solrcloud-autoscaling-policy-preferences
 :page-permalink: solrcloud-autoscaling-policy-preferences.html
 :page-toclevels: 2
 
@@ -20,71 +20,77 @@
 // specific language governing permissions and limitations
 // under the License.
 
-The autoscaling policy and preferences are a set of rules and sorting preferences that help Solr select the target of cluster management operations such that the overall load on the cluster is balanced.
+The autoscaling policy and preferences are a set of rules and sorting preferences that help Solr select the target of cluster management operations so the overall load on the cluster remains balanced.
 
-== Cluster preferences specification
+== Cluster Preferences Specification
 
-A preference is a hint to Solr on how to sort nodes based on their utilization. The default cluster preference is to sort by the total number of Solr cores (or replicas) hosted by the node. Therefore, by default, when selecting a node to add a replica, Solr can apply the preferences and choose the node with the least number of cores.
+A preference is a hint to Solr on how to sort nodes based on their utilization. The default cluster preference is to sort by the total number of Solr cores (or replicas) hosted by a node. Therefore, by default, when selecting a node to add a replica, Solr can apply the preferences and choose the node with the least number of cores.
 
-More than one preferences can be added to break ties. For example, we may choose to use free disk space to break ties if number of cores on two nodes are the same so that the node with the higher free disk space can be chosen as the target of the cluster operation.
+More than one preference can be added to break ties. For example, we may choose to use free disk space to break ties if the number of cores on two nodes is the same so the node with the higher free disk space can be chosen as the target of the cluster operation.
 
 Each preference is of the following form:
 
 [source,json]
-----
-{"<sort_order>": "<sort_param>", "precision" : "<precision_val>"}
-----
+{"<sort_order>":"<sort_param>", "precision":"<precision_val>"}
 
 `sort_order`::
-The value can be either `maximize` or `minimize`. `minimize` sorts the nodes with least value as the least loaded. e.g `{"minimize" : "cores"}` sorts the nodes with the least number of cores as the least loaded node. `{"maximize" : "freedisk"}` sorts the nodes with maximum free disk space as the least loaded node. The objective of the system is to make every node the least loaded. So, e.g. in case of a `MOVEREPLICA` operation, it usually targets the _most loaded_ node and takes load off of it. In a sort of more loaded to less loaded, minimize is akin to sort in descending order and maximize is akin to sorting in ascending order. This is a required parameter.
+The value can be either `maximize` or `minimize`. `minimize` sorts the nodes with least value as the least loaded. For example, `{"minimize":"cores"}` sorts the nodes with the least number of cores as the least loaded node. A sort order such as `{"maximize":"freedisk"}` sorts the nodes with maximum free disk space as the least loaded node.
++
+The objective of the system is to make every node the least loaded. So, in case of a `MOVEREPLICA` operation, it usually targets the _most loaded_ node and takes load off of it. In a sort of more loaded to less loaded, `minimize` is akin to sorting in descending order and `maximize` is akin to sorting in ascending order.
++
+This is a required parameter.
 
 `sort_param`::
-One and only one of the following supported parameter must be specified:
-1. `cores`: The number of total Solr cores on a node
-2. `freedisk`: The amount of free disk space for Solr's data home directory. This is always in gigabytes.
-3. `sysLoadAvg`: The system load average on a node as reported by the Metrics API under the key `solr.jvm/os.systemLoadAverage`. This is always a double value between 0 and 1 and the higher the value, the more loaded the node is.
-4. `heapUsage`: The heap usage of a node as reported by the Metrics API under the key `solr.jvm/memory.heap.usage`. This is always a double value between 0 and 1 and the higher the value, the more loaded the node is.
+One and only one of the following supported parameters must be specified:
+
+. `cores`: The number of total Solr cores on a node.
+. `freedisk`: The amount of free disk space for Solr's data home directory. This is always in gigabytes.
+. `sysLoadAvg`: The system load average on a node as reported by the Metrics API under the key `solr.jvm/os.systemLoadAverage`. This is always a double value between 0 and 1 and the higher the value, the more loaded the node is.
+. `heapUsage`: The heap usage of a node as reported by the Metrics API under the key `solr.jvm/memory.heap.usage`. This is always a double value between 0 and 1 and the higher the value, the more loaded the node is.
 
 `precision`::
-Precision tells the system the minimum (absolute) difference between 2 values to treat them as distinct values. For example, a precision of 10 for `freedisk` means that two nodes whose free disk space is within 10GB of each other should be treated as equal for the purpose of sorting. This helps create ties without which, specifying multiple preferences is not useful. This is an optional parameter whose value must be a positive integer. The maximum value of precision must be less than the maximum value of the `sort_value`, if any.
+Precision tells the system the minimum (absolute) difference between 2 values to treat them as distinct values.
++
+For example, a precision of 10 for `freedisk` means that two nodes whose free disk space is within 10GB of each other should be treated as equal for the purpose of sorting. This helps create ties without which specifying multiple preferences is not useful. This is an optional parameter whose value must be a positive integer. The maximum value of `precision` must be less than the maximum value of the `sort_value`, if any.
 
-See the `set-cluster-preferences` API section for details on how to manage cluster preferences.
+See the section <<solrcloud-autoscaling-api.adoc#create-and-modify-cluster-preferences,set-cluster-preferences API>> for details on how to manage cluster preferences.
 
 === Examples of Cluster Preferences
 
-The following is the default cluster preferences. This is applied automatically by Solr when no explicit cluster preferences have been set using the Autoscaling API.
-[source,json]
-----
-[{"minimize":"cores"}]
-----
+==== Default Preferences
+The following shows the default cluster preferences. This is applied automatically by Solr when no explicit cluster preferences have been set using the <<solrcloud-autoscaling-api.adoc#solrcloud-autoscaling-api,Autoscaling API>>.
+
+[source,json]
+[{"minimize":"cores"}]
 
-In this example, we want to minimize the number of solr cores and in case of tie, maximize the amount of free disk space on each node.
+==== Minimize Cores; Maximize Free Disk
+In this example, we want to minimize the number of Solr cores and in case of a tie, maximize the amount of free disk space on each node.
+
 [source,json]
 ----
 [
 {"minimize" : "cores"},
 {"maximize" : "freedisk"}
 ]
 ----
 
+==== Add Precision to Free Disk; Minimize System Load
 In this example, we add a precision to the `freedisk` parameter so that nodes with free disk space within 10GB of each other are considered equal. In such a case, the tie is broken by minimizing `sysLoadAvg`.
 
 [source,json]
 ----
 [
 {"minimize" : "cores"},
 {"maximize" : "freedisk", "precision" : 10},
 {"minimize" : "sysLoadAvg"}
 ]
 ----
 
-== Policy specification
+== Policy Specification
 
-A policy is a hard rule to be satisfied by each node. If a node does not satisfy the rule then it is called a `violation`. Solr ensures that the number of violations are minimized while invoking any cluster management operations.
+A policy is a hard rule to be satisfied by each node. If a node does not satisfy the rule then it is called a *violation*. Solr ensures that the number of violations is minimized while invoking any cluster management operations.
 
-=== Policy attributes
+=== Policy Attributes
 A policy can have the following attributes:
 
 `cores`::
-This is a special attribute that applies to the entire cluster. It can only be used along with the `node` attribute and no other. This parameter is optional.
+This is a special attribute that applies to the entire cluster. It can only be used along with the `node` attribute and no other. This attribute is optional.
 
 `collection`::
 The name of the collection to which the policy rule should apply. If omitted, the rule applies to all collections. This attribute is optional.
 
@@ -98,7 +104,7 @@ The number of replicas that must exist to satisfy the rule. This must be a posit
 `strict`::
 An optional boolean value. The default is `true`. If true, the rule must be satisfied. If false, Solr tries to satisfy the rule on a best effort basis but if no node can satisfy the rule then any node may be chosen.
 
-One and only one of the following attribute can be specified in addition to the above attributes:
+One and only one of the following attributes can be specified in addition to the above attributes:
 
 `node`::
 The name of the node to which the rule should apply. The default value is `#ANY` which means that any node in the cluster may satisfy the rule.
 
@@ -121,11 +127,11 @@ The heap usage of the node as reported by the Metrics API under the key `solr.jv
 `nodeRole`::
 The role of the node. The only supported value currently is `overseer`.
 
-`ip_1 , ip_2, ip_3, ip_4`:
+`ip_1 , ip_2, ip_3, ip_4`::
 The least significant to most significant segments of IP address. For example, for an IP address `192.168.1.2`, `ip_1 = 2`, `ip_2 = 1`, `ip_3 = 168`, `ip_4 = 192`.
 
-`sysprop.<system_property_name>`:
-The system property set on the node on startup.
+`sysprop.<system_property_name>`::
+Any arbitrary system property set on the node on startup.
 
 === Policy Operators
 
|
@ -136,74 +142,68 @@ Each attribute in the policy may specify one of the following operators along wi
|
|||
* `!`: Not
|
||||
* None means equal
|
||||
|
||||
=== Examples of policy rules
|
||||
=== Examples of Policy Rules
|
||||
|
||||
`Example 1`::
|
||||
Do not place more than one replica of the same shard on the same node
|
||||
==== Limit Replica Placement
|
||||
Do not place more than one replica of the same shard on the same node:
|
||||
|
||||
[source,json]
|
||||
----
|
||||
{"replica": "<2", "shard": "#EACH", "node": "#ANY"}
|
||||
----
|
||||
|
||||
`Example 2`::
|
||||
==== Limit Cores per Node
|
||||
Do not place more than 10 cores in any node. This rule can only be added to the cluster policy because it mentions the `cores` attribute that is only applicable cluster-wide.
|
||||
|
||||
[source,json]
|
||||
----
|
||||
{"cores": "<10", "node": "#ANY"}
|
||||
----
|
||||
|
||||
`Example 3`::
|
||||
==== Place Replicas Based on Port
|
||||
Place exactly 1 replica of each shard of collection `xyz` on a node running on port `8983`
|
||||
|
||||
[source,json]
|
||||
----
|
||||
{"replica": 1, "shard": "#EACH", "collection": "xyz", "port": "8983"}
|
||||
----
|
||||
|
||||
`Example 4`::
|
||||
Place all replicas on a node with system property `availability_zone=us-east-1a`. Note that we have to write this rule in the negative sense i.e. *0* replicas must be on nodes *not* having the sysprop `availability_zone=us-east-1a`
|
||||
==== Place Replicas Based on a System Property
|
||||
Place all replicas on a node with system property `availability_zone=us-east-1a`. Note that we have to write this rule in the negative sense i.e., *0* replicas must be on nodes *not* having the system property `availability_zone=us-east-1a`
|
||||
|
||||
[source,json]
|
||||
----
|
||||
{"replica": 0, "sysprop.availability_zone": "!us-east-1a"}
|
||||
----
|
||||
|
||||
`Example 5`::
|
||||
==== Place Replicas Based on Node Role
|
||||
Do not place any replica on a node which has the overseer role. Note that the role is added by the `addRole` collection API. It is *not* automatically the node which is currently the overseer.
|
||||
|
||||
[source,json]
|
||||
----
|
||||
{"replica": 0, "nodeRole": "overseer"}
|
||||
----
|
||||
|
||||
`Example 6`::
|
||||
==== Place Replicas Based on Free Disk
|
||||
Place all replicas in nodes with freedisk more than 500GB. Here again, we have to write the rule in the negative sense.
|
||||
|
||||
[source,json]
|
||||
----
|
||||
{"replica": 0, "freedisk": "<500"}
|
||||
----
|
||||
|
||||
`Example 7`::
|
||||
==== Try to Place Replicas Based on Free Disk
|
||||
Place all replicas in nodes with freedisk more than 500GB when possible. Here we use the strict keyword to signal that this rule is to be honored on a best effort basis.
|
||||
|
||||
[source,json]
|
||||
----
|
||||
{"replica": 0, "freedisk": "<500", "strict" : false}
|
||||
----
|
||||
|
||||
|
||||
== Cluster Policy vs Collection-specific Policy
|
||||
== Defining Collection-Specific Policies
|
||||
|
||||
By default, the cluster policy, if it exists, is used automatically for all collections in the cluster. However, we can create named policies which can be attached to a collection at the time of its creation by specifying the policy name along with a `policy` parameter.
|
||||
|
||||
When a collection-specific policy is used, the rules in that policy are appended to the rules in the cluster policy and the combination of both are used. Therefore, it is recommended that you do not add rules to collection-specific policy that conflict with the ones in the cluster policy. Doing so will disqualify all nodes in the cluster from matching all criteria and make the policy useless. Also, if `maxShardsPerNode` is specified during the time of collection creation then both `maxShardsPerNode` and the policy rules must be satisfied.
|
||||
When a collection-specific policy is used, the rules in that policy are *appended* to the rules in the cluster policy and the combination of both are used. Therefore, it is recommended that you do not add rules to collection-specific policy that conflict with the ones in the cluster policy. Doing so will disqualify all nodes in the cluster from matching all criteria and make the policy useless.
|
||||
|
||||
Some attributes such as `cores` can only be used in the cluster policy.
|
||||
It is possible to override conditions specified in the cluster policy using collection-specific policy. For example, if a clause `{replica:'<3', node:'#ANY'}` is present in the cluster policy and the collection-specific policy has a clause `{replica:'<4', node:'#ANY'}`, the cluster policy is ignored in favor of the collection policy.
|
||||
|
||||
The policy is used by Collection APIs such as:
|
||||
Also, if `maxShardsPerNode` is specified during the time of collection creation, then both `maxShardsPerNode` and the policy rules must be satisfied.
|
||||
|
||||
* create
|
||||
* createshard
|
||||
* addreplica
|
||||
* restore
|
||||
* splitshard
|
||||
Some attributes such as `cores` can only be used in the cluster policy. See the section above on policy attributes for details.
|
||||
|
||||
In future, the policy and preferences will be used by the Autoscaling framework to automatically change the cluster in response to events such as a node being added or lost.
|
||||
The policy is used by these <<collections-api.adoc#collections-api,Collections API>> commands:
|
||||
|
||||
* CREATE
|
||||
* CREATESHARD
|
||||
* ADDREPLICA
|
||||
* RESTORE
|
||||
* SPLITSHARD
|
||||
|
||||
In the future, the policy and preferences will be used by the Autoscaling framework to automatically change the cluster in response to events such as a node being added or lost.
|
||||
|
|
|
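A small sketch of the interaction noted above between `maxShardsPerNode` and a named policy at collection-creation time (collection name, counts, host, and the policy name are illustrative):

[source,bash]
----
# Both the maxShardsPerNode limit and the rules in policy1 must be
# satisfiable for this CREATE call to succeed.
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=coll2&numShards=2&replicationFactor=2&maxShardsPerNode=2&policy=policy1'
----
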
@@ -1,7 +1,7 @@
 = SolrCloud Autoscaling
 :page-shortname: solrcloud-autoscaling
 :page-permalink: solrcloud-autoscaling.html
-:page-children: solrcloud-autoscaling-overview, solrcloud-autoscaling-api, solrcloud-autoscaling-policy-preferences
+:page-children: solrcloud-autoscaling-overview, solrcloud-autoscaling-policy-preferences, solrcloud-autoscaling-api
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements. See the NOTICE file
 // distributed with this work for additional information
 
|
@ -19,6 +19,7 @@
|
|||
// specific language governing permissions and limitations
|
||||
// under the License.
|
||||
|
||||
[.lead]
|
||||
The goal of autoscaling is to make SolrCloud cluster management easier by providing a way for changes to the cluster to be more automatic and more intelligent.
|
||||
|
||||
Autoscaling includes an API to manage cluster-wide and collection-specific policies and preferences and a rules syntax to define the guidelines for your cluster. Future Solr releases will include features to utilize the policies and preferences so they perform actions automatically when the rules are violated.
|
||||
|
|