SOLR-10831: add docs for replica types
parent 0e927c6484
commit de5ae4096d
@@ -54,7 +54,16 @@ The number of shards to be created as part of the collection. This is a required
A comma separated list of shard names, e.g., `shard-x,shard-y,shard-z`. This is a required parameter when the `router.name` is `implicit`.

`replicationFactor`::
The number of replicas to be created for each shard. The default is `1`. This will create an NRT type of replica. If you want another type of replica, see the `tlogReplicas` and `pullReplicas` parameters. See the section <<shards-and-indexing-data-in-solrcloud.adoc#types-of-replicas,Types of Replicas>> for more information about replica types.

`nrtReplicas`::
The number of NRT (Near-Real-Time) replicas to create for this collection. This type of replica maintains a transaction log and updates its index locally. If you want all of your replicas to be of this type, you can simply use `replicationFactor` instead.

`tlogReplicas`::
The number of TLOG replicas to create for this collection. This type of replica maintains a transaction log but only updates its index via replication from a leader. See the section <<shards-and-indexing-data-in-solrcloud.adoc#types-of-replicas,Types of Replicas>> for more information about replica types.

`pullReplicas`::
The number of PULL replicas to create for this collection. This type of replica does not maintain a transaction log and only updates its index via replication from a leader. This type is not eligible to become a leader and should not be the only type of replica in the collection. See the section <<shards-and-indexing-data-in-solrcloud.adoc#types-of-replicas,Types of Replicas>> for more information about replica types. An example request combining these replica-type parameters follows this list.

`maxShardsPerNode`::
When creating collections, the shards and/or replicas are spread across all available (i.e., live) nodes, and two replicas of the same shard will never be on the same node.
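As a hedged illustration of these parameters, the following sketch creates a two-shard collection using the TLOG-plus-PULL combination; the host, port, and collection name are placeholders, and depending on your defaults you may need to set `nrtReplicas` explicitly as shown.

[source,bash]
----
# Sketch: a two-shard collection with one TLOG and two PULL replicas per shard.
# Host, port, and collection name are illustrative; nrtReplicas is set to 0
# explicitly so no NRT replicas are created from the replicationFactor default.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=myCollection&numShards=2&nrtReplicas=0&tlogReplicas=1&pullReplicas=2"
----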
@@ -711,10 +720,21 @@ Ignored if the `shard` param is also specified.
The name of the node where the replica should be created.

`instanceDir`::
The instanceDir for the core that will be created.

`dataDir`::
The directory in which the core should be created.

`type`::
The type of replica to create. The possible values are:
+
--
* `nrt`: The NRT type maintains a transaction log and updates its index locally. This is the default and the most commonly used.
* `tlog`: The TLOG type maintains a transaction log but only updates its index via replication.
* `pull`: The PULL type does not maintain a transaction log and only updates its index via replication. This type is not eligible to become a leader.
--
+
See the section <<shards-and-indexing-data-in-solrcloud.adoc#types-of-replicas,Types of Replicas>> for more information about replica type options. An example request using this parameter follows the list.

`property._name_=_value_`::
Set core property _name_ to _value_. See <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details about supported properties and values.
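A minimal sketch of an ADDREPLICA request that uses the `type` parameter; the collection, shard, and node names are placeholders:

[source,bash]
----
# Sketch: add a PULL replica of shard1 on a specific node.
# Collection, shard, and node names are placeholders.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=myCollection&shard=shard1&node=192.168.10.23:8983_solr&type=pull"
----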
@@ -20,7 +20,9 @@
When your collection is too large for one node, you can break it up and store it in sections by creating multiple *shards*.

A Shard is a logical partition of the collection, containing a subset of documents from the collection, such that every document in a collection is contained in exactly one Shard. Which shard contains each document in a collection depends on the overall "Sharding" strategy for that collection.

For example, you might have a collection where the "country" field of each document determines which shard it is part of, so documents from the same country are co-located. A different collection might simply use a "hash" on the uniqueKey of each document to determine its Shard.

Before SolrCloud, Solr supported Distributed Search, which allowed one query to be executed across multiple shards, so the query was executed against the entire Solr index and no documents would be missed from the search results. So splitting an index across shards is not exclusively a SolrCloud concept. There were, however, several problems with the distributed approach that necessitated improvement with SolrCloud:
@@ -28,7 +30,9 @@ Before SolrCloud, Solr supported Distributed Search, which allowed one query to
. There was no support for distributed indexing, which meant that you needed to explicitly send documents to a specific shard; Solr couldn't figure out on its own what shards to send documents to.
. There was no load balancing or failover, so if you got a high number of queries, you needed to figure out where to send them and if one shard died it was just gone.

SolrCloud addresses those limitations. There is support for distributing both the index process and the queries automatically, and ZooKeeper provides failover and load balancing. Additionally, every shard can have multiple replicas for additional robustness.

== Leaders and Replicas

In SolrCloud there are no masters or slaves. Instead, every shard consists of at least one physical *replica*, exactly one of which is a *leader*. Leaders are automatically elected, initially on a first-come-first-served basis, and then based on the ZooKeeper process described at http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection[http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection].
@@ -36,19 +40,69 @@ If a leader goes down, one of the other replicas is automatically elected as the
When a document is sent to a Solr node for indexing, the system first determines which Shard that document belongs to, and then which node is currently hosting the leader for that shard. The document is then forwarded to the current leader for indexing, and the leader forwards the update to all of the other replicas.

=== Types of Replicas

By default, all replicas are eligible to become leaders if their leader goes down. However, this comes at a cost: if all replicas could become a leader at any time, every replica must be in sync with its leader at all times. New documents added to the leader must be routed to the replicas, and each replica must do a commit. If a replica goes down, or is temporarily unavailable, and then rejoins the cluster, recovery may be slow if it has missed a large number of updates.

These issues are not a problem for most users. However, some use cases would perform better if the replicas behaved a bit more like the former master/slave model, either by not syncing in real-time or by not being eligible to become leaders at all.

Solr accomplishes this by allowing you to set the replica type when creating a new collection or when adding a replica. The available types are:

* *NRT*: This is the default. An NRT replica (NRT = NearRealTime) maintains a transaction log and writes new documents to its index locally. Any replica of this type is eligible to become a leader. Traditionally, this was the only type supported by Solr.
* *TLOG*: This type of replica maintains a transaction log but does not index document changes locally. This type helps speed up indexing since no commits need to occur in the replicas. When this type of replica needs to update its index, it does so by replicating the index from the leader. This type of replica is also eligible to become a shard leader; it would do so by first processing its transaction log. If it does become a leader, it will behave the same as if it were an NRT replica.
* *PULL*: This type of replica does not maintain a transaction log nor index document changes locally. It only replicates the index from the shard leader. It is not eligible to become a shard leader and doesn't participate in shard leader election at all.

If you do not specify the type of replica when it is created, it will be an NRT replica.
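As a hedged sketch of both approaches, using the Collections API (host, collection, and shard names are illustrative):

[source,bash]
----
# Sketch: choose replica types at collection-creation time (here, TLOG plus PULL)...
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=myCollection&numShards=1&nrtReplicas=0&tlogReplicas=2&pullReplicas=2"
# ...or when adding an individual replica.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=myCollection&shard=shard1&type=pull"
----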
=== Combining Replica Types in a Cluster

There are three combinations of replica types that are recommended:

* All NRT replicas
* All TLOG replicas
* TLOG replicas with PULL replicas

==== All NRT Replicas

Use this for small to medium clusters, or even big clusters where the update (index) throughput is not too high. NRT is the only type of replica that supports soft-commits, so also use this combination when NearRealTime is needed.

==== All TLOG Replicas

Use this combination if NearRealTime is not needed and the number of replicas per shard is high, but you still want all replicas to be able to handle update requests.

==== TLOG Replicas Plus PULL Replicas

Use this combination if NearRealTime is not needed, the number of replicas per shard is high, and you want to increase availability of search queries over document updates even if that means temporarily serving outdated results.

==== Other Combinations of Replica Types

Other combinations of replica types are not recommended. If more than one replica in the shard is writing its own index instead of replicating from an NRT replica, a leader election can cause all replicas of the shard to become out of sync with the leader, and all would have to replicate the full index.
=== Recovery with PULL Replicas

If a PULL replica goes down or leaves the cluster, there are a few scenarios to consider.

If the PULL replica cannot sync to the leader because the leader is down, replication would not occur. However, it would continue to serve queries. Once it can connect to the leader again, replication would resume.

If the PULL replica cannot connect to ZooKeeper, it would be removed from the cluster and queries would not be routed to it from the cluster.

If the PULL replica dies or is unreachable for any other reason, it won't be queryable. When it rejoins the cluster, it would replicate from the leader and, when that is complete, it would be ready to serve queries again.
== Document Routing

Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collections-api.adoc#create,creating your collection>>.
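For instance, a hedged sketch of creating a collection with the `implicit` router, which requires naming its shards through the `shards` parameter (host, collection, and shard names are placeholders):

[source,bash]
----
# Sketch: an implicitly-routed collection whose shard names are supplied up front.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=myCollection&router.name=implicit&shards=shard-x,shard-y,shard-z"
----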
If you use the `compositeId` router (the default), you can send documents with a prefix in the document ID which will be used to calculate the hash Solr uses to determine the shard a document is sent to for indexing. The prefix can be anything you'd like it to be (it doesn't have to be the shard name, for example), but it must be consistent so Solr behaves consistently.

For example, if you want to co-locate documents for a customer, you could use the customer name or ID as the prefix. If your customer is "IBM", for example, with a document with the ID "12345", you would insert the prefix into the document id field: "IBM!12345". The exclamation mark ('!') is critical here, as it distinguishes the prefix used to determine which shard to direct the document to.
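A minimal sketch of indexing such a document over HTTP; the collection name and field values are illustrative:

[source,bash]
----
# Sketch: index a document whose id carries the "IBM!" routing prefix.
curl -X POST -H "Content-Type: application/json" \
  "http://localhost:8983/solr/myCollection/update?commit=true" \
  --data-binary '[{"id":"IBM!12345","name":"example document"}]'
----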
Then at query time, you include the prefix(es) into your query with the `\_route_` parameter (i.e., `q=solr&_route_=IBM!`) to direct queries to specific shards. In some situations, this may improve query performance because it overcomes network latency when querying all the shards.
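For example, such a routed query might look like the following sketch (the collection name is illustrative):

[source,bash]
----
# Sketch: restrict the query to the shard(s) holding documents with the "IBM!" prefix.
curl "http://localhost:8983/solr/myCollection/select?q=solr&_route_=IBM!"
----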
The `compositeId` router supports prefixes containing up to 2 levels of routing. For example: a prefix routing first by region, then by customer: `USA!IBM!12345`.

Another use case could be if the customer "IBM" has a lot of documents and you want to spread it across multiple shards. The syntax for such a use case would be: `shard_key/num!document_id`, where the `/num` is the number of bits from the shard key to use in the composite hash.
So `IBM/3!12345` will take 3 bits from the shard key and 29 bits from the unique doc id, spreading the tenant over 1/8th of the shards in the collection. Likewise if the num value was 2 it would spread the documents across 1/4th the number of shards. At query time, you include the prefix(es) along with the number of bits into your query with the `\_route_` parameter (i.e., `q=solr&_route_=IBM/3!`) to direct queries to specific shards.
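As a hedged sketch, the bit-count syntax is used the same way when indexing and querying (collection name and field values are illustrative):

[source,bash]
----
# Sketch: spread the "IBM" tenant over 1/8th of the shards (3 prefix bits),
# then route a query to that same slice of shards.
curl -X POST -H "Content-Type: application/json" \
  "http://localhost:8983/solr/myCollection/update?commit=true" \
  --data-binary '[{"id":"IBM/3!12345","name":"example document"}]'
curl "http://localhost:8983/solr/myCollection/select?q=solr&_route_=IBM/3!"
----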
If you do not want to influence how documents are stored, you don't need to specify a prefix in your document ID.