Make shard balancing deterministic if weights are identical
The iteration order of a HashMap's key set can differ across runs. This can cause non-deterministic results in shard balancing if weights are identical and multiple shards of the same index are eligible for relocation. This commit adds a tie-breaker based on the shard ID that prioritises the lowest shard ID. This also makes `AddIncrementallyTests#testAddNodesAndIndices` reproducible. Closes #4867
This commit is contained in:
parent 3158776438
commit 592a411b2c
@@ -804,7 +804,10 @@ public class BalancedShardsAllocator extends AbstractComponent implements Shards
                     if ((srcDecision = maxNode.removeShard(shard)) != null) {
                         minNode.addShard(shard, srcDecision);
                         final float delta = weight.weight(operation, this, minNode, idx) - weight.weight(operation, this, maxNode, idx);
-                        if (delta < minCost) {
+                        if (delta < minCost ||
+                                (candidate != null && delta == minCost && candidate.id() > shard.id())) {
+                            /* this last line is a tie-breaker to make the shard allocation alg deterministic
+                             * otherwise we rely on the iteration order of the index.getAllShards() which is a set.*/
                             minCost = delta;
                             candidate = shard;
                             decision = new Decision.Multi().add(allocationDecision).add(rebalanceDecision);
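For illustration, below is a minimal, self-contained sketch of the same tie-breaking idea, separate from the actual BalancedShardsAllocator code; the Shard record and the computeWeight() and pickCandidate() names are hypothetical stand-ins, and the constant weight function mimics the "weights are identical" case described above.

// Sketch only: shows why preferring the lowest shard ID on equal weight makes
// the selection independent of the input collection's iteration order.
import java.util.List;

public class TieBreakerSketch {

    record Shard(int id) {}

    // Hypothetical weight function: every shard gets the same weight here,
    // mirroring the case where weights are identical.
    static float computeWeight(Shard shard) {
        return 1.0f;
    }

    // Pick the relocation candidate. Without the second clause of the if, the
    // result would depend on iteration order (e.g. a HashMap's key set).
    static Shard pickCandidate(List<Shard> eligible) {
        Shard candidate = null;
        float minCost = Float.POSITIVE_INFINITY;
        for (Shard shard : eligible) {
            float delta = computeWeight(shard);
            if (delta < minCost
                    || (candidate != null && delta == minCost && candidate.id() > shard.id())) {
                // Tie-breaker: on equal weight, prefer the lowest shard ID so the
                // outcome is deterministic regardless of iteration order.
                minCost = delta;
                candidate = shard;
            }
        }
        return candidate;
    }

    public static void main(String[] args) {
        // Regardless of how the shards are ordered, shard 0 is always chosen.
        System.out.println(pickCandidate(List.of(new Shard(2), new Shard(0), new Shard(1))).id());
    }
}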