= Activation Sequence Tools
:idprefix:
:idseparator: -

You can use the Artemis CLI to execute activation sequence maintenance/recovery tools for xref:ha.adoc#high-availability-and-failover[Pluggable Quorum Replication].

The 2 main commands are `activation list` and `activation set`, which can be used together to recover from a disaster affecting the local/coordinated activation sequences.

Here is a disaster scenario built around the reference implementation (using https://zookeeper.apache.org/[Apache ZooKeeper] and https://curator.apache.org/[Apache Curator]) to demonstrate the usage of these commands.

== Troubleshooting Case: ZooKeeper Cluster disaster

A proper ZooKeeper cluster should use at least 3 nodes, but what happens if all these nodes crash, losing the activation state information required to run an Artemis replication cluster?

During the disaster, i.e. while the ZooKeeper nodes are no longer reachable, brokers behave as follows:

* live ones shut down (and, if restarted by a script, hang awaiting to connect to the ZooKeeper cluster again)
* replicas become passive, awaiting to connect to the ZooKeeper cluster again

The admin should:

. Stop all Artemis brokers.
. Restart the ZooKeeper cluster.
. Find the brokers with the highest local activation sequence for their `NodeID`, by running this command from the `bin` folder of each broker:
+
[,bash]
----
$ ./artemis activation list --local
Local activation sequence for NodeID=7debb3d1-0d4b-11ec-9704-ae9213b68ac4: 1
----
. From the `bin` folder of the brokers with the highest local activation sequence, force the coordinated activation sequence to that value:
+
[,bash]
----
# assuming 1 to be the highest local activation sequence obtained at the previous step
# for NodeID 7debb3d1-0d4b-11ec-9704-ae9213b68ac4
$ ./artemis activation set --remote --to 1
Forced coordinated activation sequence for NodeID=7debb3d1-0d4b-11ec-9704-ae9213b68ac4 from 0 to 1
----
. Restart all brokers: the previously live ones should be able to become live again.

The higher the number of ZooKeeper nodes, the lower the chance that a disaster like this will require admin intervention, because a larger ZooKeeper cluster can tolerate more failures.
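
Comparing local activation sequences by hand can be tedious when there are many brokers. The following is a minimal bash sketch, not an official tool, that runs `activation list --local` on each broker and reports the highest value found; the broker install paths are hypothetical, and the output parsing assumes the single-line format shown in the step above.

[,bash]
----
#!/usr/bin/env bash
# Hypothetical broker install locations; adjust to your environment.
BROKERS=(/opt/artemis/broker1 /opt/artemis/broker2 /opt/artemis/broker3)

highest=-1
for b in "${BROKERS[@]}"; do
  # "activation list --local" prints the local activation sequence, e.g.
  # "Local activation sequence for NodeID=...: 1"
  out="$("$b/bin/artemis" activation list --local)"
  echo "$b: $out"
  seq="${out##*: }"   # keep the number after the last ": "
  if [[ "$seq" =~ ^[0-9]+$ ]] && (( seq > highest )); then
    highest=$seq
    highest_broker=$b
  fi
done

echo "Highest local activation sequence: $highest (at $highest_broker)"
# The coordinated sequence would then be forced from that broker's bin folder:
#   $ ./artemis activation set --remote --to "$highest"
----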