Control Node Replacement Guide
Remove Control Node from Cluster
This guide only works for non-master controllers.
Connect to controller
$ ssh root@192.168.1.x
Warning: Permanently added '192.168.1.x' (ECDSA) to the list of known hosts.
Password:
Remove node
Removing cc3 from the cluster
cc1> cluster
cc1:cluster> remove_node
1: compute01
2: cc3
3: compute02
4: cc1
5: cc2
Enter index: 2
this command is only applicable for compute or storage nodes
make sure its running instances have been properly terminated or migrated
shutdown the target host before proceeding
Enter 'YES' to confirm: YES
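Before entering YES, it is worth confirming that cc3 hosts no running instances. A minimal check, assuming the standard OpenStack CLI and admin credentials are available on a controller:
$ openstack server list --all-projects --host cc3
The list should come back empty; otherwise migrate or terminate the remaining instances first.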
Back up cc3 policies
From your local PC terminal
$ scp -r root@cc3_IPADDRESS:/etc/policies Downloads/cc3_policy
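To confirm the files were copied, list them from your local terminal:
$ ls -lR Downloads/cc3_policy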
Shut down cc3
$ ssh root@192.168.1.103
Warning: Permanently added '192.168.1.103' (ECDSA) to the list of known hosts.
Password:
Welcome to the Cube Appliance
Enter "help" for a list of available commands
cc3> shutdown
Enter 'YES' to confirm: YES
Connection to 192.168.1.103 closed by remote host.
Connection to 192.168.1.103 closed.
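Before preparing the replacement, you can confirm cc3 is down by pinging it from your local terminal:
$ ping -c 3 192.168.1.103
All three probes should time out once the host has powered off.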
Add Control Node to Cluster
Prepare a new node with CubeCOS installed
Configuration
Reconfigure the new cc3 node by following any of the options below:
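Two preparation steps may help here, both from your local PC terminal. First, because the new cc3 is a fresh install, its SSH host key will have changed; if you connected to the old cc3 before, remove the stale entry from known_hosts (assuming the replacement reuses 192.168.1.103):
$ ssh-keygen -R 192.168.1.103
Then the policies backed up earlier can be copied onto the new node. A sketch, assuming the same /etc/policies path applies on the replacement:
$ scp -r Downloads/cc3_policy/* root@cc3_IPADDRESS:/etc/policies/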
Connect to cc3
$ ssh root@192.168.1.103
Warning: Permanently added '192.168.1.103' (ECDSA) to the list of known hosts.
Password:
Check & Repair services
Welcome to the Cube Appliance
Enter "help" for a list of available commands
cc3> cluster
cc3:cluster> check
Service Status Report
ClusterLink ok [ link(v) clock(v) dns(v) ]
ClusterSys ok [ bootstrap(v) license(v) ]
ClusterSettings ok [ etcd(v) ]
HaCluster ok [ hacluster(v) ]
MsgQueue ok [ rabbitmq(v) ]
IaasDb ok [ mysql(v) ]
VirtualIp ok [ vip(v) haproxy_ha(v) ]
Storage ok [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) rbd_target(v) ]
ApiService ok [ haproxy(v) httpd(v) lmi(v) memcache(v) ]
SingleSignOn ok [ keycloak(v) ]
Compute ok [ nova(v) ]
Baremetal ok [ ironic(v) ]
Network ok [ neutron(v) ]
Image ok [ glance(v) ]
BlockStor ok [ cinder(v) ]
FileStor ok [ manila(v) ]
ObjectStor ok [ swift(v) ]
Orchestration ok [ heat(v) ]
LBaaS ok [ octavia(v) ]
DNSaaS ok [ designate(v) ]
K8SaaS ok [ k3s(v) rancher(v) ]
InstanceHa ok [ masakari(v) ]
DisasterRecovery ok [ freezer(v) ]
BusinessLogic ok [ mistral(v) murano(v) cloudkitty(v) senlin(v) watcher(v) ]
ApiManager ok [ tyk(v) redis(v) mongodb(v) ]
DataPipe ok [ zookeeper(v) kafka(v) ]
Metrics ok [ monasca(v) telegraf(v) grafana(v) ]
LogAnalytics ok [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
Notifications ok [ influxdb(v) kapacitor(v) ]
cc3:cluster>
Check HA status
cc3:cluster> health HaCluster
[ HaCluster ]
hacluster:
Last updated: Fri Aug 7 16:52:30 2020 Last change: Fri Aug 7 13:11:58 2020 by hacluster via crmd on cc1
Stack: corosync
Current DC: cc1 (version 1.1.14-70404b0) - partition with quorum
5 nodes and 3 resources configured
Online: [ compute01 compute02 cc1 cc2 cc3 ]
Full list of resources:
vip (ocf::heartbeat:IPaddr2): Started cc3
haproxy (systemd:haproxy-ha): Started cc3
cinder-volume (systemd:cinder-volume): Started cc2
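All five nodes should be listed as Online and every resource Started. As a final check, you can verify that the cluster virtual IP still answers; VIP_IPADDRESS below is a placeholder for your cluster's virtual IP:
$ ping -c 3 VIP_IPADDRESS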