Version: 2.5

Control node Replacement Guide

Remove Control node from cluster

This guide only works for non-master controllers.

Connect to controller

$ ssh admin@192.168.1.x
Warning: Permanently added '192.168.1.x' (ECDSA) to the list of known hosts.
Password:

Remove node

Removing control03 from the cluster

control01> cluster
control01:cluster> remove_node
1: compute01
2: control03
3: compute02
4: control01
5: control02
Enter index: 2
this command is only applicable for compute or storage nodes
make sure its running instances have been properly terminated or migrated
shutdown the target host before proceeding
Enter 'YES' to confirm: YES

Back up control03 policies

From your local PC's terminal:

$ scp -r root@control03_IPADDRESS:/etc/policies Downloads/control03_policy
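To confirm the copy completed intact, you can compare checksums of the source tree against the local backup. A minimal sketch using standard shell tools, assuming root ssh access to control03 (as in the scp command above) and a Linux machine with md5sum; remote.md5 and local.md5 are scratch files introduced here:

$ ssh root@control03_IPADDRESS 'cd /etc/policies && find . -type f -exec md5sum {} +' | sort > remote.md5
$ (cd Downloads/control03_policy && find . -type f -exec md5sum {} +) | sort > local.md5
$ diff remote.md5 local.md5 && echo "backup verified"

An empty diff means every file arrived with matching contents.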

Shutdown control03

$ ssh admin@192.168.1.103
Warning: Permanently added '192.168.1.103' (ECDSA) to the list of known hosts.
Password:
Welcome to the Cube Appliance
Enter "help" for a list of available commands
control03> shutdown
Enter 'YES' to confirm: YES
Connection to 192.168.1.103 closed by remote host.
Connection to 192.168.1.103 closed.
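Before bringing up the replacement, it is worth confirming that the old host has actually gone down. A minimal check, assuming the management network permits ICMP:

$ ping -c 3 -W 2 192.168.1.103

Once all probes time out, the 192.168.1.103 address is free for the replacement node to take over.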

Add Control node to cluster

Prepare a new node with Cube.COS installed

Configuration

Reconfigure the new node as control03, reusing the hostname and IP address of the removed node, then verify it with the steps below:

Connect to control03

$ ssh admin@192.168.1.103
Warning: Permanently added '192.168.1.103' (ECDSA) to the list of known hosts.
Password:

Check & Repair services

Welcome to the Cube Appliance
Enter "help" for a list of available commands
control03> cluster
control03:cluster> check
Service Status Report
ClusterLink ok [ link(v) clock(v) dns(v) ]
ClusterSys ok [ bootstrap(v) license(v) ]
ClusterSettings ok [ etcd(v) ]
HaCluster ok [ hacluster(v) ]
MsgQueue ok [ rabbitmq(v) ]
IaasDb ok [ mysql(v) ]
VirtualIp ok [ vip(v) haproxy_ha(v) ]
Storage ok [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) rbd_target(v) ]
ApiService ok [ haproxy(v) httpd(v) lmi(v) memcache(v) ]
SingleSignOn ok [ keycloak(v) ]
Compute ok [ nova(v) ]
Baremetal ok [ ironic(v) ]
Network ok [ neutron(v) ]
Image ok [ glance(v) ]
BlockStor ok [ cinder(v) ]
FileStor ok [ manila(v) ]
ObjectStor ok [ swift(v) ]
Orchestration ok [ heat(v) ]
LBaaS ok [ octavia(v) ]
DNSaaS ok [ designate(v) ]
K8SaaS ok [ k3s(v) rancher(v) ]
InstanceHa ok [ masakari(v) ]
DisasterRecovery ok [ freezer(v) ]
BusinessLogic ok [ mistral(v) murano(v) cloudkitty(v) senlin(v) watcher(v) ]
ApiManager ok [ tyk(v) redis(v) mongodb(v) ]
DataPipe ok [ zookeeper(v) kafka(v) ]
Metrics ok [ monasca(v) telegraf(v) grafana(v) ]
LogAnalytics ok [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
Notifications ok [ influxdb(v) kapacitor(v) ]
control03:cluster>
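All service groups above report ok. On a freshly rebuilt node some may not; a quick way to triage a long report is to capture it and filter out the healthy lines. A minimal sketch, assuming you have saved the report to a local file named check_output.txt:

$ grep -v ' ok ' check_output.txt

Apart from the "Service Status Report" header, any line this prints names a service group that needs attention before proceeding.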

Check HA status

control03:cluster> health HaCluster

[ HaCluster ]
hacluster:
Last updated: Fri Aug 7 16:52:30 2020 Last change: Fri Aug 7 13:11:58 2020 by hacluster via crmd on control01
Stack: corosync
Current DC: control01 (version 1.1.14-70404b0) - partition with quorum
5 nodes and 3 resources configured

Online: [ compute01 compute02 control01 control02 control03 ]

Full list of resources:

vip (ocf::heartbeat:IPaddr2): Started control03
haproxy (systemd:haproxy-ha): Started control03
cinder-volume (systemd:cinder-volume): Started control02
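Here vip and haproxy are Started on control03, which shows the rebuilt node has rejoined the HA cluster and is currently hosting the virtual IP. As an end-to-end check you can probe the VIP itself; a minimal sketch, where VIP_ADDRESS is a placeholder for your cluster's virtual IP (not shown in this guide) and -k tolerates a self-signed certificate:

$ curl -k -s -o /dev/null -w '%{http_code}\n' https://VIP_ADDRESS/

Any HTTP status code in the output confirms haproxy is answering on the virtual IP.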