Version: 2.4

Restore Cube.COS with a snapshot

Back up Cube.COS Policies on the existing node

Cluster check

control1> cluster
control1:cluster> check
Service Status Report
ClusterLink ok [ link(v) clock(v) dns(v) ]
ClusterSys ok [ bootstrap(v) license(v) ]
ClusterSettings ok [ etcd(v) ]
HaCluster ok [ hacluster(v) ]
MsgQueue ok [ rabbitmq(v) ]
IaasDb ok [ mysql(v) ]
VirtualIp ok [ vip(v) haproxy_ha(v) ]
Storage ok [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) rbd_target(v) ]
ApiService ok [ haproxy(v) httpd(v) lmi(v) memcache(v) ]
SingleSignOn ok [ keycloak(v) ]
Compute ok [ nova(v) ]
Baremetal ok [ ironic(v) ]
Network ok [ neutron(v) ]
Image ok [ glance(v) ]
BlockStor ok [ cinder(v) ]
FileStor ok [ manila(v) ]
ObjectStor ok [ swift(v) ]
Orchestration ok [ heat(v) ]
LBaaS ok [ octavia(v) ]
DNSaaS ok [ designate(v) ]
K8SaaS ok [ k3s(v) rancher(v) ]
InstanceHa ok [ masakari(v) ]
DisasterRecovery ok [ freezer(v) ]
BusinessLogic ok [ mistral(v) murano(v) cloudkitty(v) senlin(v) watcher(v) ]
ApiManager ok [ tyk(v) redis(v) mongodb(v) ]
DataPipe ok [ zookeeper(v) kafka(v) ]
Metrics ok [ monasca(v) telegraf(v) grafana(v) ]
LogAnalytics ok [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
Notifications ok [ influxdb(v) kapacitor(v) ]
control1:cluster>
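Every service in the report should show `ok`. When scanning a long report by eye, it can help to filter a captured copy for anything that is not — a minimal sketch, not part of the Cube CLI itself; the sample report below is fabricated for illustration, assuming the real report was saved to a file:

```shell
# Hypothetical helper: flag any service whose status field is not "ok"
# in a saved Service Status Report. The sample lines are fabricated.
cat > /tmp/cluster_check.txt <<'EOF'
ClusterLink ok [ link(v) clock(v) dns(v) ]
MsgQueue failed [ rabbitmq(x) ]
IaasDb ok [ mysql(v) ]
EOF

# Print only the lines whose second field is not "ok".
awk '$2 != "ok"' /tmp/cluster_check.txt
# -> MsgQueue failed [ rabbitmq(x) ]
```

An empty result means every service passed the check.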

[Optional] Remove disk

This action is required only if the node is part of the storage cluster

control1:storage> remove_disk
index name size storage ids
--
1 /dev/sdb 238.5G 0 1
2 /dev/sdc 273.4G 2 3
--
Enter the index of disk to be removed: 1
Enter 'YES' to confirm: YES
Remove disk /dev/sdb successfully.
control1:storage> remove_disk
index name size storage ids
--
1 /dev/sdc 273.4G 2 3
--
Enter the index of disk to be removed: 1
Enter 'YES' to confirm: YES
Remove disk /dev/sdc successfully.

[Optional] Check the status

Ensure all associated OSDs are removed; remove any that remain

control1:storage> status
cluster:
id: c6e64c49-09cf-463b-9d1c-b6645b4b3b85
health: HEALTH_OK

services:
mon: 3 daemons, quorum control1,control3,control2 (age 12m)
mgr: control3(active, since 17m), standbys: control2, control1
mds: cephfs:1 {0=control3=up:active} 2 up:standby
osd: 9 osds: 8 up (since 69s), 8 in (since 70s)
rgw: 3 daemons active (control1, control2, control3)

task status:
scrub status:
mds.control1: idle

data:
pools: 23 pools, 720 pgs
objects: 635 objects, 638 MiB
usage: 9.0 GiB used, 1012 GiB / 1021 GiB avail
pgs: 720 active+clean

io:
client: 13 KiB/s rd, 15 op/s rd, 0 op/s wr
+----+-----------+-------+-------+--------+---------+--------+---------+-----------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
+----+-----------+-------+-------+--------+---------+--------+---------+-----------+
| 3 | | 0 | 0 | 0 | 0 | 0 | 0 | exists |
| 4 | control3 | 1081M | 117G | 0 | 0 | 0 | 0 | exists,up |
| 5 | control2 | 1077M | 117G | 0 | 0 | 0 | 0 | exists,up |
| 6 | control3 | 1065M | 117G | 0 | 0 | 0 | 0 | exists,up |
| 7 | control2 | 1073M | 117G | 0 | 0 | 0 | 0 | exists,up |
| 8 | control3 | 1077M | 135G | 0 | 0 | 0 | 0 | exists,up |
| 9 | control2 | 1073M | 135G | 0 | 0 | 0 | 0 | exists,up |
| 10 | control3 | 1081M | 135G | 0 | 0 | 0 | 0 | exists,up |
| 11 | control2 | 1081M | 135G | 0 | 0 | 0 | 0 | exists,up |
+----+-----------+-------+-------+--------+---------+--------+---------+-----------+
control1:storage> remove_osd
Enter osd id to be removed:
1: osd.3 (hdd)
Enter index: 1
Enter 'YES' to confirm: YES
Remove osd.3 successfully.
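After the removal, re-running `status` should no longer list osd.3. In the table above, a stale OSD is recognizable by a state of `exists` without `up` and an empty host column. A quick filter over a captured copy of the table can surface such rows — a minimal sketch, not a Cube CLI command; the sample rows are fabricated for illustration:

```shell
# Hypothetical check: from a saved copy of the OSD table, list rows whose
# state column shows "exists" without "up" (OSDs that remain but are down).
# The sample table rows are fabricated.
cat > /tmp/osd_table.txt <<'EOF'
| 3  |          | 0     | 0    | 0 | 0 | 0 | 0 | exists    |
| 4  | control3 | 1081M | 117G | 0 | 0 | 0 | 0 | exists,up |
EOF

grep 'exists' /tmp/osd_table.txt | grep -v 'exists,up'
```

Any row printed here is a candidate for `remove_osd`.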

Replace with a new node

Choose the Advanced option

First Time Setup Options:
1: Wizard
2: Advanced
Enter index: 2

[Optional] For Control node

Welcome to the Cube Appliance
Enter "help" for a list of available commands
unconfigured> first
unconfigured:first> control_rejoin
Set or clear control rejoin flag?
1: set
2: clear
Enter index: 1
Control rejoin markers set

Pull snapshot from media

unconfigured> snapshot
unconfigured:snapshot> pull
Select a media:
1: usb
2: nfs
Enter index: 1
Insert a USB drive into the USB port on the appliance.
Enter 'YES' to confirm: YES
1: CUBE_2.0.0_20201208-130432.765318_control1.snapshot
Enter index: 1
Copying...
Automatically generated on 2020-12-08 13:04:32
Copy complete. It is safe to remove the USB drive.

Apply the snapshot

unconfigured:snapshot> apply
1: CUBE_2.0.0_20201208-054308.133092_unconfigured.snapshot
2: CUBE_2.0.0_20201208-130432.765318_control1.snapshot
Enter index: 2
Automatically generated on 2020-12-08 13:04:32
Date/Time is important for applying changes to an unconfigured box.
Please confirm the current time is good.

* Local Time: 12/08/2020 00:46:58 EST

Enter 'YES' to confirm: YES
Policy snapshot file applied
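The CLI asks you to confirm the local time because the snapshot carries its generation timestamp, and applying it to a box whose clock is wrong can cause trouble. A side-by-side print makes the comparison easy — a minimal sketch run from any ordinary shell, not a Cube CLI command; the timestamp shown is the one reported in the transcript above:

```shell
# Hypothetical helper: print the snapshot's generation time next to the
# current local time so the two can be eyeballed before confirming.
snap_time="2020-12-08 13:04:32"   # timestamp reported by the CLI
now=$(date '+%Y-%m-%d %H:%M:%S')
echo "snapshot generated: $snap_time"
echo "current local time: $now"
```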

Log back in as admin

unconfigured:snapshot> exit
control1 Login: admin
Password:
Welcome to the Cube Appliance
Enter "help" for a list of available commands
Notice: your license will expire in 29 days.
Please contact system administrator to renew the license.
control1>

Check and repair services

control1> cluster
control1:cluster> check
Service Status Report
ClusterLink ok [ link(v) clock(v) dns(v) ]
ClusterSys ok [ bootstrap(v) license(v) ]
ClusterSettings ok [ etcd(v) ]
HaCluster ok [ hacluster(v) ]
MsgQueue ok [ rabbitmq(v) ]
IaasDb ok [ mysql(v) ]
VirtualIp ok [ vip(v) haproxy_ha(v) ]
Storage ok [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) rbd_target(v) ]
ApiService ok [ haproxy(v) httpd(v) lmi(v) memcache(v) ]
SingleSignOn ok [ keycloak(v) ]
Compute ok [ nova(v) ]
Baremetal ok [ ironic(v) ]
Network ok [ neutron(v) ]
Image ok [ glance(v) ]
BlockStor ok [ cinder(v) ]
FileStor ok [ manila(v) ]
ObjectStor ok [ swift(v) ]
Orchestration ok [ heat(v) ]
LBaaS ok [ octavia(v) ]
DNSaaS ok [ designate(v) ]
K8SaaS ok [ k3s(v) rancher(v) ]
InstanceHa ok [ masakari(v) ]
DisasterRecovery ok [ freezer(v) ]
BusinessLogic ok [ mistral(v) murano(v) cloudkitty(v) senlin(v) watcher(v) ]
ApiManager ok [ tyk(v) redis(v) mongodb(v) ]
DataPipe ok [ zookeeper(v) kafka(v) ]
Metrics ok [ monasca(v) telegraf(v) grafana(v) ]
LogAnalytics ok [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
Notifications ok [ influxdb(v) kapacitor(v) ]
control1:cluster>