Version: 2.0

# Restore CubeOS with snapshot

## Back up CubeOS Policies on the existing node

### Cluster check

```
control1> cluster
control1:cluster> check
          Service  Status  Report
      ClusterLink      ok  [ link(v) ]
  ClusterSettings      ok  [ etcd(v) ]
        HaCluster      ok  [ hacluster(v) ]
         MsgQueue      ok  [ rabbitmq(v) ]
           IaasDb      ok  [ mysql(v) ]
        VirtualIp      ok  [ vip(v) haproxy_ha(v) ]
          Storage      ok  [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) ]
       ApiService      ok  [ haproxy(v) httpd(v) lmi(v) memcache(v) ]
          Compute      ok  [ nova(v) ]
          Network      ok  [ neutron(v) ]
            Image      ok  [ glance(v) ]
        BlockStor      ok  [ cinder(v) ]
         FileStor      ok  [ manila(v) ]
       ObjectStor      ok  [ swift(v) ]
    Orchestration      ok  [ heat(v) ]
            LBaaS      ok  [ octavia(v) ]
           DNSaaS      ok  [ designate(v) ]
           K8SaaS      ok  [ k3s(v) rancher(v) ]
       InstanceHa      ok  [ masakari(v) ]
 DisasterRecovery      ok  [ freezer(v) ]
         DataPipe      ok  [ zookeeper(v) kafka(v) ]
          Metrics      ok  [ monasca(v) telegraf(v) grafana(v) ]
     LogAnalytics      ok  [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
    Notifications      ok  [ influxdb(v) kapacitor(v) ]
control1:cluster>
```
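
Each row of the report lists a service, its aggregate status, and the components that were checked; a passing component is marked `(v)`. For reference, a failing component replaces `(v)` with a numeric error code and the service status drops to `NG`, as in this excerpt from the repair step at the end of this guide:

```
            LBaaS      NG  [ octavia(8) ]
```

All services should report `ok` before you continue with the procedure.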

### [Optional] Remove disk

This step is required only if the node contributes disks to the storage cluster.

```
control1:storage> remove_disk
 index          name      size   storage ids
--
     1      /dev/sdb    238.5G           0 1
     2      /dev/sdc    273.4G           2 3
--
Enter the index of disk to be removed: 1
Enter 'YES' to confirm: YES
Remove disk /dev/sdb successfully.
control1:storage> remove_disk
 index          name      size   storage ids
--
     1      /dev/sdc    273.4G           2 3
--
Enter the index of disk to be removed: 1
Enter 'YES' to confirm: YES
Remove disk /dev/sdc successfully.
```

### [Optional] Check the status

Ensure that all OSDs associated with the removed disks (the storage ids listed above) have been removed; remove any that remain with `remove_osd`.

```
control1:storage> status
  cluster:
    id:     c6e64c49-09cf-463b-9d1c-b6645b4b3b85
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum control1,control3,control2 (age 12m)
    mgr: control3(active, since 17m), standbys: control2, control1
    mds: cephfs:1 {0=control3=up:active} 2 up:standby
    osd: 9 osds: 8 up (since 69s), 8 in (since 70s)
    rgw: 3 daemons active (control1, control2, control3)

  task status:
    scrub status:
        mds.control1: idle

  data:
    pools:   23 pools, 720 pgs
    objects: 635 objects, 638 MiB
    usage:   9.0 GiB used, 1012 GiB / 1021 GiB avail
    pgs:     720 active+clean

  io:
    client:   13 KiB/s rd, 15 op/s rd, 0 op/s wr

+----+-----------+-------+-------+--------+---------+--------+---------+-----------+
| id | host      |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+-----------+-------+-------+--------+---------+--------+---------+-----------+
| 3  |           |    0  |    0  |    0   |     0   |    0   |     0   |   exists  |
| 4  | control3  | 1081M |  117G |    0   |     0   |    0   |     0   | exists,up |
| 5  | control2  | 1077M |  117G |    0   |     0   |    0   |     0   | exists,up |
| 6  | control3  | 1065M |  117G |    0   |     0   |    0   |     0   | exists,up |
| 7  | control2  | 1073M |  117G |    0   |     0   |    0   |     0   | exists,up |
| 8  | control3  | 1077M |  135G |    0   |     0   |    0   |     0   | exists,up |
| 9  | control2  | 1073M |  135G |    0   |     0   |    0   |     0   | exists,up |
| 10 | control3  | 1081M |  135G |    0   |     0   |    0   |     0   | exists,up |
| 11 | control2  | 1081M |  135G |    0   |     0   |    0   |     0   | exists,up |
+----+-----------+-------+-------+--------+---------+--------+---------+-----------+
control1:storage> remove_osd
Enter osd id to be removed:
1: osd.3 (hdd)
Enter index: 1
Enter 'YES' to confirm: YES
Remove osd.3 successfully.
```
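
In the table above, osd.3 has no host and is only `exists` (not `up`): it is the orphan left behind by the disks removed in the previous step, which is why it is deleted with `remove_osd`. After the removal, re-running `status` should list only the surviving OSDs. The output below is an illustrative sketch, not a captured session; counts, sizes, and ages will differ on your cluster:

```
control1:storage> status
  cluster:
    id:     c6e64c49-09cf-463b-9d1c-b6645b4b3b85
    health: HEALTH_OK

  services:
    ...
    osd: 8 osds: 8 up, 8 in
    ...
```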

## Replace with a new node

### Choose Advanced option

```
First Time Setup Options:
1: Wizard
2: Advanced
Enter index: 2
```

### [Optional] For Control node

```
Welcome to the Cube Appliance
Enter "help" for a list of available commands
unconfigured> first
unconfigured:first> control_rejoin
Set or clear control rejoin flag?
1: set
2: clear
Enter index: 1
Control rejoin markers set
```

### Pull snapshot from media

```
unconfigured> snapshot
unconfigured:snapshot> pull
Select a media:
1: usb
2: nfs
Enter index: 1
Insert a USB drive into the USB port on the appliance.
Enter 'YES' to confirm: YES
1: CUBE_2.0.0_20201208-130432.765318_control1.snapshot
Enter index: 1
Copying...
Automatically generated on 2020-12-08 13:04:32
Copy complete. It is safe to remove the USB drive.
```

### Apply the settings

```
unconfigured:snapshot> apply
1: CUBE_2.0.0_20201208-054308.133092_unconfigured.snapshot
2: CUBE_2.0.0_20201208-130432.765318_control1.snapshot
Enter index: 2
Automatically generated on 2020-12-08 13:04:32
Date/Time is important for applying changes to an unconfigured box.
Please confirm the current time is good.
  * Local Time: 12/08/2020 00:46:58 EST
Enter 'YES' to confirm: YES
Policy snapshot file applied
```

### Log back in as admin

```
unconfigured:snapshot> exit

control1 Login: admin
Password:
Welcome to the Cube Appliance
Enter "help" for a list of available commands
Notice: your license will expire in 29 days.
        Please contact system administrator to renew the license.
control1>
```

### Check and repair services

```
control1> cluster
control1:cluster> check
          Service  Status  Report
      ClusterLink      ok  [ link(v) ]
  ClusterSettings      ok  [ etcd(v) ]
        HaCluster      ok  [ hacluster(v) ]
         MsgQueue      ok  [ rabbitmq(v) ]
           IaasDb      ok  [ mysql(v) ]
        VirtualIp      ok  [ vip(v) haproxy_ha(v) ]
          Storage      ok  [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) ]
       ApiService      ok  [ haproxy(v) httpd(v) lmi(v) memcache(v) ]
          Compute      ok  [ nova(v) ]
          Network      ok  [ neutron(v) ]
            Image      ok  [ glance(v) ]
        BlockStor      ok  [ cinder(v) ]
         FileStor      ok  [ manila(v) ]
       ObjectStor      ok  [ swift(v) ]
    Orchestration      ok  [ heat(v) ]
            LBaaS      NG  [ octavia(8) ]
           DNSaaS      ok  [ designate(v) ]
           K8SaaS      ok  [ k3s(v) rancher(v) ]
       InstanceHa      ok  [ masakari(v) ]
 DisasterRecovery      ok  [ freezer(v) ]
         DataPipe      ok  [ zookeeper(v) kafka(v) ]
          Metrics      ok  [ monasca(v) telegraf(v) grafana(v) ]
     LogAnalytics      ok  [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
    Notifications      ok  [ influxdb(v) kapacitor(v) ]
control1:cluster> check_repair LBaaS
          Service  Status  Report
            LBaaS      ok  [ octavia(v) ]
control1:cluster>
```
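
`check_repair` takes the name of the failing service exactly as it appears in the `check` report (here `LBaaS`) and re-verifies it after the repair. To confirm the cluster as a whole is healthy again, re-run `check` and make sure every service reports `ok`. An abbreviated sketch, not a captured session:

```
control1:cluster> check
          Service  Status  Report
      ClusterLink      ok  [ link(v) ]
              ...
            LBaaS      ok  [ octavia(v) ]
              ...
control1:cluster>
```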