# Update CubeOS
## USB drive

Copy `CUBE_1.3.10_20200725-0932_a4d7645.pkg` onto a USB drive (formatted FAT or exFAT), then plug the drive into the server.
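The copy step can be scripted. A minimal sketch, assuming the drive is already mounted; the helper name and mount point below are placeholders, not anything CubeOS prescribes:

```shell
# Hypothetical helper: copy the update package to a mounted USB drive
# and flush write buffers so the drive can be safely unplugged.
copy_pkg() {
    pkg="$1"
    dest="$2"
    cp "$pkg" "$dest"/ || return 1
    sync    # make sure the data actually reaches the drive
    echo "copied $pkg -> $dest"
}
```

For example: `copy_pkg CUBE_1.3.10_20200725-0932_a4d7645.pkg /mnt/usb`, adjusting `/mnt/usb` to wherever your system mounts the drive.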
## SCP

Upload `CUBE_1.3.10_20200725-0932_a4d7645.pkg` to the server's `/var/update` directory over SCP:

```
$ scp CUBE_1.3.10_20200725-0932_a4d7645.pkg [email protected]:/var/update/
```
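After the upload, it can be worth confirming the package arrived intact before installing it. A sketch using `sha256sum`; the helper name is hypothetical, and the expected digest is an assumption here — use whatever checksum your package vendor publishes:

```shell
# Hypothetical helper: compare a file's SHA-256 digest to an expected
# value. Run on the server after the scp completes.
verify_pkg() {
    pkg="$1"
    expected="$2"
    actual=$(sha256sum "$pkg" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $pkg"
    else
        echo "MISMATCH: $pkg (got $actual)" >&2
        return 1
    fi
}
```

For example: `verify_pkg /var/update/CUBE_1.3.10_20200725-0932_a4d7645.pkg "$expected_digest"`.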
## Connect to console

```
$ ssh [email protected]
Warning: Permanently added '192.168.1.x' (ECDSA) to the list of known hosts.
Password:
```
## Run Health Check

CLI: `cluster> health`

```
mas> cluster
mas:cluster> health
[ ClusterLink ]
link:
Ping 10.32.2.223 ... OK
[ ClusterSettings ]
etcd:
member 30de40f11f33c57 is healthy: got healthy result from http://10.32.2.223:2379
cluster is healthy
[ HaCluster ]
hacluster:
Last updated: Mon Jul 27 20:29:36 2020
Last change: Mon Jul 27 16:36:03 2020 by root via cibadmin on mas
Stack: corosync
Current DC: mas (version 1.1.14-70404b0) - partition with quorum
1 node and 0 resources configured

Online: [ mas ]

Full list of resources:

[ MsgQueue ]
rabbitmq:
Cluster status of node rabbit@mas ...
[{nodes,[{disc,[rabbit@mas]}]},
 {running_nodes,[rabbit@mas]},
 {cluster_name,<<"rabbit@localhost">>},
 {partitions,[]}]
Listing queues ...
central <[email protected]>
central.mas <[email protected]>
central_fanout_09af33c096b14ed3adc647a04e4468b4 <[email protected]>
cinder-backup <[email protected]>
cinder-backup.mas <[email protected]>
[ IaasDb ]
mysql:
wsrep_cluster_status    Disconnected
wsrep_cluster_size      0
[ VirtualIp ]
vip:
non-HA
haproxy_ha:
non-HA
[ Storage ]
ceph:
  cluster:
    id:     c6e64c49-09cf-463b-9d1c-b6645b4b3b85
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum mas
    mgr: mas(active)
    mds: cephfs-1/1/1 up {0=mas=up:active}
    osd: 6 osds: 6 up, 6 in
    rgw: 1 daemon active

  data:
    pools:   22 pools, 1420 pgs
    objects: 743 objects, 723MiB
    usage:   6.39GiB used, 504GiB / 511GiB avail
    pgs:     1420 active+clean

[ ApiService ]
haproxy:
(mas)
# pxname           svname     check_status
openstack_horizon  localhost  L4OK
grafana_backend    localhost  L4OK
kibana_backend     localhost  L4OK
cube_lmi           localhost  L4OK
apache2:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
|  healthy   |  available   |     0      |
+------------+--------------+------------+
lmi:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
|  healthy   |  available   |     0      |
+------------+--------------+------------+
memcache:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
[ Compute ]
nova:
+----+------------------+------+----------+---------+-------+----------------------------+
| ID | Binary           | Host | Zone     | Status  | State | Updated At                 |
+----+------------------+------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | mas  | internal | enabled | up    | 2020-07-27T12:29:48.000000 |
| 2  | nova-conductor   | mas  | internal | enabled | up    | 2020-07-27T12:29:40.000000 |
| 3  | nova-scheduler   | mas  | internal | enabled | up    | 2020-07-27T12:29:41.000000 |
| 4  | nova-compute     | mas  | nova     | enabled | up    | 2020-07-27T12:29:47.000000 |
+----+------------------+------+----------+---------+-------+----------------------------+
[ Network ]
neutron:
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+
| 485c190c-a74c-4988-9653-6f37e116f55f | Metadata agent     | mas  | None              | :-)   | UP    | neutron-metadata-agent    |
| 751c58d4-ecb9-49d3-8e9a-7ca3aff6bdbf | Linux bridge agent | mas  | None              | :-)   | UP    | neutron-linuxbridge-agent |
| f9801373-ad30-4b62-9a0e-8fea0aab1bcd | DHCP agent         | mas  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| fcc0f77f-491f-48c3-b20d-9d55c80a5a10 | L3 agent           | mas  | nova              | :-)   | UP    | neutron-l3-agent          |
+--------------------------------------+--------------------+------+-------------------+-------+-------+---------------------------+
[ Image ]
glance:
+------------+--------------+------------+
|   Status   | API Endpoint |   Images   |
+------------+--------------+------------+
| unhealthy  | unavailable  |    N/A     |
+------------+--------------+------------+
[ BlockStor ]
cinder:
+------------------+-----------+------+---------+-------+----------------------------+
| Binary           | Host      | Zone | Status  | State | Updated At                 |
+------------------+-----------+------+---------+-------+----------------------------+
| cinder-backup    | mas       | nova | enabled | up    | 2020-07-27T12:29:55.000000 |
| cinder-volume    | cube@ceph | nova | enabled | up    | 2020-07-27T12:29:56.000000 |
| cinder-scheduler | mas       | nova | enabled | up    | 2020-07-27T12:29:48.000000 |
+------------------+-----------+------+---------+-------+----------------------------+
[ FileStor ]
manila:
+----+------------------+------------------+------+---------+-------+----------------------------+
| Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
+----+------------------+------------------+------+---------+-------+----------------------------+
| 1  | manila-share     | mas@cephfsnative | nova | enabled | up    | 2020-07-27T12:29:56.000000 |
| 2  | manila-scheduler | mas              | nova | enabled | up    | 2020-07-27T12:29:56.000000 |
| 3  | manila-share     | mas@generic      | nova | enabled | up    | 2020-07-27T12:29:56.000000 |
+----+------------------+------------------+------+---------+-------+----------------------------+
[ ObjectStor ]
swift:
+------------+--------------+--------+------------+------------+
|   Status   | API Endpoint | Tenant | Containers |  Objects   |
+------------+--------------+--------+------------+------------+
|  healthy   |  available   | admin  |     0      |     0      |
+------------+--------------+--------+------------+------------+
[ Orchestration ]
heat:
+------------+--------------+------------------+
|   Status   | API Endpoint | Engine (Up/Down) |
+------------+--------------+------------------+
| unhealthy  | unavailable  |       0/0        |
+------------+--------------+------------------+
[ LBaaS ]
octavia:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
| unhealthy  | unavailable  |     1      |
+------------+--------------+------------+
[ DNSaaS ]
designate:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
| unhealthy  | unavailable  |     1      |
+------------+--------------+------------+
[ InstanceHa ]
masakari:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
[ DisasterRecovery ]
freezer:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
| unhealthy  | unavailable  |     1      |
+------------+--------------+------------+
es235:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
[ DataPipe ]
zookeeper:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
| unhealthy  |     1      |
+------------+------------+
kafka:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
[ Metrics ]
ceilometer:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
monasca:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
| unhealthy  |     1      |
+------------+------------+
telegraf:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
| unhealthy  |     1      |
+------------+------------+
grafana:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
|  healthy   |  available   |     0      |
+------------+--------------+------------+
[ LogAnalytics ]
filebeat:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
auditbeat:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
logstash:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
es:
+------------+------------+
|   Status   | Error Code |
+------------+------------+
|  healthy   |     0      |
+------------+------------+
kibana:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
|  healthy   |  available   |     0      |
+------------+--------------+------------+
[ Notifications ]
influxdb:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
|  healthy   |  available   |     0      |
+------------+--------------+------------+
kapacitor:
+------------+--------------+------------+
|   Status   | API Endpoint | Error Code |
+------------+--------------+------------+
|  healthy   |  available   |     0      |
+------------+--------------+------------+
[ Node ]
node:
+------------+-------+---------------+---------------+
|            |  CPU  |     Disk      |    Memory     |
+------------+-------+-------+-------+-------+-------+
| Host       | Usage | Usage | Avail | Usage | Avail |
+------------+-------+-------+-------+-------+-------+
| mas        | 16.3% |  10%  |  78G  |  57%  | 9.4G  |
+------------+-------+-------+-------+-------+-------+
mas:cluster>
```
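Several services in the report above show `unhealthy` before the update (glance, heat, octavia, designate, freezer, zookeeper, monasca, telegraf). Saving the report to a file and counting those entries gives a baseline to compare against after the upgrade. A small sketch; the helper name and log filename are placeholders:

```shell
# Hypothetical helper: count lines containing "unhealthy" in a saved
# copy of the `cluster> health` output.
count_unhealthy() {
    grep -c 'unhealthy' "$1" || true   # grep -c exits 1 when the count is 0
}
```

For example, compare `count_unhealthy health-before.log` with `count_unhealthy health-after.log` once the update is done.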
## Stop the cluster

```
mas:cluster> stop
 role     map
 ------------------------------------------------------------
 all      192.168.1.201
 control  192.168.1.201
 network  192.168.1.201
 compute  192.168.1.201
 storage  192.168.1.201
 ------------------------------------------------------------
Enter 'YES' to confirm: YES
mark control host 192.168.1.201 down
```
## Update CubeOS

```
mas> update
mas:update> update
1: local
2: usb
3: server
Enter index: 1
1: CUBE_1.3.10_20200725-0932_a4d7645.pkg
Enter index: 1
Firmware update will require an appliance reboot.
Enter 'YES' to confirm: YES
Formatting partition 2
Installing CUBE_1.3.10_20200725-0932_a4d7645
Running install script
Installing postinstall script
Finished updating. Please reboot appliance.
```
## Reboot

```
mas:update> reboot
Enter 'YES' to confirm: YES
Connection to 192.168.1.x closed by remote host.
Connection to 192.168.1.x closed.
```
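The reboot drops the SSH session, and reconnecting too early simply fails. A generic retry helper can poll until the host answers again — a sketch, with the probe command (`ping`, `ssh`, etc.) left to the caller; the helper name is hypothetical:

```shell
# Hypothetical helper: retry a probe command until it succeeds or the
# attempt budget runs out.
# Usage: wait_for MAX_TRIES DELAY_SECONDS CMD [ARGS...]
wait_for() {
    tries="$1"
    delay="$2"
    shift 2
    i=0
    until "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            return 1    # gave up
        fi
        sleep "$delay"
    done
    return 0
}
```

For example, `wait_for 60 5 ping -c 1 -W 1 192.168.1.x` before attempting `ssh` again.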
## Run bootstrap_cube

```
Welcome to the Cube Appliance
Enter "help" for a list of available commands
mas> boot
mas:boot> bootstrap_cube
bootstraping cube...
bootstrap successfully
if this is a single node start, just run "boot> cluster_sync"
if this is a cluster start, wait until bootstrap_done is done in all nodes
and run "cluster> start" to sync cluster data for all nodes
```
## Run cluster_sync

```
mas:boot> cluster_sync
cluster_sync successfully
```
## Check CubeOS version

```
mas:boot> back
mas> update
mas:update> show
Current: 1.3.10
Rollback: 1.3.5
mas:update> back
mas> firmware
mas:firmware> get_info
1: CUBE_1.3.5_20200426-0942_6c7ef1e
2: CUBE_1.3.10_20200725-0932_a4d7645
Enter index: 2
Firmware Version: Cube Appliance 1.3.10
Installation Date: Jul 27, 2020 08:35:34 PM
Installation Type: Upgrade
Last Boot: Jul 27, 2020 08:48:54 PM
mas:firmware>
```
## Run Service check & repair

```
mas> cluster
mas:cluster> check_repair

 Service Status Report
 ClusterLink       ok  [ link(v) ]
 ClusterSettings   ok  [ etcd(v) ]
 HaCluster         ok  [ hacluster(v) ]
 MsgQueue          ok  [ rabbitmq(v) ]
 IaasDb            ok  [ mysql(v) ]
 VirtualIp         ok  [ vip(v) haproxy_ha(v) ]
 Storage           ok  [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) ]
 ApiService        ok  [ haproxy(v) apache2(v) lmi(v) memcache(v) ]
 Compute           ok  [ nova(v) ]
 Network           ok  [ neutron(v) ]
 Image             ok  [ glance(v) ]
 BlockStor         ok  [ cinder(v) ]
 FileStor          ok  [ manila(v) ]
 ObjectStor        ok  [ swift(v) ]
 Orchestration     ok  [ heat(v) ]
 LBaaS             ok  [ octavia(v) ]
 DNSaaS            ok  [ designate(v) ]
 InstanceHa        ok  [ masakari(v) ]
 DisasterRecovery  ok  [ freezer(v) es235(v) ]
 DataPipe          ok  [ zookeeper(v) kafka(v) ]
 Metrics           ok  [ ceilometer(v) monasca(v) telegraf(v) grafana(v) ]
 LogAnalytics      ok  [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
 Notifications     ok  [ influxdb(v) kapacitor(v) ]
mas:cluster>
```