Version: 2.5

Cluster Component Info

To display information about the cluster components, use the following command:

CLI: cluster > check

cc1:cluster> check
Service Status Report
IaasDb           ok  [ mysql(v) ]
Baremetal        ok  [ ironic(v) ]
ClusterSettings  ok  [ etcd(v) nodelist(v) ]
K8SaaS           ok  [ rancher(v) ]
Image            ok  [ glance(v) ]
ClusterLink      ok  [ link(v) clock(v) dns(v) ]
Notifications    ok  [ influxdb(v) kapacitor(v) ]
SingleSignOn     ok  [ k3s(v) keycloak(v) ]
InstanceHa       ok  [ masakari(v) ]
HaCluster        ok  [ hacluster(v) ]
LBaaS            ok  [ octavia(v) ]
VirtualIp        ok  [ vip(v) haproxy_ha(v) ]
MsgQueue         ok  [ rabbitmq(v) ]
Metrics          ok  [ monasca(v) telegraf(v) grafana(v) ]
FileStor         ok  [ manila(v) ]
ObjectStor       ok  [ swift(v) ]
DataPipe         ok  [ zookeeper(v) kafka(v) ]
BlockStor        ok  [ cinder(v) ]
ApiService       ok  [ haproxy(v) httpd(v) skyline(v) lmi(v) memcache(v) ]
LogAnalytics     ok  [ filebeat(v) auditbeat(v) logstash(v) opensearch(v) opensearch-dashboards(v) ]
Orchestration    ok  [ heat(v) ]
Compute          ok  [ nova(v) cyborg(v) ]
DNSaaS           ok  [ designate(v) ]
BusinessLogic    ok  [ senlin(v) watcher(v) ]
Storage          ok  [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) rbd_target(v) ]
ClusterSys       ok  [ bootstrap(v) license(v) ]
Network          ok  [ neutron(v) ]
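Each report line follows the pattern `Service status [ component(v) ... ]`, where `(v)` marks a healthy component. As a minimal sketch (assuming the report has been captured as plain text, e.g. copied from the terminal), any service whose status column is not `ok` can be flagged like this:

```python
import re


def failing_services(report: str) -> list[tuple[str, str]]:
    """Return (service, status) pairs for every line whose status
    column is not 'ok'. Assumes lines shaped like:
    'ServiceName status [ component(v) ... ]'."""
    bad = []
    for line in report.splitlines():
        m = re.match(r"^(\S+)\s+(\S+)\s+\[", line)
        if m and m.group(2) != "ok":
            bad.append((m.group(1), m.group(2)))
    return bad


report = """IaasDb ok [ mysql(v) ]
Baremetal ok [ ironic(v) ]"""
print(failing_services(report))  # [] when every service is healthy
```

The regex ignores lines that do not contain a bracketed component list, so the banner line above the table is skipped automatically.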

For more information about the disks and OSDs, use the following command:

CLI: storage > list_osd

cc1:storage> list_osd
OSD  STATE  HOST  DEV   SERIAL              POWER_ON   USE  REMARK
  0  ok     cc1   sda3  BTWA632602ZU800HGN  2753 days   1%
  1  ok     cc1   sda4  BTWA632602ZU800HGN  2753 days   3%
  2  ok     cc1   sdb3  BTWA632601X9800HGN  2753 days   3%
  3  ok     cc1   sdb4  BTWA632601X9800HGN  2753 days   2%
  4  ok     cc1   sdc3  BTWA632601RW800HGN  2753 days   1%
  5  ok     cc1   sdc4  BTWA632601RW800HGN  2753 days   3%
  6  ok     cc1   sdd3  BTWA6326038Z800HGN  2753 days   2%
  7  ok     cc1   sdd4  BTWA6326038Z800HGN  2753 days   6%
  8  ok     cc1   sde3  BTWA632605GP800HGN  2753 days   3%
  9  ok     cc1   sde4  BTWA632605GP800HGN  2753 days   2%
 10  ok     cc1   sdg3  BTWA632601U9800HGN  2753 days   3%
 11  ok     cc1   sdg4  BTWA632601U9800HGN  2753 days   2%
 12  ok     cc1   sdh3  BTWA632604RJ800HGN  2753 days   2%
 13  ok     cc1   sdh4  BTWA632604RJ800HGN  2753 days   2%
 14  ok     cc1   sdi3  BTWA632602Q3800HGN  2753 days   1%
 15  ok     cc1   sdi4  BTWA632602Q3800HGN  2753 days   2%
 16  ok     cc1   sdj3  BTWA63260373800HGN  2753 days   1%
 17  ok     cc1   sdj4  BTWA63260373800HGN  2753 days   3%
 18  ok     cc1   sdk3  BTWA632605EV800HGN  2753 days   1%
 19  ok     cc1   sdk4  BTWA632605EV800HGN  2753 days   1%
 20  ok     cc1   sdl3  BTWA63250476800HGN  2753 days   1%
 21  ok     cc1   sdl4  BTWA63250476800HGN  2753 days   2%
 22  ok     cc1   sdm3  BTWA6326047A800HGN  2753 days   3%
 23  ok     cc1   sdm4  BTWA6326047A800HGN  2753 days   3%
 24  ok     cc1   sdn3  BTWA632602Q0800HGN  2753 days   2%
 25  ok     cc1   sdn4  BTWA632602Q0800HGN  2753 days   3%
 26  ok     cc1   sdo3  BTWA632605EV800HGN  2753 days   1%
 27  ok     cc1   sdo4  BTWA632605EV800HGN  2753 days   4%
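Note that partitions 3 and 4 of each drive share a serial number, i.e. every physical disk here backs two OSDs. A small sketch (assuming the list_osd output has been captured as text) that groups OSD ids by disk serial, which is handy for seeing which OSDs a failing drive would take with it:

```python
def osds_by_disk(table: str) -> dict[str, list[int]]:
    """Group OSD ids by disk serial number. Assumes the list_osd
    column order: OSD STATE HOST DEV SERIAL POWER_ON USE REMARK,
    where POWER_ON spans two tokens ('2753 days')."""
    disks: dict[str, list[int]] = {}
    for line in table.splitlines():
        parts = line.split()
        if not parts or not parts[0].isdigit():
            continue  # skip the header line
        osd_id, serial = int(parts[0]), parts[4]
        disks.setdefault(serial, []).append(osd_id)
    return disks


table = """OSD STATE HOST DEV SERIAL POWER_ON USE REMARK
0 ok cc1 sda3 BTWA632602ZU800HGN 2753 days 1%
1 ok cc1 sda4 BTWA632602ZU800HGN 2753 days 3%"""
print(osds_by_disk(table))  # {'BTWA632602ZU800HGN': [0, 1]}
```

Splitting on whitespace makes the parser indifferent to how the columns happen to be aligned.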

Error Codes

To interpret the error codes shown in the output above, use the following command:

CLI: cluster > errcode_dump

cc1:cluster> errcode_dump
SERVICE                ERR  DESCRIPTION
auditbeat              1    daemon down
bootstrap              1    service unready
bootstrap              2    settings misconfigured
ceph                   1    health warning
ceph                   2    health error
ceph_mds               1    not all online
ceph_mds               2    cephfs not all mounted
ceph_mds               3    nfs-ganesha fail to mount
ceph_mds               4    nfs-ganesha fail to write
ceph_mds               5    nfs-ganesha not all up
ceph_mgr               1    not all online
ceph_mgr               2    InfluxDB list/create fail
ceph_mgr               3    InfluxDB fail
ceph_mgr               4    module devicehealth fail
ceph_mgr               5    dashboard unavailable
ceph_mgr               6    mem high
ceph_mon               1    msgr2 not enabled
ceph_mon               2    not all online
ceph_mon               3    ops slow
ceph_osd               1    not all up
ceph_osd               2    not all in
ceph_osd               3    disk failing
ceph_rgw               1    not all online
cinder                 1    endpoint unreachable
cinder                 2    api timeout
cinder                 3    scheduler down
cinder                 4    volume down
cinder                 5    backup down
clock                  1    time unsync
cyborg                 3    api down
cyborg                 4    conductor down
cyborg                 5    agent down
designate              1    endpoint unreachable
designate              10   central not all up
designate              11   worker not all up
designate              12   producer not all up
designate              13   mdns not all up
designate              2    api timeout
designate              3    api down
designate              4    central down
designate              5    worker down
designate              6    producer down
designate              7    mdns down
designate              8    named down
designate              9    api not all up
dns                    1    lookup timeout
etcd                   1    daemon down
etcd                   2    status offline
filebeat               1    daemon down
glance                 1    endpoint unreachable
glance                 3    api down
grafana                1    daemon down
grafana                2    port not responding
hacluster              1    control corosync down
hacluster              10   compute not all online
hacluster              11   compute offline
hacluster              2    control pacemaker down
hacluster              3    control pcsd down
hacluster              4    control not all online
hacluster              5    control offline
hacluster              6    cinder-volume down
hacluster              7    ovndb_servers fail
hacluster              8    compute pcsd down
hacluster              9    compute pacemaker_remote down
haproxy                1    control haproxy down
haproxy_ha             1    active control haproxy-ha down
heat                   1    endpoint unreachable
heat                   2    api timeout
heat                   3    engine down
httpd                  1    control httpd down
httpd                  2    port not responding
influxdb               1    daemon down
influxdb               2    port not responding
ironic                 2    api timeout
ironic                 3    api down
ironic                 4    conductor down
ironic                 5    inspector down
ironic                 6    not all online
k3s                    1    pods fewer than expected
kafka                  1    daemon down
kafka                  2    failed to get sys/host metrics
kafka                  3    failed to get instance metrics
kafka                  4    queue has no leader
kafka                  5    queue has no coordinator
kafka                  6    built-in queues missing
kapacitor              1    daemon down
kapacitor              2    port not responding
keycloak               1    pods fewer than expected
license                1    check fail
link                   1    ping fail
lmi                    1    service down
lmi                    2    service not responding
logstash               1    daemon down
manila                 2    api timeout
manila                 3    scheduler down
manila                 4    share down
masakari               1    endpoint unreachable
masakari               3    api down
masakari               4    engine down
masakari               5    processmonitor down
masakari               6    hostmonitor down
masakari               7    instancemonitor down
memcache               1    daemon down
monasca                3    persister down
monasca                4    collector down
monasca                5    forwarder down
monasca                6    statsd down
mysql                  1    disconnected
mysql                  2    not all online
mysql                  3    state unsync
neutron                2    api timeout
neutron                3    metadata not all up
neutron                4    vpn not all up
neutron                5    control not all up
neutron                6    port create fail
nova                   1    endpoint unreachable
nova                   2    api timeout
nova                   3    scheduler down
nova                   4    conductor down
nova                   5    compute down
octavia                1    endpoint unreachable
octavia                3    api down
octavia                4    housekeeping down
octavia                5    octavia-hm0 port missing
octavia                6    octavia-hm0 link missing
octavia                7    octavia-hm0 route missing
octavia                8    worker down
octavia                9    health-manager down
opensearch             1    daemon down
opensearch             2    not all online
opensearch             3    status not green
opensearch_dashboards  1    daemon down
opensearch_dashboards  2    port not responding
rabbitmq               1    daemon down
rabbitmq               2    not all online
rabbitmq               3    partitions exist
rancher                1    pods fewer than expected
rbd_target             1    api down
rbd_target             2    gw down
senlin                 2    api timeout
senlin                 3    engine down
senlin                 4    conductor down
senlin                 5    health-manager down
swift                  1    endpoint unreachable
swift                  2    api timeout
swift                  3    objects missing
telegraf               1    daemon down
vip                    1    active control IP down
vip                    2    inactive control IP active
watcher                2    api timeout
watcher                3    applier down
watcher                4    engine down
zookeeper              1    daemon down
zookeeper              2    not all online
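When a component in the check report carries an error code, the dump above resolves it to a description. As a sketch (assuming the errcode_dump output has been saved as text), the table can be turned into a (service, code) lookup:

```python
def parse_errcodes(dump: str) -> dict[tuple[str, int], str]:
    """Build a {(service, code): description} lookup from errcode_dump
    output. Assumes three columns, SERVICE ERR DESCRIPTION, where only
    the description may contain spaces."""
    table = {}
    for line in dump.splitlines():
        parts = line.split(None, 2)  # split on the first two gaps only
        if len(parts) == 3 and parts[1].isdigit():
            table[(parts[0], int(parts[1]))] = parts[2]
    return table


codes = parse_errcodes("""SERVICE ERR DESCRIPTION
ceph 1 health warning
ceph 2 health error""")
print(codes[("ceph", 2)])  # health error
```

Because the header's ERR field is not numeric, it is filtered out without any special-casing.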