Version: 1.3

Startup

Single node startup

boot> bootstrap_cube

$ ssh admin@IPADDRESS
Warning: Permanently added '192.168.X.X' (ECDSA) to the list of known hosts.
Password:
Welcome to the Cube Appliance
Enter "help" for a list of available commands
controller> boot
controller:boot> bootstrap_cube
bootstraping cube...
if this is a single node start, just run "boot> cluster_sync"
if this is a cluster start, wait until bootstrap_done is done in all nodes
and run "cluster> start" to sync cluster data for all nodes
controller:boot>

boot> cluster_sync

controller:boot> cluster_sync
cluster_sync successfully
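
For repeatable single-node startups, the same two steps can also be scripted over SSH. Below is a minimal sketch using pexpect that drives the interactive appliance shell exactly as in the transcript above; the address, password handling, and prompt patterns are assumptions for illustration, not part of the product CLI.

# Minimal sketch: drive the interactive Cube CLI over SSH with pexpect.
# The address, password, and prompt patterns are assumptions; adapt them
# to your environment.
import pexpect

HOST = "192.168.1.211"        # assumed management address
PASSWORD = "your-password"    # assumed; use proper secret handling

def single_node_startup():
    # accept-new avoids the first-connection host-key prompt (OpenSSH >= 7.6)
    child = pexpect.spawn(
        f"ssh -o StrictHostKeyChecking=accept-new admin@{HOST}",
        encoding="utf-8", timeout=1800)
    child.expect("Password:")
    child.sendline(PASSWORD)
    child.expect(">")                 # top-level appliance prompt
    child.sendline("boot")
    child.expect(":boot>")
    child.sendline("bootstrap_cube")  # long-running; wait for the prompt to return
    child.expect(":boot>")
    child.sendline("cluster_sync")    # single-node start: sync right away
    child.expect("cluster_sync successfully")
    child.close()

if __name__ == "__main__":
    single_node_startup()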

Run service check & repair

CLI: cluster> check_repair

controller> cluster
controller:cluster> check_repair
Service Status Report
ClusterLink ok [ link(v) ]
ClusterSettings ok [ etcd(v) ]
HaCluster FIXING [ hacluster(3) ]
ok [ hacluster(f) ]
MsgQueue ok [ rabbitmq(v) ]
IaasDb ok [ mysql(v) ]
VirtualIp ok [ vip(v) haproxy_ha(v) ]
Storage ok [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) ]
ApiService ok [ haproxy(v) apache2(v) lmi(v) memcache(v) ]
Compute ok [ nova(v) ]
Network ok [ neutron(v) ]
Image ok [ glance(v) ]
BlockStor ok [ cinder(v) ]
FileStor ok [ manila(v) ]
ObjectStor ok [ swift(v) ]
Orchestration ok [ heat(v) ]
LBaaS ok [ octavia(v) ]
DNSaaS ok [ designate(v) ]
InstanceHa ok [ masakari(v) ]
DisasterRecovery ok [ freezer(v) es235(v) ]
DataPipe ok [ zookeeper(v) kafka(v) ]
Metrics ok [ ceilometer(v) monasca(v) telegraf(v) grafana(v) ]
LogAnalytics ok [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
Notifications ok [ influxdb(v) kapacitor(v) ]
controller:cluster>
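
If you capture check_repair output for monitoring, the report is easy to post-process. The following is a minimal sketch, assuming the layout shown above (service name, status, then the component list in brackets); it reports any service whose final status is not ok.

# Minimal sketch: parse captured check_repair output and flag anything whose
# final status is not "ok". Assumes the report layout shown above.
def parse_status_report(text):
    statuses = {}
    last = None
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] in ("ok", "FIXING"):
            last = parts[0]                 # e.g. "MsgQueue ok [ rabbitmq(v) ]"
            statuses[last] = parts[1]
        elif parts and parts[0] in ("ok", "FIXING") and last:
            statuses[last] = parts[0]       # continuation, e.g. "ok [ hacluster(f) ]"
    return statuses

def failing_services(text):
    return [name for name, status in parse_status_report(text).items()
            if status != "ok"]

with open("check_repair.log") as f:         # captured CLI output (assumed file name)
    print(failing_services(f.read()))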

Cluster startup

Control Node

Startup order: control_node1 > control_node2 > control_node3

Connect to control_node1 with ssh/ikvm/console

Run CLI boot> link_check to ensure all network links are connected

$ ssh admin@CONTROL1
Warning: Permanently added '192.168.X.X' (ECDSA) to the list of known hosts.
Password:
Welcome to the Cube Appliance
Enter "help" for a list of available commands
CONTROL1> boot
CONTROL1:boot> link_check
Ping 192.168.1.211 ... OK
Ping 192.168.1.212 ... OK
Ping 192.168.1.213 ... OK
Ping 192.168.10.211 ... OK
Ping 192.168.10.212 ... OK
Ping 192.168.10.213 ... OK
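
If you also want to verify reachability from an external workstation before logging in, a rough equivalent of the ping test above can be scripted. The address list below is an example from this transcript, and this is not how link_check itself is implemented.

# Rough sketch: ping the cluster addresses from a workstation before startup.
# The address list is an example; this only reproduces the reachability test,
# not link_check's internals.
import subprocess

HOSTS = [
    "192.168.1.211", "192.168.1.212", "192.168.1.213",     # first network in the transcript
    "192.168.10.211", "192.168.10.212", "192.168.10.213",  # second network in the transcript
]

def ping(host):
    # -c 1: single probe, -W 2: two-second timeout (Linux ping options)
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL)
    return result.returncode == 0

for host in HOSTS:
    print(f"Ping {host} ... {'OK' if ping(host) else 'FAILED'}")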

Run bootstrap_cube

Run CLI boot> bootstrap_cube on control_node1 and wait for it (the master node) to finish. Then repeat the same boot> bootstrap_cube on control_node2, and finally on control_node3.

CONTROL1> boot
CONTROL1:boot> bootstrap_cube
bootstraping cube...
if this is a single node start, just run "boot> cluster_sync"
if this is a cluster start, wait until bootstrap_done is done in all nodes
and run "cluster> start" to sync cluster data for all nodes
CONTROL1:boot>

Compute Node

Run bootstrap_cube

Startup order: compute_node1 > compute_node2 > compute_node3

After all three control nodes have completed CLI boot> bootstrap_cube, run boot> bootstrap_cube on the compute nodes one by one.

Connect to compute_node1 with ssh/ikvm/console

$ ssh admin@COMPUTE1
Warning: Permanently added '192.168.X.X' (ECDSA) to the list of known hosts.
Password:
Welcome to the Cube Appliance
Enter "help" for a list of available commands
COMPUTE1> boot
COMPUTE1:boot> bootstrap_cube
bootstraping cube...
if this is a single node start, just run "boot> cluster_sync"
if this is a cluster start, wait until bootstrap_done is done in all nodes
and run "cluster> start" to sync cluster data for all nodes
COMPUTE1:boot>
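
The ordering rules above (control nodes strictly one after another, compute nodes one by one only after all three control nodes are done) can be captured in a small driver script. A sketch, assuming a hypothetical run_bootstrap(host) helper such as the pexpect example in the single-node section:

# Sketch of the required ordering: control nodes strictly in sequence,
# compute nodes one by one only after every control node has finished.
# run_bootstrap(host) is a hypothetical helper (see the pexpect sketch in the
# single-node section) that logs in, runs "boot> bootstrap_cube", and returns
# once the prompt comes back.

CONTROL_NODES = ["192.168.1.211", "192.168.1.212", "192.168.1.213"]  # assumed
COMPUTE_NODES = ["192.168.1.221", "192.168.1.222", "192.168.1.223"]  # assumed

def bootstrap_cluster(run_bootstrap):
    for host in CONTROL_NODES:   # control_node1 > control_node2 > control_node3
        run_bootstrap(host)
    for host in COMPUTE_NODES:   # only after all control nodes are done
        run_bootstrap(host)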

Run cluster> start

Once all nodes have completed CLI boot> bootstrap_cube, run cluster> start on one of the control nodes.

$ ssh admin@CONTROL1
Warning: Permanently added '192.168.X.X' (ECDSA) to the list of known hosts.
Password:
Welcome to the Cube Appliance
Enter "help" for a list of available commands
CONTROL1> cluster
CONTROL1:cluster> start
role map
------------------------------------------------------------
all 192.168.1.211,192.168.1.212,192.168.1.213
control 192.168.1.211,192.168.1.212,192.168.1.213
network 192.168.1.211,192.168.1.212,192.168.1.213
compute 192.168.1.211,192.168.1.212,192.168.1.213
storage 192.168.1.211,192.168.1.212,192.168.1.213
------------------------------------------------------------
mark host 192.168.1.211 up
mark host 192.168.1.212 up
mark host 192.168.1.213 up
cluster start successfully
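
If you archive the cluster> start output, the role map block can be turned back into a role-to-hosts mapping for inventory checks. A small sketch, assuming the layout shown above:

# Sketch: turn the "role map" block of the cluster> start output into
# {role: [hosts]}. Assumes the two-column layout shown above.
def parse_role_map(text):
    roles = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1][:1].isdigit():
            roles[parts[0]] = parts[1].split(",")
    return roles

# Example with the output above:
# parse_role_map(output)["control"] == ["192.168.1.211", "192.168.1.212", "192.168.1.213"]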

Run service check & repair

CLI: cluster> check_repair

CONTROL1> cluster
CONTROL1:cluster> check_repair
Service Status Report
ClusterLink ok [ link(v) ]
ClusterSettings ok [ etcd(v) ]
HaCluster FIXING [ hacluster(3) ]
ok [ hacluster(f) ]
MsgQueue ok [ rabbitmq(v) ]
IaasDb ok [ mysql(v) ]
VirtualIp ok [ vip(v) haproxy_ha(v) ]
Storage ok [ ceph(v) ceph_mon(v) ceph_mgr(v) ceph_mds(v) ceph_osd(v) ceph_rgw(v) ]
ApiService ok [ haproxy(v) apache2(v) lmi(v) memcache(v) ]
Compute ok [ nova(v) ]
Network ok [ neutron(v) ]
Image ok [ glance(v) ]
BlockStor ok [ cinder(v) ]
FileStor ok [ manila(v) ]
ObjectStor ok [ swift(v) ]
Orchestration ok [ heat(v) ]
LBaaS ok [ octavia(v) ]
DNSaaS ok [ designate(v) ]
InstanceHa ok [ masakari(v) ]
DisasterRecovery ok [ freezer(v) es235(v) ]
DataPipe ok [ zookeeper(v) kafka(v) ]
Metrics ok [ ceilometer(v) monasca(v) telegraf(v) grafana(v) ]
LogAnalytics ok [ filebeat(v) auditbeat(v) logstash(v) es(v) kibana(v) ]
Notifications ok [ influxdb(v) kapacitor(v) ]
CONTROL1:cluster>