Installation of Three node OpenStack Queens Cluster – Part Five


Installing OpenStack Nova Compute on another node

In this fifth part, we install Nova Compute on another node. The node will run Nova Compute, libvirt, the L2 agent, and Open vSwitch.

Step 1: Install KVM.

[root@node01 ~]# yum -y install qemu-kvm libvirt virt-install bridge-utils
 Loaded plugins: fastestmirror
 Loading mirror speeds from cached hostfile

Check its status

[root@node01 ~]# systemctl status libvirtd
 ● libvirtd.service - Virtualization daemon
    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
    Active: active (running) since Thu 2019-03-07 10:38:25 EST; 3 days ago
      Docs: man:libvirtd(8)
            https://libvirt.org
  Main PID: 78751 (libvirtd)
     Tasks: 19 (limit: 32768)
    CGroup: /system.slice/libvirtd.service
            ├─78751 /usr/sbin/libvirtd
            ├─78859 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-s…
            └─78860 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-s…
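Before going further, it is worth confirming that the CPU actually exposes hardware virtualization extensions. This check is not part of the original walkthrough, but it saves you from a compute node that silently falls back to slow software emulation:

```shell
# Check for hardware virtualization support: vmx = Intel VT-x, svm = AMD-V.
# If neither flag is present, KVM acceleration is unavailable and Nova would
# need virt_type=qemu under [libvirt] in nova.conf.
if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
    echo "hardware virtualization available"
else
    echo "no hardware virtualization - set virt_type=qemu"
fi
```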

Step 2: Install openstack-nova-compute:

[root@node01 ~]# yum --enablerepo=centos-openstack-queens,epel -y install openstack-nova-compute
 Loaded plugins: fastestmirror
 Loading mirror speeds from cached hostfile
 epel/x86_64/metalink                                                                      |  55 kB  00:00:00     
 base: mirror.ucu.ac.ug
 centos-qemu-ev: mirror.ucu.ac.ug
 epel: mirror.layeronline.com
 extras: mirror.ucu.ac.ug
 updates: mirror.ucu.ac.ug

Step 3: Configure Openstack Nova Compute

Back up the original Nova configuration file and create a new one as shown below:

[root@node01 ~]# mv /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@node01 ~]# vim /etc/nova/nova.conf
# new file
[DEFAULT]
# define the IP address of the new node
my_ip = 192.168.122.132
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
log_dir = /var/log/nova
# RabbitMQ connection info set in the controller node
transport_url = rabbit://openstack:[email protected]
[api]
auth_strategy = keystone
# enable VNC
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.122.130:6080/vnc_auto.html
# Glance connection info configured in the controller node
[glance]
api_servers = http://192.168.122.130:9292
[oslo_concurrency]
lock_path = $state_path/tmp
# Keystone auth info configured in the controller node
[keystone_authtoken]
www_authenticate_uri = http://192.168.122.130:5000
auth_url = http://192.168.122.130:5000
memcached_servers = 192.168.122.130:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
# do not forget the password set when creating the nova service
password = pepe123
[placement]
# configured in the controller node
auth_url = http://192.168.122.130:5000
os_region_name = RegionOne
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
# do not forget the password set when creating the placement service
password = pepe1234
[wsgi]
api_paste_config = /etc/nova/api-paste.ini

Change the permissions and group ownership of the file created above:

[root@node01 ~]# chmod 640 /etc/nova/nova.conf
[root@node01 ~]# chgrp nova /etc/nova/nova.conf
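Before starting the service, a short loop can confirm that the critical options actually made it into the file. This sanity check is not part of the original guide; the `CONF` variable is a hypothetical override so the same snippet can be pointed at any copy of the file:

```shell
# Sanity-check nova.conf: every key listed here must appear at the start of
# a line as "key = value". CONF defaults to the file edited above, but can
# be overridden to test against another copy.
CONF=${CONF:-/etc/nova/nova.conf}
for key in my_ip transport_url api_servers auth_url password; do
    if grep -Eq "^${key} *=" "$CONF"; then
        echo "ok: ${key}"
    else
        echo "MISSING: ${key}"
    fi
done
```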

Add the relevant ports to the firewall, then start and enable nova-compute:

[root@node01 ~]# firewall-cmd --add-port=5900-5999/tcp --permanent
 success
[root@node01 ~]# firewall-cmd --reload
 success
[root@node01 ~]# systemctl start openstack-nova-compute
[root@node01 ~]# systemctl enable openstack-nova-compute
 Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

Now for the interesting part: on the controller, discover the new host and confirm that it appears in the compute service list.

[root@controller ~(keystone)]# su -s /bin/bash nova -c "nova-manage cell_v2 discover_hosts"
[root@controller ~(keystone)]# openstack compute service list
 +----+------------------+------------+----------+---------+-------+----------------------------+
 | ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
 +----+------------------+------------+----------+---------+-------+----------------------------+
 |  3 | nova-consoleauth | controller | internal | enabled | up    | 2019-03-11T10:35:04.000000 |
 |  4 | nova-conductor   | controller | internal | enabled | up    | 2019-03-11T10:35:11.000000 |
 |  5 | nova-scheduler   | controller | internal | enabled | up    | 2019-03-11T10:35:05.000000 |
 |  6 | nova-compute     | node01     | nova     | enabled | up    | 2019-03-11T10:35:10.000000 |
 +----+------------------+------------+----------+---------+-------+----------------------------+
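Running `discover_hosts` by hand works, but it has to be repeated for every node added later. As an optional tweak not covered in the original guide, Nova's scheduler can poll for new hosts on its own via the `discover_hosts_in_cells_interval` option. The sketch below appends that setting to the controller's nova.conf; `CONF` is overridable purely for testing:

```shell
# Optional, on the controller: let nova-scheduler discover new compute hosts
# automatically every 300 seconds instead of running discover_hosts manually.
CONF=${CONF:-/etc/nova/nova.conf}
cat >> "$CONF" <<'EOF'

[scheduler]
discover_hosts_in_cells_interval = 300
EOF
```

Restart openstack-nova-scheduler on the controller after changing this for it to take effect.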

There we go, guys. Stay tuned for the next part in this quest to get our three-node cluster up and running. Below are links to the previous parts of this series:

Installation of Openstack three Node Cluster on CentOS 7 Part One

Installation of Three node OpenStack Queens Cluster – Part Two

Installation of Three node OpenStack Queens Cluster – Part Three

Installation of Three node OpenStack Queens Cluster – Part Four

Below is the next guide:

Installation of Three node OpenStack Queens Cluster – Part Six


A systems engineer with excellent skills in systems administration, cloud computing, systems deployment, virtualization, containers, and a certified ethical hacker.