The Cluster
Below is our cluster setup. Note that we are constrained by the devices we have and the service provider we use, so your configuration might be different. Different network topologies might require changes to the following instructions, so be aware of what you are doing.
CentOS 7 Installation
- Install CentOS 7 with the following configuration on the head node of your cluster:
You need the minimal version of CentOS; you can download the .iso file here (https://www.centos.org/download/).
hostname: controller
password: YOURPASSWORD
Choose “manually configure partitions”, delete all the existing partitions, and then click “automatically create partitions”. Adjust the amount of capacity assigned to the root partition and make it as large as possible. If you are not going to use /home at all, you can remove that partition and allocate its space to /root.
- Do the same for all other nodes in the cluster and set the hostnames as follows:
hostname: compute2, compute3, compute4, compute5, compute6, compute7
password: YOURPASSWORD
For partitioning, choose “Use All Space” and check “Review and modify partitioning layout”; then you can remove the lv_home (/home) partition and add all the free space to lv_root (/).
Network Configuration
In our scenario, the controller node has two interfaces: interface 1 (eno1) is connected to the public network, and interface 2 (eno2) is connected to a local switch that links all the nodes in the cluster.
1. Controller (compute1 and gateway):
- Login with root username and password
- Stop the first network interface (eno1) from being managed by the NetworkManager daemon
vi /etc/sysconfig/network-scripts/ifcfg-eno1
NM_CONTROLLED=no
save and exit.
- Set a static private IP address for the controller (192.168.0.1)
vi /etc/sysconfig/network-scripts/ifcfg-eno2
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
save and exit.
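As a quick sanity check on the addressing scheme, the network implied by IPADDR and NETMASK can be computed in plain bash (a standalone sketch; the values match the controller configuration above):

```shell
# Compute the network address implied by IPADDR and NETMASK
# (pure bash arithmetic; no system state is touched).
ip=192.168.0.1
mask=255.255.255.0

IFS=. read -r i1 i2 i3 i4 <<< "$ip"
IFS=. read -r m1 m2 m3 m4 <<< "$mask"

# Bitwise-AND each octet of the address with the mask.
net="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
echo "network: $net/24"
```

A /24 netmask leaves host addresses .1 through .254, which comfortably covers the seven nodes in this cluster.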
- Restart the network service.
systemctl restart network
- Check that your Internet connection is working:
ping www.google.com
- Update your repositories and install openssh-server, openssh-clients, nano, wget, and net-tools:
yum -y update
yum install -y openssh-server openssh-clients nano wget net-tools
- Change the state of SELINUX to permissive:
nano /etc/selinux/config
SELINUX=permissive
- Set the host name entries for the cluster nodes:
nano /etc/hosts
192.168.0.1 controller compute1 gateway
192.168.0.2 compute2
192.168.0.3 compute3
...
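Since the entries follow a fixed pattern, a small shell loop can generate them instead of typing each one (a sketch; the addressing follows the plan above — append its output to /etc/hosts on each machine):

```shell
# Build the /etc/hosts entries for the whole cluster. The controller
# doubles as compute1 and gateway, as described in the text.
hosts="192.168.0.1 controller compute1 gateway"
for n in 2 3 4 5 6 7; do
    hosts="$hosts
192.168.0.$n compute$n"
done
printf '%s\n' "$hosts"
```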
- Disable Network Manager and firewall to avoid conflicts with OpenStack
systemctl stop firewalld
systemctl disable firewalld
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
systemctl restart network
2. NAT Configuration on the Controller Node:
To provide Internet access to the other machines in the cluster, you should enable NAT. If all machines in the cluster get public IPs by default, you can skip this step.
- Enable the NAT forwarding from iptables to give Internet access to compute hosts by executing the following commands:
yum install -y iptables-services
chkconfig iptables on
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
iptables -A FORWARD -i eno2 -j ACCEPT
iptables -A FORWARD -o eno2 -j ACCEPT
service iptables save
service iptables restart
- Check that iptables has been properly configured:
iptables -S
The output should include these:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A FORWARD -i eno2 -j ACCEPT
-A FORWARD -o eno2 -j ACCEPT
- To make sure you do not lose the iptables configuration, do the following:
vi /etc/sysconfig/iptables-config
IPTABLES_SAVE_ON_RESTART="yes"
IPTABLES_SAVE_ON_STOP="yes"
service iptables restart
- Enable forwarding
nano /etc/sysctl.conf
net.ipv4.ip_forward=1
- Reboot the controller machine and make sure the changes are persistent.
3. Compute Nodes
- Login with root username and password
- Set a static private IP address for each node
vi /etc/sysconfig/network-scripts/ifcfg-eno2
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.0.2 (use 192.168.0.3 for compute3, and so on)
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes
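Because each compute node's ifcfg-eno2 differs only in its IPADDR, the whole file can be generated from a template (a sketch; make_ifcfg is a helper invented here, not a system tool — redirect its output into /etc/sysconfig/network-scripts/ifcfg-eno2 on the matching node):

```shell
# make_ifcfg N: print the ifcfg-eno2 contents for node 192.168.0.N,
# matching the settings given in the text (hypothetical helper).
make_ifcfg() {
    cat <<EOF
DEVICE=eno2
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.0.$1
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes
EOF
}

make_ifcfg 2   # compute2; use 3 for compute3, and so on
```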
- Define some nameservers for your compute nodes
vi /etc/resolv.conf
nameserver 128.250.66.5 # this is our first private DNS server
nameserver 128.250.201.5 # this is our second private DNS server
nameserver 8.8.8.8
- Restart your network service.
service network restart
- Update your repositories and install openssh-server, openssh-clients, nano, wget, and net-tools:
yum -y update
yum install -y openssh-server openssh-clients nano wget net-tools
- Change the state of SELINUX to permissive:
nano /etc/selinux/config
SELINUX=permissive
- Set the host name entries for the cluster nodes:
nano /etc/hosts
192.168.0.1 controller compute1 gateway
192.168.0.2 compute2
192.168.0.3 compute3
...
- Disable Network Manager and firewall to avoid conflicts with OpenStack Networking Service.
systemctl stop firewalld
systemctl disable firewalld
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
systemctl restart network
- Reboot all machines to make sure the changes are persistent.
OpenStack Installation
Make sure all nodes (controller, compute2, compute3, …) are already configured and ready. Please refer to https://www.rdoproject.org/install/quickstart/ if you are not sure about the previous steps for your cluster setup.
- Make sure your /etc/environment is populated:
vi /etc/environment
LANG=en_US.utf-8
LC_ALL=en_US.utf-8
- Install the RDO release:
yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
yum update -y
- Install openstack-packstack, a set of scripts that installs all the pieces of OpenStack, and generate the default settings for packstack:
yum install -y openstack-packstack
packstack --gen-answer-file=~/answers.cfg
- Export these environment variables
export OS_USERNAME=admin
export OS_PASSWORD=YOURPASSWORD
- Edit answers.cfg based on your requirements and make sure the following settings are in place:
CONFIG_NTP_SERVERS=ntp1.unimelb.edu.au,ntp2.unimelb.edu.au # these are our private NTP servers, use yours
CONFIG_CONTROLLER_HOST=192.168.0.1
CONFIG_NETWORK_HOSTS=192.168.0.1
CONFIG_AMQP_HOST=192.168.0.1 # change the IP address of the controller to 192.168.0.1
CONFIG_COMPUTE_HOSTS=192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4,192.168.0.5,192.168.0.6,192.168.0.7 # add the IP addresses of all compute nodes
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat,vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno2 # pay attention here!
CONFIG_CINDER_VOLUMES_SIZE=100G
CONFIG_KEYSTONE_ADMIN_PW=YOURPASSWORD
CONFIG_PROVISION_DEMO=n
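Rather than editing answers.cfg by hand, the settings can be applied with sed (a sketch; set_opt is a hypothetical helper, and the demo operates on a throwaway file — point cfg at ~/answers.cfg for real use):

```shell
# Demo on a temporary file so nothing real is modified.
cfg=$(mktemp)
printf 'CONFIG_CONTROLLER_HOST=10.0.0.1\nCONFIG_PROVISION_DEMO=y\n' > "$cfg"

# set_opt KEY VALUE: replace an existing KEY=... line, or append it.
set_opt() {
    if grep -q "^$1=" "$cfg"; then
        sed -i "s|^$1=.*|$1=$2|" "$cfg"
    else
        echo "$1=$2" >> "$cfg"
    fi
}

set_opt CONFIG_CONTROLLER_HOST 192.168.0.1
set_opt CONFIG_PROVISION_DEMO n
set_opt CONFIG_KEYSTONE_ADMIN_PW YOURPASSWORD
cat "$cfg"
```

The same helper works for every CONFIG_* key listed above, which keeps repeated deployments reproducible.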
- Install OpenStack with packstack based on your configuration:
packstack --answer-file=~/answers.cfg
- Source keystonerc_admin before using the command line for OpenStack commands. You can find the admin user name and password for accessing the dashboard in this file.
source keystonerc_admin
- If you have a domain name for your public IP address and want to access your dashboard via that domain name, follow these instructions:
vi /etc/httpd/conf.d/15-horizon_vhost.conf
ServerAlias YOURDOMAINNAME # for example iaas.clouds.com
- Automate OpenStack environments sourcing on startup
echo "source /root/keystonerc_admin" >> ~/.bashrc
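Since that echo appends a new line on every run, a guarded version only adds the line when it is missing (a sketch demonstrated on a temporary file; substitute ~/.bashrc in practice):

```shell
# Append the source line to a shell rc file only if it is absent,
# so repeated runs do not stack duplicates. Demo uses a temp file.
rc=$(mktemp)
line='source /root/keystonerc_admin'

grep -qxF "$line" "$rc" || echo "$line" >> "$rc"
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"   # second run is a no-op
cat "$rc"
```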
- Now open the OpenStack dashboard in your browser at http://YOURDOMAIN/dashboard, for example http://iaas.clouds.com/dashboard/
Manual External Bridge Setup
You can skip this step if you have already set CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno2. If the external bridge is not properly created and you have network issues, you can create it manually as explained below; in that case, make sure you set CONFIG_NEUTRON_OVS_BRIDGE_IFACES= (leave it empty). First, you should create a bridge.
vi /etc/sysconfig/network-scripts/ifcfg-br-ex
NAME=br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=8.8.8.8
DNS2=128.250.201.5
ONBOOT=yes
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
Note that you are allocating the IP address of the controller to the bridge now.
Now, you add the controller's interface as a port of this bridge.
vi /etc/sysconfig/network-scripts/ifcfg-eno2
TYPE=OVSPort
BOOTPROTO=none
NAME=eno2
IPV6INIT=no
DEVICE=eno2
ONBOOT=yes
NM_CONTROLLED=no
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
Restart your network to verify that everything is working fine:
service network restart
Virtual Network in OpenStack
For the network setup in OpenStack follow the steps in this clip.
Note that you need to create some images before performing these steps.