Installing OpenStack on a small cluster using CentOS and RDO

The Cluster

Below is our cluster setup. Note that we are constrained by the devices we have and the service provider we are using, so your configuration might be different. A different network topology may require changes to the following instructions, so make sure you understand what each step does before running it.

[Figure: cluster setup diagram]

CentOS 7 Installation

  1. Install CentOS 7 with the following configuration on the head node of your cluster:
    You need the minimal version of CentOS, which you can download as an .iso file here (https://www.centos.org/download/).
           hostname: controller
           password: YOURPASSWORD
    Choose “manually configure partitions”, delete all the existing partitions, and then click “automatically generate partitions”. Make the root partition as large as possible. If you are not going to use the /home partition at all, you can remove it and allocate its space to /.
  2. Do the same for all other nodes in the cluster and set the hostnames as follows:
          hostname: compute2, compute3, compute4, compute5, compute6, compute7
          password: YOURPASSWORD
    For partitioning, choose “Use All Space” and check “Review and modify partitioning layout”; you can then remove the lv_home (/home) partition and add all the free space to lv_root (/).

Network Configuration

In our scenario, the controller node has two interfaces: interface 1 (eno1) is connected to the public network, and interface 2 (eno2) is connected to a local switch that connects all the nodes in the cluster.

1. Controller (compute1 and gateway):

  1. Log in with the root username and password
  2. Stop the first network interface (eno1) from being managed by the NetworkManager daemon
    vi /etc/sysconfig/network-scripts/ifcfg-eno1
    
    NM_CONTROLLED=no

    save and exit.

  3. Set a static private IP address for the controller (192.168.0.1)
    vi /etc/sysconfig/network-scripts/ifcfg-eno2
    
    BOOTPROTO=static
    IPADDR=192.168.0.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    NM_CONTROLLED=no

    save and exit.

  4.  Restart the network service.
    systemctl restart network
  5. Check that your Internet connection is working:
    ping www.google.com
  6. Update your repository and install openssh-server, openssh-clients, nano, wget, and net-tools:
    yum -y update
    yum install -y openssh-server openssh-clients nano wget net-tools
  7. Change the state of SELINUX to permissive:
    nano /etc/selinux/config
    SELINUX=permissive
  8. Add hostname entries for all nodes to /etc/hosts:
    nano /etc/hosts
    192.168.0.1 controller compute1 gateway
    192.168.0.2 compute2
    192.168.0.3 compute3
    ...
  9. Disable NetworkManager and the firewall to avoid conflicts with OpenStack:
    systemctl stop firewalld
    systemctl disable firewalld
    systemctl stop NetworkManager
    systemctl disable NetworkManager
    systemctl enable network
    systemctl restart network
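The /etc/hosts entries in step 8 can also be generated with a short loop instead of being typed by hand. This is a sketch assuming the 192.168.0.x numbering used here; adjust the range if your cluster has a different node count:

```shell
# Print /etc/hosts entries for the whole cluster.
# Review the output, then append it with:  <this-script> >> /etc/hosts
echo "192.168.0.1 controller compute1 gateway"
for i in 2 3 4 5 6 7; do
  echo "192.168.0.$i compute$i"
done
```

Redirecting straight into /etc/hosts on every node keeps the mappings consistent across the cluster.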

 

2. NAT Configuration on the Controller Node:

To provide Internet access to the other machines in the cluster, you need to enable NAT on the controller. If all machines in the cluster get public IPs by default, you can skip this step.

  1. Enable NAT forwarding with iptables to give Internet access to the compute hosts by executing the following commands:
    yum install -y iptables-services
    
    chkconfig iptables on
    
    iptables -F
    iptables -t nat -F
    iptables -t mangle -F
    iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
    iptables -A FORWARD -i eno2 -j ACCEPT
    iptables -A FORWARD -o eno2 -j ACCEPT
    service iptables save
    service iptables restart
  2. Check that iptables has been properly configured:
    iptables -S

    The output should include these:
    -P INPUT ACCEPT
    -P FORWARD ACCEPT
    -P OUTPUT ACCEPT
    -A FORWARD -i eno2 -j ACCEPT
    -A FORWARD -o eno2 -j ACCEPT

  3. To make sure you do not lose the iptables configuration, do the following:
    vi /etc/sysconfig/iptables-config
    
    IPTABLES_SAVE_ON_RESTART="yes"
    IPTABLES_SAVE_ON_STOP="yes"
    service iptables restart
  4. Enable IP forwarding:
    nano /etc/sysctl.conf
    
    net.ipv4.ip_forward=1
    
    sysctl -p #apply the change without waiting for the reboot
  5. Reboot the controller machine and make sure the changes are persistent.
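After the reboot, a quick sanity check on the controller confirms that the NAT settings survived. The expected values in the comments assume the configuration above was applied as written:

```shell
# IP forwarding should still be enabled after the reboot
sysctl net.ipv4.ip_forward        # expected: net.ipv4.ip_forward = 1

# The MASQUERADE rule on the public interface should still be present
iptables -t nat -S POSTROUTING | grep MASQUERADE
# expected: -A POSTROUTING -o eno1 -j MASQUERADE
```

If either check fails, revisit the sysctl.conf and iptables-config settings above before continuing.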

3. Compute Nodes

  1. Log in with the root username and password
  2. Set a static private IP address for each node
    vi /etc/sysconfig/network-scripts/ifcfg-eno2
    
    NM_CONTROLLED=no
    BOOTPROTO=static
    IPADDR=192.168.0.2 # use 192.168.0.3 for compute3, and so on
    NETMASK=255.255.255.0
    GATEWAY=192.168.0.1
    ONBOOT=yes
  3. Define some nameservers for your compute nodes
    vi /etc/resolv.conf
    
    nameserver 128.250.66.5 #this is our first private DNS server
    nameserver 128.250.201.5 #this is our second private DNS server
    nameserver 8.8.8.8
  4. Restart your network service.
    service network restart
  5. Update your repository and install openssh-server, openssh-clients, nano, wget, and net-tools:
    yum -y update
    yum install -y openssh-server openssh-clients nano wget net-tools
  6. Change the state of SELINUX to permissive:
    nano /etc/selinux/config
    SELINUX=permissive
  7. Add hostname entries for all nodes to /etc/hosts:
    nano /etc/hosts
    192.168.0.1 controller compute1 gateway
    192.168.0.2 compute2
    192.168.0.3 compute3
    ...
  8. Disable NetworkManager and the firewall to avoid conflicts with the OpenStack Networking service:
    systemctl stop firewalld
    systemctl disable firewalld
    systemctl stop NetworkManager
    systemctl disable NetworkManager
    systemctl enable network
    systemctl restart network
  9. Reboot all machines to make sure the changes are persistent.
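Once every node is back up, you can verify connectivity from the controller with a small loop. This sketch assumes the /etc/hosts entries above are in place:

```shell
# Ping each compute node once; report which ones are unreachable
for host in compute2 compute3 compute4 compute5 compute6 compute7; do
  if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
    echo "$host: reachable"
  else
    echo "$host: UNREACHABLE - check its network config"
  fi
done
```

Any unreachable node usually means a typo in its ifcfg-eno2 file or a missed network restart.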

OpenStack Installation

Make sure all nodes (controller, compute2, compute3, …) are already configured and ready. Refer to https://www.rdoproject.org/install/quickstart/ if you are not sure about the previous steps for your cluster setup.

  1. Make sure your /etc/environment is populated:
    vi /etc/environment
    
    LANG=en_US.utf-8
    LC_ALL=en_US.utf-8
  2. Install the RDO release:
    yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
    yum update -y
  3. Install openstack-packstack, a set of scripts that installs all the pieces of OpenStack, and generate the default settings for packstack:
    yum install -y openstack-packstack
    packstack --gen-answer-file=~/answers.cfg
  4. Export these environment variables:
    export OS_USERNAME=admin
    export OS_PASSWORD=YOURPASSWORD
  5. Edit answers.cfg based on your requirements and make sure the following settings are in place:
    CONFIG_NTP_SERVERS=ntp1.unimelb.edu.au,ntp2.unimelb.edu.au #these are our ntp servers, use yours
    CONFIG_CONTROLLER_HOST=192.168.0.1
    CONFIG_NETWORK_HOSTS=192.168.0.1
    CONFIG_AMQP_HOST=192.168.0.1
    # change the IP address of the controller to 192.168.0.1
    CONFIG_COMPUTE_HOSTS=192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4,192.168.0.5,192.168.0.6,192.168.0.7 # add the IP addresses of all compute nodes
    CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat,vlan
    CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
    CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
    CONFIG_NEUTRON_L2_AGENT=openvswitch
    CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
    CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno2 # Pay attention here!
    CONFIG_CINDER_VOLUMES_SIZE=100G
    CONFIG_KEYSTONE_ADMIN_PW=YOURPASSWORD
    CONFIG_PROVISION_DEMO=n
  6. Run packstack with your answer file:
    packstack --answer-file=~/answers.cfg
  7. Source keystonerc_admin before using the command line for OpenStack commands. You can find the admin username and password for accessing the dashboard in this file.
    source keystonerc_admin
  8. If you have a domain name for your public IP address and you want to access your dashboard by domain name, follow this instruction:
    vi /etc/httpd/conf.d/15-horizon_vhost.conf
    
    ServerAlias YOURDOMAINNAME
    #for example iaas.clouds.com
  9. Automate sourcing the OpenStack environment on startup:
    echo "source /root/keystonerc_admin" >> ~/.bashrc
  10. Now open the OpenStack dashboard in your browser at http://YOURDOMAIN/dashboard, for example http://iaas.clouds.com/dashboard/.
  11. You can skip this step if you have already set CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno2. If the external bridge was not properly created and you have network issues, you can create it manually as explained below. In that case, make sure you set CONFIG_NEUTRON_OVS_BRIDGE_IFACES to empty first. Then create the bridge:
    vi /etc/sysconfig/network-scripts/ifcfg-br-ex
    
    NAME=br-ex
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.168.0.1
    NETMASK=255.255.255.0
    GATEWAY=192.168.0.1
    DNS1=8.8.8.8
    DNS2=128.250.201.5
    ONBOOT=yes
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=no

    Note that you are allocating the IP address of the controller to the bridge now.
    Next, add the controller's interface (eno2) as a port on this bridge:

    vi /etc/sysconfig/network-scripts/ifcfg-eno2
    
    TYPE=OVSPort
    BOOTPROTO=none
    NAME=eno2
    IPV6INIT=no
    DEVICE=eno2
    ONBOOT=yes
    NM_CONTROLLED=no
    DEVICETYPE=ovs
    OVS_BRIDGE=br-ex

    Restart your network and check that everything is working:

    service network restart
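If everything worked, Open vSwitch should now show eno2 attached to br-ex. A few read-only commands (using the openvswitch tools that packstack installs) can confirm this:

```shell
# List the ports attached to the external bridge; eno2 should appear
ovs-vsctl list-ports br-ex

# Show the full OVS topology, including br-ex and its ports
ovs-vsctl show

# The bridge, not eno2, should now carry the 192.168.0.1 address
ip addr show br-ex
```

If eno2 is missing from br-ex or the address is still on eno2, recheck the two ifcfg files above before restarting the network again.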

 

Virtual Network in OpenStack

For the network setup in OpenStack, follow the steps in this clip.
Note that you need to create some images before performing these steps.
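As a rough text alternative to the clip, the external network that maps to the extnet:br-ex bridge above can be created from the command line along these lines. This is a sketch: the network and subnet names are illustrative, and the allocation pool must not overlap your nodes' static addresses:

```shell
# Create a flat external network on the 'extnet' physical network
# (matches CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex above)
openstack network create external_network --external \
  --provider-network-type flat --provider-physical-network extnet

# Add a subnet on the cluster's 192.168.0.0/24 range, handing out
# floating IPs from a pool outside the nodes' static addresses
openstack subnet create external_subnet --network external_network \
  --subnet-range 192.168.0.0/24 --gateway 192.168.0.1 --no-dhcp \
  --allocation-pool start=192.168.0.100,end=192.168.0.200
```

Remember to run `source keystonerc_admin` first so the openstack client can authenticate.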
