OpenStack Havana Installation on CentOS 6.4/RHEL 6.4 – 2 Nodes

In this guide, we are going to build an OpenStack cloud based on two nodes. One node (service-stack) will host most of the OpenStack services, and the other (cloud-stack) will act as the Nova compute node and run our test virtual machines.

Node names:

cloud-stack.linxsol.com  eth0 192.168.170.60/24 eth1 dhcp

service-stack.linxsol.com eth0 192.168.170.61/24 eth1 172.16.0.21/24


The use of NetworkManager is not encouraged by the official OpenStack documentation, so disable it and use the standard network service on both nodes:

service NetworkManager stop

service network start

chkconfig NetworkManager off

chkconfig network on

Let's configure /etc/hosts for name resolution (on both hosts). Your final /etc/hosts should look like the one below:

vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.170.60 cloud-stack cloud-stack.linxsol.com

192.168.170.61 service-stack  service-stack.linxsol.com
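To quickly confirm that name resolution works, you can ping each node by name from the other one (a simple optional sanity check):

ping -c 2 service-stack.linxsol.com

ping -c 2 cloud-stack.linxsol.com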

###########################################################

Configuring MySQL Database

OpenStack requires a database to store all of its information, metadata and authentication details. Various database servers are supported, but we will be using MySQL. We will install it on service-stack.linxsol.com:

yum install mysql mysql-server MySQL-python

Configure MySQL to bind to the internal IP address of service-stack (192.168.170.61):

vi /etc/my.cnf

bind-address = 192.168.170.61
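Note that the bind-address line belongs in the [mysqld] section of /etc/my.cnf. A minimal sketch of how that section might look on service-stack (the other settings shown are typical CentOS defaults and may differ on your system):

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
bind-address = 192.168.170.61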

service mysqld start

chkconfig mysqld on

Now run the following command on service-stack to set the root password, and answer 'yes' to all of the questions when prompted:

mysql_secure_installation

On Other Nodes:

We will install the MySQL client and the MySQL Python library on all additional nodes. On cloud-stack.linxsol.com run the following command:

yum install mysql MySQL-python

###########################################

Configuring Network Time Protocol (NTP)

Install ntp on service-stack.linxsol.com:

yum install ntp

service ntpd start

chkconfig ntpd on

Ideally, all the additional nodes should be synchronized with the NTP server on service-stack. You can configure a daily cron job by creating an executable file as follows and adding the following lines:

vi /etc/cron.daily/ntpdate

ntpdate service-stack.linxsol.com

hwclock -w

chmod a+x /etc/cron.daily/ntpdate
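To check that a node can reach the NTP server without actually changing the clock, you can first run a query-only request (assuming the ntpdate package is installed on that node):

ntpdate -q service-stack.linxsol.com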

###################################################

Installing Additional Repositories for OpenStack

This step should be performed on both nodes. We will install the RDO repository and EPEL in order to obtain the OpenStack packages.

yum install http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm

yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Now we will install the OpenStack utility package, which provides utility programs that make installation and configuration much easier:

yum install openstack-utils

######################################################

Now install Qpid on service-stack. Qpid is the message queue (AMQP broker) that the OpenStack services will use to communicate with each other:

yum install qpid-cpp-server memcached

Edit /etc/qpidd.conf to disable Qpid authentication:

vi /etc/qpidd.conf

auth = no

service qpidd start

chkconfig qpidd on
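As an optional check, confirm that Qpid is up and listening on its default AMQP port (5672):

netstat -lntp | grep 5672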

Install the Identity Service (Keystone) and its dependencies on service-stack:

yum install openstack-keystone python-keystoneclient

Configure the Identity Service to use the MySQL database to store its information:

openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:KeyStone_Password@service-stack.linxsol.com/keystone

You can replace KeyStone_Password with a password of your choice. Now we will create the keystone database user, the database and its tables:

openstack-db --init --service keystone --password KeyStone_Password

After this, we will use openssl to generate an authorization token and save it in a configuration file. We need to define this shared secret (authorization token) for the Identity Service and the other OpenStack services. Run the following commands in a terminal:

ADMIN_TOKEN=$(openssl rand -hex 10) #openssl will generate a random token and store it in a variable ADMIN_TOKEN

openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN

echo $ADMIN_TOKEN

76d80c8cea1a4c20955f #We need this later

We will generate PKI signing keys and certificates for Keystone:

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log

Let's start the Identity Service:

service openstack-keystone start

chkconfig openstack-keystone on

We will specify two environment variables as follows:

export OS_SERVICE_TOKEN=76d80c8cea1a4c20955f

export OS_SERVICE_ENDPOINT=http://service-stack.linxsol.com:35357/v2.0

Now create two tenants: one for the OpenStack services to use and one for administrative purposes:

keystone tenant-create --name=admin --description="Admin Tenant"

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | 63b01741bf5f475a89ba60be4daaeb5b |
|     name    |              admin               |
+-------------+----------------------------------+

keystone tenant-create --name=service --description="Service Tenant"

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | c3ba7c7f8d3643bc9d390d0654c15af3 |
|     name    |             service              |
+-------------+----------------------------------+

Create an admin user, create an admin role, and add the role to the user on the admin tenant:

keystone user-create --name=admin --pass=MY_PASSWORD --email=admin@linxsol.com

+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |        admin@linxsol.com         |
| enabled  |               True               |
|    id    | b221431dde3049ea82eafbb63cf0a027 |
|   name   |              admin               |
+----------+----------------------------------+

keystone role-create --name=admin

keystone user-role-add --user=admin --tenant=admin --role=admin

All roles we have created above should be mapped according to policy.json file. The /etc/[SERVICE_CODENAME]/policy.json file controls what users are allowed to do for a given service. For example, /etc/nova/policy.json specifies the access policy for the Compute service, /etc/glance/policy.json specifies the access policy for the Image service, and /etc/keystone/policy.json specifies the access policy for the Identity service.
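For illustration only, a hypothetical excerpt in the style of /etc/keystone/policy.json (your packaged file will be longer and may differ) shows how API actions are mapped to role-based rules:

{
    "admin_required": "role:admin or is_admin:1",
    "identity:create_user": "rule:admin_required",
    "identity:list_users": "rule:admin_required"
}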

Now we will register the Identity Service itself using the keystone service-create command. Whenever we need to register a service in the OpenStack service catalog we use the service-create command.

keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"

We will use keystone endpoint-create to specify an API endpoint for the service we have just created, using the id returned by the previous command.

keystone endpoint-create --service-id=35b87d89cf4b48dcbb81b4c4ff72302f --publicurl=http://service-stack.linxsol.com:5000/v2.0 --internalurl=http://service-stack.linxsol.com:5000/v2.0 --adminurl=http://service-stack.linxsol.com:35357/v2.0

Here we have defined three URLs for the public API, the internal API and the admin API respectively. At this stage we can verify that the Identity Service is working. Before we proceed we should unset OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT:

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

Now we can check whether we can successfully authenticate and get a token from service-stack using a username and password, and we will also verify it against a tenant (second command), as follows:

keystone --os-username=admin --os-password=MY_PASSWORD --os-auth-url=http://service-stack.linxsol.com:35357/v2.0 token-get

keystone --os-username=admin --os-password=MY_PASSWORD --os-tenant-name=admin --os-auth-url=http://service-stack.linxsol.com:35357/v2.0 token-get

To avoid typing the credentials again and again, let's create a small configuration file and source it into the shell:

vi keystonerc

export OS_USERNAME=admin

export OS_PASSWORD=MY_PASSWORD

export OS_TENANT_NAME=admin

export OS_AUTH_URL=http://service-stack.linxsol.com:35357/v2.0

Save the above file and source it to shell:

source keystonerc

keystone token-get

If the last command runs successfully, you will get a token and the ID of the specified tenant. Let's verify that our admin account has the authorization to perform administrative tasks:

keystone user-list

+----------------------------------+-------+---------+-----------------------+
|                id                |  name | enabled |         email         |
+----------------------------------+-------+---------+-----------------------+
| b221431dde3049ea82eafbb63cf0a027 | admin |   True  |   admin@linxsol.com   |
+----------------------------------+-------+---------+-----------------------+

######################################################################

Installing and Configuring the Image Service (GLANCE)

The Image Service is responsible for storing and registering all the virtual disk images in a database. It is used for adding or manipulating images, including snapshots taken from running virtual machines. Type the following on service-stack.linxsol.com:

yum install openstack-glance

After installing, we need to edit two Glance configuration files to specify the location of the database. Replace MY_PASSWORD with a password of your choice for Glance:

openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_connection mysql://glance:MY_PASSWORD@service-stack.linxsol.com/glance

openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_connection mysql://glance:MY_PASSWORD@service-stack.linxsol.com/glance

To create the database and all the required tables for the Glance Image Service, run the following:

openstack-db --init --service glance --password MY_PASSWORD

Create a user called glance that the Image Service can use to authenticate against the Identity Service, and assign the admin role to the glance user on the service tenant:

keystone user-create --name=glance --pass=MY_PASSWORD --email=glance@linxsol.com

keystone user-role-add --user=glance --tenant=service --role=admin

To add credentials to the Image Service’s configuration files:

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host service-stack.linxsol.com

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password MY_PASSWORD

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host service-stack.linxsol.com

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password MY_PASSWORD

Copy glance-api-paste.ini and glance-registry-paste.ini into /etc/glance and set the following values in the [filter:authtoken] section of each file:

cp /usr/share/glance/glance-api-dist-paste.ini /etc/glance/glance-api-paste.ini

cp /usr/share/glance/glance-registry-dist-paste.ini /etc/glance/glance-registry-paste.ini

vi /etc/glance/glance-api-paste.ini

auth_host=service-stack.linxsol.com

admin_user=glance

admin_tenant_name=service

admin_password=MY_PASSWORD

vi /etc/glance/glance-registry-paste.ini

auth_host=service-stack.linxsol.com

admin_user=glance

admin_tenant_name=service

admin_password=MY_PASSWORD
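Depending on the packaged defaults, the official Havana guide also sets the paste_deploy flavor to keystone so that Glance really uses the authtoken pipeline; if your glance-api.conf and glance-registry.conf contain a [paste_deploy] section, the equivalent commands would be:

openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone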

Use keystone to register the Image Service with the Identity Service so that other services can locate it:

keystone service-create --name=glance --type=image --description="Glance Image Service"

Copy the service id returned by the above command and use it in the endpoint:

keystone endpoint-create --service-id=the_service_id_copied --publicurl=http://service-stack.linxsol.com:9292 --internalurl=http://service-stack.linxsol.com:9292 --adminurl=http://service-stack.linxsol.com:9292

service openstack-glance-api start

service openstack-glance-registry start

chkconfig openstack-glance-api on

chkconfig openstack-glance-registry on

Verify the Image Service

First we download a CirrOS virtual machine image and then upload it to the Image Service:

wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

glance image-create --name=CirrOS --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img

glance image-list

The last command shows the images registered with the Image Service.
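As an optional check before (or after) the upload, you can inspect the downloaded file to confirm it really is a qcow2 image; this assumes the qemu-img tool is available on the node:

qemu-img info cirros-0.3.1-x86_64-disk.img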

####################################################################

Configuring and Installing Nova Controller Services

On service-stack.linxsol.com install the openstack-nova meta-package. This package pulls in the various OpenStack Compute packages:

yum install openstack-nova python-novaclient

We will prepare MySQL so the Compute Service can store its information in the database:

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:MY_PASSWORD@service-stack.linxsol.com/nova

Create the database user, the database and the tables for the Compute Service:

openstack-db --init --service nova --password MY_PASSWORD

We also need to set vncserver_listen, vncserver_proxyclient_address and my_ip to the internal IP address of the service-stack node:

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.170.61

openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.170.61

openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.170.61

Note:

In case VNC doesn't work for you, add the following lines to your /etc/nova/nova.conf:

novncproxy_base_url=http://192.168.170.61:6080/vnc_auto.html

novncproxy_host=0.0.0.0

Also add the novncproxy_base_url line into /etc/nova/nova.conf on cloud-stack.linxsol.com.

keystone user-create --name=nova --pass=MY_PASSWORD --email=nova@linxsol.com

keystone user-role-add --user=nova --tenant=service --role=admin

The above two commands create a nova user that the Compute Service uses to authenticate against the Identity Service and assign it the admin role on the service tenant. We will modify the /etc/nova/nova.conf and /etc/nova/api-paste.ini files so the Compute Service uses these credentials. You should also make sure that api_paste_config=/etc/nova/api-paste.ini exists in /etc/nova/nova.conf:

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf DEFAULT auth_host service-stack.linxsol.com

openstack-config --set /etc/nova/nova.conf DEFAULT admin_user nova

openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service

openstack-config --set /etc/nova/nova.conf DEFAULT admin_password MY_PASSWORD
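If the api_paste_config line mentioned above is missing from /etc/nova/nova.conf, one way to add it is with the same openstack-config tool (a sketch using the default path):

openstack-config --set /etc/nova/nova.conf DEFAULT api_paste_config /etc/nova/api-paste.ini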

Make sure the following options are set in /etc/nova/api-paste.ini :

[filter:authtoken]

paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory

auth_host=service-stack.linxsol.com

auth_uri=http://service-stack.linxsol.com:5000

admin_tenant_name=service

admin_user=nova

admin_password=MY_PASSWORD

We will now register the Compute Service with the Identity Service, specify an endpoint for it, and configure it to use the Qpid message broker:

keystone service-create --name=nova --type=compute --description="Nova Compute Service"

keystone endpoint-create --service-id=the_service_id_returned_by_above_command --publicurl=http://service-stack.linxsol.com:8774/v2/%\(tenant_id\)s --internalurl=http://service-stack.linxsol.com:8774/v2/%\(tenant_id\)s --adminurl=http://service-stack.linxsol.com:8774/v2/%\(tenant_id\)s

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid

openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname service-stack.linxsol.com

Start all the services:

service openstack-nova-api start

service openstack-nova-cert start

service openstack-nova-consoleauth start

service openstack-nova-scheduler start

service openstack-nova-conductor start

service openstack-nova-novncproxy start

chkconfig openstack-nova-api on

chkconfig openstack-nova-cert on

chkconfig openstack-nova-consoleauth on

chkconfig openstack-nova-scheduler on

chkconfig openstack-nova-conductor on

chkconfig openstack-nova-novncproxy on

To verify that everything is configured correctly, use nova image-list to get a list of available images. The output should be similar to that of glance image-list:

nova image-list

######################################################

Compute Node Configuration

For the compute node (cloud-stack.linxsol.com) we are going to use KVM in this article. We already configured the networking and hosts file for this machine at the start. eth1 will be managed by the OpenStack networking component. Make sure the same NTP cron file exists here as well:

vi /etc/cron.daily/ntpdate

ntpdate service-stack.linxsol.com

hwclock -w

chmod a+x /etc/cron.daily/ntpdate

Install MySQL client libraries required by OpenStack:

yum install mysql MySQL-python

We will now install the packages required for the OpenStack Compute installation:

yum install openstack-nova-compute
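Since this node will run KVM, it is worth checking that the CPU exposes hardware virtualization extensions. If the command below prints 0, KVM acceleration is not available and nova-compute would have to fall back to plain QEMU (for example by setting libvirt_type=qemu in nova.conf):

egrep -c '(vmx|svm)' /proc/cpuinfo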

Copy /etc/nova/nova.conf from service-stack.linxsol.com to cloud-stack.linxsol.com:

scp root@service-stack.linxsol.com:/etc/nova/nova.conf /etc/nova/

Set the configuration keys my_ip, vncserver_listen, and vncserver_proxyclient_address to the IP address of the compute node on internal network:

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.170.60

openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.170.60

openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.170.60

Specify the host running the Image Service:

openstack-config --set /etc/nova/nova.conf DEFAULT glance_host service-stack.linxsol.com

Copy /etc/nova/api-paste.ini from service-stack.linxsol.com to cloud-stack.linxsol.com:

scp root@service-stack.linxsol.com:/etc/nova/api-paste.ini /etc/nova/

service libvirtd start

service messagebus start

chkconfig libvirtd on

chkconfig messagebus on

service openstack-nova-compute start

chkconfig openstack-nova-compute on


Install the networking service (nova-network) on cloud-stack.linxsol.com:

yum install openstack-nova-network

Set the options:

openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager

openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver

openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254

openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False

openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True

openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True

openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True

openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True

openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eth1

openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br100

openstack-config --set /etc/nova/nova.conf DEFAULT public_interface eth1

Open nova.conf, search for [database], and make sure the two uncommented connection lines point to service-stack.linxsol.com:

vi /etc/nova/nova.conf

[database]

connection = mysql://nova:MY_PASSWORD@service-stack.linxsol.com/nova

#

# Options defined in nova.openstack.common.db.api

#

# The backend to use for db (string value)

#backend=sqlalchemy

# Enable the experimental use of thread pooling for all DB API

# calls (boolean value)

#use_tpool=false

#

# Options defined in nova.openstack.common.db.sqlalchemy.session

#

# The SQLAlchemy connection string used to connect to the

# database (string value)

connection=mysql://nova:MY_PASSWORD@service-stack.linxsol.com/nova

Provide a local metadata service that will be reachable from instances on this compute node. This step is only necessary on compute nodes that do not run the nova-api service.

yum install openstack-nova-api

service openstack-nova-metadata-api start

chkconfig openstack-nova-metadata-api on

service openstack-nova-network restart

chkconfig openstack-nova-network on

Finally, you have to create a network that virtual machines can use. You only need to do this once for the entire installation, not for each compute node. Run the nova network-create command anywhere your admin user credentials are loaded:

scp root@service-stack.linxsol.com:/root/keystonerc /root

source keystonerc

nova network-create vmnet --fixed-range-v4=172.16.0.0/24 --bridge-interface=br100 --multi-host=T
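To confirm that the network was created (and, once instances are scheduled here, that the br100 bridge exists on the compute node), you can run the following; the second command assumes the bridge-utils package is installed:

nova network-list

brctl show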

Launching an Image

Generate a keypair consisting of a private key and a public key to be able to launch instances on OpenStack:

ssh-keygen

cd .ssh

nova keypair-add --pub_key id_rsa.pub novakey

Check if the added key pair is saved:

nova keypair-list

+---------+-------------------------------------------------+
| Name    | Fingerprint                                     |
+---------+-------------------------------------------------+
| novakey | 7e:ee:27:7a:1e:ec:8e:b3:fb:20:11:80:ef:27:21:71 |
+---------+-------------------------------------------------+

To launch an instance using OpenStack, you must specify the ID for the flavor you want to use for the instance. A flavor is a resource allocation profile. For example, it specifies how many virtual CPUs and how much RAM your instance will get. To see a list of the available profiles, run the nova flavor-list command.

nova flavor-list

Now we will use the CirrOS image we have already added to the Image Service. To get the ID of the CirrOS image:

nova image-list

Set up the security group rules to allow SSH and ping:

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
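As a quick check that both rules were added (assuming the admin credentials are still sourced), list the rules of the default security group:

nova secgroup-list-rules default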

Now create a virtual machine using nova boot:

nova boot --flavor 1 --key_name novakey --image 667fdbe1-f2a4-4fcc-b563-d89606847d07 --security_group default cirrOS

+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | CirrOS                               |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.tiny                              |
| id                                   | fa05000a-82d1-4685-8e70-71ff45017eb8 |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | b221431dde3049ea82eafbb63cf0a027     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2013-11-05T16:38:13Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | novakey                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | cirrOS                               |
| adminPass                            | FgqMoS3A5hrA                         |
| tenant_id                            | 63b01741bf5f475a89ba60be4daaeb5b     |
| created                              | 2013-11-05T16:38:12Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+--------------------------------------+--------------------------------------+

You can see the status of your Virtual Machine using nova list:

+--------------------------------------+--------+--------+------------+-------------+------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks         |
+--------------------------------------+--------+--------+------------+-------------+------------------+
| fa05000a-82d1-4685-8e70-71ff45017eb8 | cirrOS | ACTIVE | None       | Running     | vmnet=172.16.0.2 |
+--------------------------------------+--------+--------+------------+-------------+------------------+

To see details:

nova show fa05000a-82d1-4685-8e70-71ff45017eb8

Try to SSH into the new machine (the default user in the CirrOS image is cirros):

ssh cirros@172.16.0.2

##################################################################################################################
