OpenNebula KVM on Centos 8

With a current work project, we're after servers with high CPU clock speeds to enable us to do a large number of algorithmic operations as quickly as possible.

There are not many cloud providers out there (Vultr being a notable exception) that offer high CPU clock speeds at reasonable prices.

Instead, we decided to rent our own servers and use OpenNebula (with KVM behind the scenes) to spin up Virtual Machines.

As with anything new, there's quite a steep learning curve and plenty of gotchas along the way, so here's my guide to building up a cluster of VMs that are performant and have external IP addresses.

This guide is more for me to look back on again when I build more of these, but I hope it's useful to some.

1

Add the OpenNebula repo to your host:

cat << "EOT" > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=https://downloads.opennebula.org/repo/5.10/CentOS/8/$basearch
enabled=1
gpgkey=https://downloads.opennebula.org/repo/repo.key
gpgcheck=1
repo_gpgcheck=1
EOT
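One subtlety worth calling out: the quotes around "EOT" stop the shell from expanding $basearch, so yum receives the variable verbatim and substitutes the architecture itself at run time. A quick sketch demonstrating this against a temporary file rather than /etc:

```shell
# With a quoted heredoc delimiter, $basearch is written literally, not expanded.
tmp=$(mktemp)
cat << "EOT" > "$tmp"
baseurl=https://downloads.opennebula.org/repo/5.10/CentOS/8/$basearch
EOT
grep -c '\$basearch' "$tmp"   # prints 1: the variable survived unexpanded
rm -f "$tmp"
```

Drop the quotes around EOT and the shell would expand $basearch (almost certainly to an empty string), leaving you with a broken baseurl.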

2

Add the EPEL Release repo too, then install all the necessary OpenNebula packages:

yum install epel-release
yum install opennebula-server opennebula-sunstone opennebula-ruby opennebula-gate opennebula-flow

3

Install MySQL Server (which stores all the OpenNebula config), and secure it. On CentOS 8 the service is managed with systemctl:

yum install mysql-server
systemctl enable --now mysqld
mysql_secure_installation

And add a user (replace PASSWORD with one of your own):

CREATE USER 'oneadmin'@'%' IDENTIFIED BY 'PASSWORD';
GRANT ALL PRIVILEGES ON *.* TO 'oneadmin'@'%';
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;

4

Configure OpenNebula to use this MySQL connection:

vi /etc/one/oned.conf

DB = [ backend = "mysql",
       server  = "localhost",
       port    = 0,
       user    = "oneadmin",
       passwd  = "PASSWORD",
       db_name = "opennebula" ]

5

We need to edit some default options to make sure that KVM will be performant for both disk IO and network IO:

/etc/one/oned.conf

DEFAULT_DEVICE_PREFIX       = "hd"

/etc/one/vmm_exec/vmm_exec_kvm.conf

FEATURES = [ PAE = "no", ACPI = "yes", APIC = "no", HYPERV = "no", GUEST_AGENT = "yes", VIRTIO_SCSI_QUEUES = "0" ]

/etc/one/vmm_exec/vmm_exec_kvm.conf

DISK     = [ driver = "raw" , cache = "none" , io = "native" , discard = "unmap" ]
NIC     = [ model="virtio" ]

6

Let's now start up OpenNebula (the credentials to log in are stored @ /var/lib/one/.one/one_auth)

systemctl start opennebula
systemctl start opennebula-sunstone
systemctl enable opennebula
systemctl enable opennebula-sunstone

Let's check that everything's configured OK

oneuser show

You should see some user information returned:

USER 0 INFORMATION
ID              : 0
NAME            : oneadmin
GROUP           : oneadmin
PASSWORD        : fd59628069a3e8f5af7720e0ea9358ceb69be192070c27367c032f4e2d4bf1f3
AUTH_DRIVER     : core
ENABLED         : Yes

7

The front-end is now built; next, let's build the KVM node:

yum install opennebula-node-kvm
yum install centos-release-qemu-ev
systemctl restart libvirtd

8

Now add servers that you wish to use for host nodes to "known_hosts". Here I'm just using a single server:

ssh-keyscan localhost SERVERNAME >> /var/lib/one/.ssh/known_hosts

9

We need to set up Python so OpenNebula can use it to run VNC.
We need to do two things - install Python, and add a symlink to it:

yum install python36
ln -s /usr/bin/python3.6 /usr/bin/python

10

(trickiest part)
By default, OpenNebula uses a virtual bridge (virbr0) and then uses NAT so that your VMs have internet access.
The problem with this is that your VMs are shielded from the Internet, and you cannot connect directly.

If you need direct connections, you will need to amend your Ethernet configuration, and create a network bridge that OpenNebula can use.
Sometimes it takes a bit of trial and error to get everything perfect here - and if it's misconfigured you're likely to lose network connectivity, so make sure you have a Remote Management interface you can use to recover things!

Your network configuration scripts are held @ /etc/sysconfig/network-scripts

In our example, we have our Ethernet configured as eno1. You need to amend this config to say that it's going to connect to a bridge, and add a new bridge config to take on this config's network details.

Instead of trying to explain it all, I'll just include a before and after view so you can see what we changed. And please be careful, the contents here are case-sensitive!

Before:

cat ifcfg-eno1

DEVICE=eno1
BOOTPROTO=none
ONBOOT=yes
PREFIX=26
IPADDR=23.106.XX.YYY
GATEWAY=23.106.XX.YYY
DOMAIN=dedi.XXYYY.net
DNS2=81.17.XX.YYY
DNS3=8.8.8.8
DEFROUTE=yes

After:

cat ifcfg-eno1

DEVICE=eno1
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
BRIDGE=br0

cat ifcfg-br0

DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
PREFIX=26
IPADDR=23.106.XX.YYY
GATEWAY=23.106.XX.YYY
DOMAIN=dedi.XXYYY.net
DNS2=81.17.XX.YYY
DNS3=8.8.8.8
DEFROUTE=yes

11

With everything working, it's now time to connect to OpenNebula (http://SERVER:9869) and start to create some VMs.
You may need to open up your firewall to enable access to this port and VNC if you wish to use it.

First you need to configure your NIC. Go to Network -> Virtual Networks and create a new Network.
You'll need "BRIDGE" set to "br0" and the DNS and Gateway set as appropriate.
In the Addresses section you'll need to add in the IP range you wish to allocate.
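If you prefer the command line to Sunstone, the same network can be defined in a template file and registered as oneadmin with `onevnet create public.net`. A sketch, with a hypothetical file name and the masked address placeholders reused from the ifcfg files above:

```
# public.net - hypothetical Virtual Network template
NAME    = "public"
VN_MAD  = "bridge"
BRIDGE  = "br0"
DNS     = "8.8.8.8"
GATEWAY = "23.106.XX.YYY"

# Address Range: the pool of public IPs OpenNebula may hand out to VMs
AR = [ TYPE = "IP4", IP = "23.106.XX.YYY", SIZE = "8" ]
```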

Next you need to add some VM templates. Go to Storage -> MarketPlaces and download any templates you want.
Lots of the templates have SSH keys added in for authentication. If you want to set a root password go to Templates -> VMs and in the Template section add in a context of "PASSWORD" with a value as you wish.
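Inside the VM template this ends up as a CONTEXT section. A minimal sketch (the password value is a placeholder; the SSH_PUBLIC_KEY line keeps key-based login working alongside it):

```
CONTEXT = [
  NETWORK        = "YES",
  PASSWORD       = "ChangeMe",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]" ]
```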

Finally you need to create your VM. Go to Instances -> VMs and add a new VM. Pick the template you wish to use and configure as necessary. Make sure to pick the network interface.

If you're lucky, everything should build and run without issues.

I like to remotely access my VMs, so in the VM I had to amend /etc/ssh/sshd_config to allow remote root login:

PasswordAuthentication yes
PermitRootLogin yes

Then restart sshd to pick up the change:

systemctl restart sshd

You should now have external Internet connectivity and you should be happy. There are logs available @ /var/log/one/ to help diagnose any issues.

12

Finally, let's test performance. It's useful to test between the host server and a guest.
Here we'll be testing CPU, Memory, Disk IO and Internet connectivity.

yum install sysbench

sysbench cpu run
sysbench memory run
sysbench fileio --file-num=10 --file-total-size=1G --file-extra-flags=direct prepare
sysbench fileio --file-num=10 --file-total-size=1G --file-extra-flags=direct --file-test-mode=rndrw run
sysbench fileio --file-num=10 --file-total-size=1G cleanup
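When comparing host against guest it helps to boil each run down to a single number. A small sketch, assuming sysbench 1.0's usual output format (the sample line below is illustrative, not captured from these servers):

```shell
# Extract the "events per second" figure from sysbench cpu output.
# "sample" stands in for real output piped from: sysbench cpu run
sample="    events per second:  1234.56"
printf '%s\n' "$sample" | awk -F': *' '/events per second/ {print $2}'
```

Run on both host and guest, the two figures give you a quick sense of how much the virtualisation layer is costing you.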

wget -O speedtest-cli https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py
chmod +x speedtest-cli 
./speedtest-cli

Adding nodes

If you're looking to add nodes to your cluster, it's particularly easy:

cat << "EOT" > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=https://downloads.opennebula.org/repo/5.10/CentOS/8/$basearch
enabled=1
gpgkey=https://downloads.opennebula.org/repo/repo.key
gpgcheck=1
repo_gpgcheck=1
EOT

yum install epel-release
yum install opennebula-node-kvm
systemctl restart libvirtd

ssh-keyscan HOSTNAME >> /var/lib/one/.ssh/known_hosts
scp -rp /var/lib/one/.ssh HOSTNAME:/var/lib/one/

Then fix up ownership on the new node:

ssh HOSTNAME chown -R oneadmin:oneadmin /var/lib/one/.ssh
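Copying the keys alone doesn't tell the front-end about the node; it still needs registering (either through Sunstone's Infrastructure -> Hosts page, or via the CLI as oneadmin). A sketch that just prints the command to run, so the HOSTNAME placeholder stays visible; -i and -v select the KVM information and virtualisation drivers:

```shell
# HOSTNAME is the same placeholder used in the ssh-keyscan/scp steps above.
host="HOSTNAME"
echo "onehost create $host -i kvm -v kvm"
```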
