Building infrastructure locally on a Private Cloud

In this post we will cover a number of ways to create a private cloud. A private cloud is a set of servers owned by an individual or a business to run their own software systems. Its resources are not usually shared with other organizations, unlike a ‘public cloud’ such as AWS, Google Cloud or Azure.

A private cloud could be as small as a single server in your home, or a number of servers rented from a third-party hosting provider. The approaches we cover in this post can be applied to a PC or laptop to configure it to work as a server, or to dedicated servers in a server room at your place of work. In our examples, we will focus on working with a single server.

We will configure a PC into a working server running Ubuntu Server. This home server is an ideal place to try out and learn many of the topics we cover in this post. If you are applying any of the steps in this post to a PC in your home, don’t forget to back up any important information first, as these steps will replace whatever is currently on the PC.

In this post, we will cover the following topics:

  • Manually provisioning physical servers
  • Managing provisioning of physical servers using maas.io
  • Deploying virtual machines on your physical servers using KVM
  • Deploying Docker containers on your physical servers
  • Managing containers using Portainer.io

Manually provisioning physical servers

Provisioning a server is the process of installing and configuring an operating system so that the server is ready for production use. Creating and managing physical servers manually can be a time-consuming process and includes the following steps:

  1. The first step is to download an ISO (ISO 9660) file for the operating system you are planning to install. An ISO file is an archive of files intended to be copied to a CD or Universal Serial Bus (USB) device.
  2. The next step is to copy this ISO image to a CD or USB drive.
  3. Once a CD or USB device is ready, it is plugged into a target computer to install your new operating system.
  4. When installing the operating system, some manual input is needed to answer questions related to drive partitions, users, software packages to install and more.

In our examples, we will install Ubuntu Server 20.04 LTS. Its ISO can be downloaded from https://ubuntu.com/download/server

If you prefer to use a different Linux distribution or a Microsoft Windows operating system, the installation steps will be roughly the same.

If you want to work with an alternative to Ubuntu, this site lists a number of different Linux distributions and links to download their ISO files: https://www.linuxlookup.com/linux_iso

If you want to work with Windows, Microsoft provides a number of ISOs for different versions of its operating systems at https://www.microsoft.com/en-gb/software-download

In the sections that follow, we will work with Ubuntu Server. We will follow a number of steps to create a bootable USB device. This USB device will install Ubuntu Server and make the machine ready for our next step of creating a web server.

Creating a bootable Ubuntu Server USB

The ubuntu.com website provides up-to-date guides on creating a bootable USB device for installing Ubuntu Server. There are suitable guides for Linux, Windows and Mac, with detailed step-by-step instructions and screenshots showing how to download an ISO and copy it to a USB device that can then be used as a bootable device to install Ubuntu:

Create a bootable USB device from Ubuntu: https://ubuntu.com/tutorials/tutorial-create-a-usb-stick-on-ubuntu#1-overview

Create a bootable USB device from a Mac: https://ubuntu.com/tutorials/tutorial-create-a-usb-stick-on-macos#1-overview

Create a bootable USB device from Windows: https://ubuntu.com/tutorials/tutorial-create-a-usb-stick-on-windows#10-installation-complete

Now that we have created a bootable USB drive, our next step is to use a laptop, desktop or dedicated server to boot from this drive, which we will do in the next section to create a working server.

Installing Ubuntu Server

Once you have created your bootable USB or CD from the previous section, you are ready to begin installing Ubuntu Server on a system that you would like to turn into a server.

Ubuntu.com also provides a detailed step by step guide on installing Ubuntu Server here: https://ubuntu.com/tutorials/tutorial-install-ubuntu-server#1-overview

As you follow the guide, you will see that the steps are straightforward and help you to get a working Ubuntu Server system. However, the steps are also time consuming, especially if you needed to create 10 or 100 servers for your business. Later in this post we will look at ways to speed up this process.

Once Ubuntu Server is provisioned, your server is ready to configure. The configuration steps you need to perform will depend on the purpose of your server. We can’t include all potential configurations in one post, so we will focus on one popular use of a server, which is to configure it to be a web server hosting one or more websites.

In our next section, we will configure a LAMP (Linux, Apache, MySQL, PHP) web server.

Configuring a LAMP (Linux, Apache, MySQL, PHP) web server

In our previous sections we created a bootable USB or CD and installed Ubuntu Server on our server. In this section we will perform a number of steps to configure this server into a working web server that can host one or more websites, using the following commands in our terminal.

sudo apt update

This apt update command uses apt, Ubuntu’s package manager. The command updates apt’s package lists so that it is aware of the newest versions of packages and their dependencies.

sudo apt install php apache2 mariadb-server -y

This command uses apt to install PHP, a popular scripting language; Apache2, a popular web server; and finally MariaDB, a popular relational database. The -y flag is short for ‘Yes’, telling apt to continue with the installation without asking for confirmation.
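
If you want to confirm that the web and database services came up after the installation, systemd can report their status. The following is a quick check, assuming the standard service names used by Ubuntu’s apache2 and mariadb-server packages:

sudo systemctl enable --now apache2 mariadb
systemctl status apache2 --no-pager
systemctl status mariadb --no-pager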

If you open the IP address of your new server in your web browser, you will see the default homepage of the Apache web server.

If you are unsure how to see the IP address of your server, you can run the following command on the server to determine its current IP address:

ifconfig | grep inet

You will see an output similar to the following:

inet 192.168.1.132 netmask 255.255.255.0 broadcast 192.168.1.255

The first IP address of 192.168.1.132 is the IP address of your server. Open this IP address in your web browser to see the Apache homepage.
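
If the ifconfig command is not found, it is provided by the net-tools package, which is not always installed on newer Ubuntu releases. The ip and hostname commands, installed by default, can show the same information:

ip -4 addr show
hostname -I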

Congratulations, you have provisioned a new server to use Ubuntu Server and configured it to become a web server.

If you wanted to begin hosting a website at this address, you would need to upload your site’s files to the folder located at /var/www/html/ on the server.
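
As a quick test, you could place a simple page in that folder directly on the server; the page content here is just an illustration:

echo "<h1>Hello from my private cloud</h1>" | sudo tee /var/www/html/index.html

Reload the server’s IP address in your browser and you should see this page instead of the default Apache page.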

In this section, we created one single web server. In our next section we will use MAAS, a tool which will help us to provision multiple servers over a network connection.

Managing provisioning of physical servers using maas.io

Maas.io is a service developed by Canonical, the company that is also responsible for creating Ubuntu. Metal as a Service (MAAS), once installed, operates as a controller to help you deploy an operating system onto one or more servers over your network.

Earlier we covered the process of downloading an ISO image, copying it to a USB device and then taking the time to create a working server from scratch, one at a time. With MAAS, the process is a little different. The MAAS software can be installed on a server acting as a controller, or on your own PC or laptop. Once installed, you will be asked a small number of questions, such as a region name, and then MAAS is ready to provision servers.

If you are using Ubuntu already on your PC, MAAS is quick and easy to install using the following commands:

sudo snap install maas

sudo maas init

MAAS will ask a small number of questions to set itself up, as follows:

  1. Choose all and accept the default URL
  2. Add a username, a password and an email address
  3. Skip the import key step for now unless you have a key to import

Once the steps have been completed, you can open the MAAS dashboard at:

http://127.0.0.1:5240/MAAS/#

Once you open the MAAS dashboard, you will be asked a few remaining questions.

  • Set a ‘region’ name such as ‘home’ or ‘office’
  • Choose a DNS forwarder, such as Google’s DNS addresses of 8.8.8.8 and 8.8.4.4 as shown on the screen
  • Choose one or more ISO images you would like MAAS to download to its own library to install on a server later (this process can be slow depending on your internet connection, so for now choose just one ISO, such as Ubuntu 20.04)
  • Press Update Selection to save the image choices
  • Press Continue
  • Import your SSH public key
    • Choose Upload
    • Then paste in your local public key
    • If you are unsure of your local public key, open a terminal and type the following: cat ~/.ssh/id_rsa.pub (see the sketch after this list if you do not have a key yet)
    • Copy the output and paste it into the field called Public key in the MAAS setup screen
    • Press Import
    • Press Go to dashboard
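
If the cat command above reports that no key exists, you can generate a new key pair first. This is a minimal example; the comment after -C is only a label:

ssh-keygen -t rsa -b 4096 -C "you@example.com"
cat ~/.ssh/id_rsa.pub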

You now have maas.io installed. If you have access to other physical servers in your home or place of work, you can boot them up and, instead of using a CD or USB device with an ISO on it, set each machine to boot over the local network (sometimes called Preboot Execution Environment, or PXE).

Once you have one or more physical servers started up in this way, they will connect to the MAAS controller. The servers will appear in the MAAS user interface, ready to provision. This process allows you to quickly and easily install a number of different operating systems on your physical devices, locally or remotely. Once a server has been provisioned with an operating system, it is ready to be configured as a web server or for any other purpose.

An alternative service that operates in a similar way to maas.io is Digital Rebar, available at https://rebar.digital/. Digital Rebar provides the same functionality as maas.io, and it also supports orchestration tools such as Ansible and Terraform, which we will cover in our next post on creating and provisioning cloud infrastructure.

Deploying virtual machines on your physical servers using KVM

KVM stands for Kernel-based Virtual Machine. In the previous section, we provisioned a physical server and configured it into a LAMP web server. While this process works well, it is also possible to further “divide” a server into multiple virtual servers that can even run different operating systems.

KVM is just one method of creating virtual machines and in this section we will look at how to create one or more virtual machines using KVM on your existing server.

Use the following command in your terminal to install KVM and some necessary dependencies (on recent Ubuntu releases such as 20.04, the older libvirt-bin package has been replaced by libvirt-daemon-system and libvirt-clients):

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst bridge-utils cpu-checker

Once it has been installed, you can verify it using the command:

kvm-ok

You will see an output similar to the following:

INFO: /dev/kvm exists

KVM acceleration can be used
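
If kvm-ok reports that acceleration cannot be used, check that your CPU exposes virtualization extensions and that they are enabled in the BIOS/UEFI. A quick way to check is to count the relevant CPU flags:

egrep -c '(vmx|svm)' /proc/cpuinfo

A result of 0 means the extensions are not visible to the operating system; any higher number means the CPU supports them.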

Now we can create a new VM using the following:

virt-install \
--name ubuntu \
--description "Our first virtual machine" \
--ram 1024 \
--disk path=/var/lib/libvirt/images/ubuntu-1.img,size=10 \
--vcpus 2 \
--virt-type kvm \
--os-type linux \
--os-variant ubuntu18.04 \
--graphics none \
--location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \
--extra-args console=ttyS0

When you arrive at the Software selection screen, make sure to install OpenSSH Server. This will allow you to connect to the VM using SSH when it is finished installing as shown in the following image:

Figure 3.1 – Software selection

To see a list of virtual machines running on your server, you can use:

virsh list

You will see an output similar to the following:

 Id   Name     State
----------------------------
 1    ubuntu   running

To SSH into the server, we need to find out its IP address. Each virtual machine has some XML metadata which includes the details of the virtual machine, such as its name, its description and most importantly its network information.

To export the metadata of a virtual machine, use the following command, using the ‘Name’ of the virtual machine from the virsh list command:

virsh dumpxml ubuntu

(where ubuntu is the name of the virtual machine)

This will output some XML to your screen. The XML can be over 100 lines long, however the part we are interested in is the MAC address, as shown in the following snippet:

<interface type='user'>
  <mac address='52:54:00:66:65:51'/>
  <model type='virtio'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Within the XML, look for a field called ‘mac address’. A MAC address is an identification number that identifies each device on a network. The MAC address will look something like 52:54:00:65:27:b7.

We can now use another command called arp to see the IP address of the device using that MAC address. ARP stands for Address Resolution Protocol. This protocol is used by network nodes to match IP addresses to MAC addresses.

In your terminal, use the following command to look up the MAC address:

arp -an | grep 52:54:00:65:27:b7

This command will show an output similar to:

? (192.168.122.23) at 52:54:00:65:27:b7 [ether] on virbr0

In the round brackets, you will see an IP address similar to 192.168.122.23, this is the IP address of the virtual machine you have created.
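
Depending on your libvirt version and how the virtual machine is attached to the default network, you may be able to ask libvirt for the address directly instead of searching the ARP table. These two commands are worth trying first:

virsh domifaddr ubuntu
virsh net-dhcp-leases default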

To SSH into the server (make sure you installed OpenSSH when installing Ubuntu on the virtual machine) you can use the following command:

ssh {username}@192.168.122.23 (where {username} is the name you supplied when installing Ubuntu)

You will also be prompted for the password you provided during the installation, and then you will be logged in to your virtual machine.

Once logged in, you could run the same commands as earlier in this post to turn the new VM into a web server:

sudo apt update
sudo apt install php apache2 mariadb-server -y

The virsh program is the main interface for managing virtual machines. The program can be used to create, pause and shut down machines. If you would like to see more commands, use:

virsh --help
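
A few of the virsh subcommands you are likely to use day to day are shown below; each takes the machine name shown by virsh list:

virsh shutdown ubuntu   # ask the guest to shut down cleanly
virsh start ubuntu      # boot the machine again
virsh reboot ubuntu     # restart the guest
virsh console ubuntu    # attach to the serial console (exit with Ctrl+])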

While this process is useful to make a virtual machine, it is still a slow process as you will need to answer questions during the installation. In a later section named Creating custom images for KVM using Packer we will look at creating customized virtual machine images. This means performing the configuration first and then booting up a completed virtual machine, ready to use.

Cloning your virtual machine

Creating multiple physical or virtual machines takes time. If you have taken the time to create and configure a virtual machine in the last section, you can use this as a template for creating additional virtual machines.

Cloning a virtual machine is the process of taking a copy of an existing virtual machine and creating a new copy of it with a different name, leaving the original virtual machine in place unchanged.

Before cloning an existing virtual machine, it needs to be shut down first using the following command:

virsh shutdown ubuntu

To clone, or copy, the existing virtual machine, use the following command to copy the ubuntu virtual machine and create a new ubuntu-2 machine:

sudo virt-clone --original ubuntu --name ubuntu-2 --file /var/lib/libvirt/images/ubuntu-2.img

You will see the following output:

Allocating ‘ubuntu-2.img’ | 10 GB 00:00:25

Clone ‘ubuntu-2’ created successfully.

You can now start your old and new VMs using the following commands:

virsh start ubuntu

virsh start ubuntu-2

You can see both machines running using:

virsh list

The number of virtual machines you can run depends on the available CPU, memory and disk space on your server.

A useful command to keep an eye on your available disk space is:

df -h

This will show the size used and the available space on each drive on your server. The name of your disk drive might vary, but it is likely to be called /dev/sda1.
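
Since virtual machine disk images are usually the largest files on the host, it can also be useful to check their sizes directly:

sudo du -sh /var/lib/libvirt/images/*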

A useful command-line program to monitor CPU and memory is called htop, which can be installed using:

sudo apt install htop -y

And then running htop with:

htop

Figure 3.2 – htop showing running processes, memory and CPU usage

This will show you a live screen of running processes, memory and CPU being used. You can press q to quit.

Now that we have created a virtual machine, it is available to use and learn from. In our next section, we will show how to delete virtual machines if you no longer need them.

Deleting your virtual machine

If you would like to delete your virtual machines, first stop the machine using the following command:

virsh destroy ubuntu

(where ubuntu is the name of your virtual machine from using virsh list)
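
Note that virsh destroy forcefully stops the machine but leaves its definition registered with libvirt. To remove the definition as well, follow it with:

virsh undefine ubuntu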

Even once the virtual machine has been removed, its disk image will remain in the /var/lib/libvirt/images/ folder and might take up a lot of space, depending on the size of the disk you told it to create.

To save on disk space, delete the image using the following command:

sudo rm /var/lib/libvirt/images/ubuntu-1.img

This section showed us how to create virtual machines using commands in our terminal. In our next section we will use Packer, a piece of software developed by HashiCorp, to create images which can be saved and used repeatedly.

Creating custom images for KVM using Packer

Packer is a tool created by a company called HashiCorp. It allows you to describe a server in a JSON format and then build that description into a number of different image types. One Packer file could create an AWS EC2 AMI, while another could create a QEMU Copy On Write version 2 (QCOW2) image, suitable for virtual machines using KVM, which is what we will use here.

Packer is a tool that can help to automate the building of virtual machine images. Its JSON files can be committed to a repository such as GitHub as part of working towards Infrastructure as Code, an approach to infrastructure development which we will cover in the next post.

To install Packer, follow the instructions on the Packer website, which are suitable for Windows, Linux or Mac:

https://www.packer.io/intro/getting-started/install.html

Once Packer has been installed, we can begin to create a new Packer file. In the following example we will create a CentOS image instead of an Ubuntu image, mainly because it is a smaller download and a simpler build to learn from. You can see an example of an Ubuntu Packer build in the source code for this post.

Create a new file called centos.json, with the following content:

{
  "variables": {
    "centos_password": "centos",
    "version": "2003"
  },

This variables block allows you to declare some variables to use later in the build. In this case we are creating a password that will be used to SSH into the server if needed, and a version of CentOS to download and use.

Next, we add the following:

  "builders": [
    {
      "vm_name": "centos-packer.qcow2",
      "output_directory": "output-centos",
      "iso_urls": [
        "iso/CentOS-7-x86_64-Minimal-{{ user `version` }}.iso",
        "http://mirror.de.leaseweb.net/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-{{ user `version` }}.iso"
      ],

The vm_name field controls the name of the output file when the build is finished, and output_directory controls the folder to place it in. In this case the output is a qcow2 file, a format supported by KVM.

The fields labelled iso_* describe where to get the ISO file to install, along with a checksum used to verify the download.

Next, we add:

      "iso_checksum_url": "http://mirror.de.leaseweb.net/centos/7/isos/x86_64/sha256sum.txt",
      "iso_checksum_type": "sha256",
      "iso_target_path": "iso",
      "ssh_username": "centos",
      "ssh_password": "{{ user `centos_password` }}",
      "ssh_wait_timeout": "20m",

The ssh_* fields control the username, password and timeout values to use when connecting to the virtual machine. The SSH timeout needs to allow enough time for the operating system to finish installing before Packer can SSH into the server, so set it to at least 10 minutes.

Next, we add:

      "http_directory": "http",
      "boot_command": [
        "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
      ],
      "boot_wait": "2s",
      "shutdown_command": "echo '{{ user `centos_password` }}' | sudo -S /sbin/halt -h -p",

The http_directory is a folder on the host machine that Packer shares over HTTP with the virtual machine being created. The boot_command points the installer at a kickstart file called ks.cfg inside this folder. A kickstart file is a configuration file containing pre-written answers to the questions asked during the operating system installation, which automates the process. The boot_wait allows a few seconds for the VM to start up, and the shutdown_command supplies the command used to shut the machine down and complete the build.

Next, add:

      "type": "qemu",
      "headless": true,
      "memory": 2048,
      "cpus": 2

The type key has a value of qemu, the builder type needed to create a virtual machine image that KVM can use. The memory and cpus keys describe the memory and CPU given to the virtual machine. The headless key, with a value of true, runs the build without opening a visual display on a desktop. We can finish the file by closing the brackets as follows:

    }
  ]
}

The full file content can be viewed on GitHub at: https://github.com/gordonmurray/learn_sysops/blob/master/chapter_3/packer/centos/centos.json

Once the file is created, we can perform the following command to validate the file:

packer validate centos.json

And once the file is valid, we can run the following command to build the image:

packer build centos.json

The build will take a few minutes to run as it downloads the ISO, verifies it, installs CentOS and answers the questions for you.
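
When the build completes, the image should appear in the output-centos folder we named in the template. If you have the qemu-utils package installed, you can inspect the image before launching it; the paths here follow the values we set in centos.json:

ls -lh output-centos/
qemu-img info output-centos/centos-packer.qcow2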

Once the image has been built successfully using Packer, we can launch the virtual machine using the following command, similar to creating a new virtual machine earlier in this post:

virt-install --import \
--name centos \
--description "Our centos virtual machine" \
--ram 1024 \
--disk output-centos/centos-packer.qcow2,format=qcow2,bus=virtio \
--vcpus 2 \
--virt-type kvm \
--os-type linux \
--os-variant=rhel7.5 \
--graphics none \
--check all=off

Once the virtual machine has been created, you can run virsh list again to see the new virtual machine.

If you SSH into the virtual machine you’ll see the operating system is already installed and ready to configure.

If you would like to take the packer build further, you can add optional ‘provisioners’. We will cover Provisioners in more detail in a later post called Infrastructure as Code. Provisioners can be used to run commands such as installing software during the packer build so that you don’t need to install the same software after you launch the virtual machine.

In our next section, we will leave virtual machines behind and look at an alternative, Docker containers.

Deploying Docker containers on your physical servers

So far we have looked at creating a new physical server and configuring it to become a web server. We then learned how to create virtual machines and build custom images using Packer. Another popular approach to deploying software on a server is to use Docker containers. When compared to virtual machines, containers can be smaller on disk, easier to create and use fewer resources to run, making them a great option for packaging and deploying modern software while using server resources more efficiently.

In an earlier post, ‘Local Containers’, we introduced the concept of Docker containers, and in this section we will build and deploy a working container onto our server.

In your editor, create a new file called Dockerfile with the following contents:

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_RUN_DIR /var/www/html/
EXPOSE 80
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

To build an image from this Dockerfile, use the following command in your terminal:

docker build --pull --rm -f Dockerfile -t ubuntu:latest .

This process will take a few minutes to build an image. Once the process has finished, you can see a list of available Docker images using the command:

docker images

You will see an output similar to:

REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
ubuntu       latest   fbdb9b18aa5a   22 seconds ago   189MB

To run this image so it becomes a working container, use the following command:

docker run --rm -d -p 8080:80/tcp ubuntu:latest

To see a list of running containers use the following command:

docker ps

CONTAINER ID   IMAGE           COMMAND                  CREATED         STATUS         PORTS                  NAMES
f608b6672159   ubuntu:latest   "/usr/sbin/apache2 -…"   8 seconds ago   Up 6 seconds   0.0.0.0:8080->80/tcp   happy_cerf

To connect to the running container use the following command:

docker exec -it f608b6672159 /bin/bash

(where f608b6672159 is the container ID shown by docker ps).

Earlier in this post we created a virtual machine and then configured it to become a web server. With containers, the usual approach is to do that work ‘up front’ in the Dockerfile, so that when an image is built it already has everything configured and ready to run, and you don’t need to connect to the container and install any software.

If you would like to take Docker further, it is possible to store your Docker images in Docker Hub, which is very similar to storing source code in GitHub. Docker Hub accounts are free to set up, and you can tag and push your images to Docker Hub and deploy them to your servers using a number of deployment practices and third-party tools that we will cover in later posts.
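
As a rough sketch of that workflow, assuming you have created a Docker Hub account and a repository (the your-dockerhub-user/my-apache name below is only a placeholder), tagging and pushing the image we built would look something like this:

docker login
docker tag ubuntu:latest your-dockerhub-user/my-apache:1.0
docker push your-dockerhub-user/my-apache:1.0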

Docker Compose

Docker Compose is a tool that lets you describe a number of different container images in a single file, usually called docker-compose.yml. When instructed, Docker Compose will build or retrieve all the necessary images and start the corresponding containers.

Docker compose can be installed locally, on a virtual machine or run in a cloud environment such as AWS.

Using Docker Compose allows you to coordinate running a number of container images. Often when developing a web application you will need a number of different elements, such as your main web application, a database, some storage for files or even a cache such as Redis.

Docker allows you to create your own container images for each of those needs, and Docker Compose allows you to instruct your server to run each of those images so they can all run and communicate with one another to form a complete web application.

In our next example we will use Docker Compose to create two containers: a web server and a MariaDB database server.

First we need to install Docker Compose. On Linux you can use the command:

sudo apt install docker-compose

For installing Docker Compose on other operating systems such as Mac or Windows, you can find instructions on the Docker website at https://docs.docker.com/compose/install/

Create a file called docker-compose.yml in the same folder as your earlier Dockerfile, with the following content:

docker-compose.yml

version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
  database:
    image: "mariadb:latest"
    environment:
      MYSQL_ROOT_PASSWORD: "example-password"

In this file we are declaring two services, web and database. In the web service we are telling Docker Compose to build the Dockerfile in the current directory by using the value of `.`.

In the database section, we are telling Docker to pull the latest version of MariaDB, a popular relational database. The official MariaDB image will not start without a root password, so we supply one through the MYSQL_ROOT_PASSWORD environment variable (choose your own value here).

Next we can perform the following command to start our containers:

docker-compose -f docker-compose.yml up -d --build

Once the web service has been built and the database service downloaded, you will see an output similar to:

Creating docker_database_1 … done

Creating docker_web_1 … done

We can view our running containers using docker ps

CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS         PORTS                  NAMES
5e7b1e9990d7   mariadb:latest   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   3306/tcp               docker_database_1
cdeab94c8b24   docker_web       "/usr/sbin/apache2 -…"   7 minutes ago   Up 7 minutes   0.0.0.0:8080->80/tcp   docker_web_1

In this section, we used docker-compose to create two services. We declared a web service and instructed Docker to build the Dockerfile in the current directory.

We then created a second database service and instructed Docker to pull a premade MariaDB image from Docker Hub.
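
When you are finished, Docker Compose can also stop and remove everything it created from the same file:

docker-compose -f docker-compose.yml down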

In our next section we will look at Portainer, a piece of software that provides a user-friendly interface for managing docker images.

Managing containers using Portainer.io

Portainer is a web application that you can run to provide an easy to use user interface for your containers and related actions.

If you are running on a Linux system, it is very quick to install and get up and running with Portainer using two Docker commands:

docker volume create portainer_data

docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

The first command creates a volume to store Portainer’s data and the second command runs the Portainer container. Once installed, open your web browser at http://localhost:9000; you will be prompted to set a password and you will then be logged in.

If your containers are still running from earlier in this post, you will see them listed in the ‘Containers’ section of Portainer.

To gain more experience running servers from containers, Portainer can provide an ideal user interface to get used to creating and deploying working environments.

Portainer also provides App Templates, a list of common services such as databases or blog platforms, which you can deploy easily to learn from or to use as part of a web application.

Summary

In this post we covered a number of ways to create and use a physical server. These same tools and approaches can be used to create a simple home server for learning, or used in a work environment to create servers and configure them to provide useful services to staff and customers.

In a future post, we will explore other tools and technologies such as Ansible that can be used to further develop or automate some of the steps we have taken in this post to configure a new server, to create virtual machines and even create and deploy docker containers.

