An introduction to Infrastructure as Code with Terraform, CloudFormation, Ansible and Pulumi

In this post we will learn about Infrastructure as Code (IaC). IaC is the process of writing code that takes on the task of creating and maintaining your cloud-based infrastructure. This includes the items that make up your web application, such as servers, security groups, network resources, and more. 

Some of the same tools and practices that you might use to develop and deploy application code changes into production, such as committing code changes to GitHub for testing and deployment, can also be used for infrastructure. 

Using IaC tools, infrastructure requirements can be written in your code editor, just like normal application code. This work can be committed to a source code repository such as GitHub and undergo reviews from coworkers as well as run any tests before being deployed. 

A number of tools exist to help you to describe and develop the necessary infrastructure required for your software application. In this post, you will be introduced to working with a number of popular IaC tools, including: 

  • Hashicorp Terraform 
  • AWS CloudFormation 
  • Red Hat Ansible 
  • Pulumi 

Hashicorp Terraform 

Terraform, created by a company called Hashicorp, was released in 2014. It provides a configuration language that allows you to describe the infrastructure items you would like to create using code. This code can then be stored in a repository for collaboration with a team and executed to create or modify the infrastructure. 

Terraform includes a concept known as providers, which allow it to connect with cloud provider APIs such as AWS, Google Cloud, Azure, and many others. 

Terraform is open source and uses a custom language called Hashicorp Configuration Language (HCL), which is similar to JSON. 

Terraform code is declarative, meaning you do not need to tell it the specifics of how to create an EC2 instance or a security group; instead, you describe the EC2 instances or security groups you want in place and Terraform creates them as needed. If you run your Terraform code over and over again, it will create or modify your infrastructure as needed to reach the desired result. It will not duplicate your EC2 instances or security groups over and over again. 

If you modify your infrastructure independently of Terraform, such as resizing an EC2 instance manually in the AWS console, Terraform will be unaware of the change until you perform a terraform apply command. Terraform creates a 'state' file, which is a list of the items it manages. When you run Terraform again, it compares its state file to the corresponding items in your account and plans out any additions, updates, or deletions it needs to perform to bring the infrastructure back to the way you described it in your HCL code. 

If you begin to use Terraform, or any other Infrastructure as Code solution, it is best to commit to using it fully for the infrastructure you are creating. If you use both Terraform and manual changes in your cloud provider, Terraform will likely try to revert any changes you have made. In some cases this can be good, such as closing security group ports that should not be open; however, in other cases it might perform actions you don't want, such as resizing a server or database back down to a smaller size after you intentionally changed it manually. 

If you have an existing infrastructure that you have created before using Terraform, it is possible to import those items so that you can manage them with Terraform in the future. 

If you are importing an EC2 instance for example, you would need to write the HCL code first as if you are creating the instance for the first time, and then before running a terraform apply command, you would run a terraform import command. This would connect the EC2 instance you specify to the HCL code you have written and when it successfully imports, the information will be added to the state file and that EC2 instance can be modified using Terraform from that point onwards. 
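
For example, assuming the aws_instance.example resource shown later in this post and a placeholder instance ID, the import command would look something like this:

terraform import aws_instance.example i-0abc123def456789   # replace with your real instance ID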

When creating or importing any resource in Terraform, such as an EC2 instance, you will need to know the required or optional arguments that Terraform needs to create that resource. Terraform provides detailed documentation for each AWS resource it can create at https://www.terraform.io/docs/index.html 

Hashicorp, the creators of Terraform, also provide a learning resource at https://learn.hashicorp.com/, which includes a guide on how to install Terraform on Windows, Mac, or Linux machines: https://learn.hashicorp.com/terraform/getting-started/install.html. Follow the installation instructions, and in the next section we will begin using Terraform to create a basic web server on AWS. 

We will use Terraform to create a key pair, a security group, and an Ubuntu EC2 instance, then install the Apache web server and upload a simple Hello World HTML page. 

Using Terraform to create a simple web server on AWS 

To use Terraform we will create a small number of files. These file names don’t matter too much as long as they end with the .tf extension. These files will describe the items that we want to create in our AWS account and when we run Terraform, it will read each of the .tf files and evaluate which items need to be created and in what order to create them. 

We will use Terraform to create a simple web server, just like we used the AWS CLI in an earlier post. We will create an EC2 instance, a security group with one or more access rules, and a key pair to allow us to SSH into the server. 

We will create the following files: 

  1. main.tf – This is the main terraform file. It includes a Provider which tells Terraform we are working with AWS, the region to use, and the location of our credentials. 
  2. ec2.tf – this file will contain the instructions to create an EC2 instance using Ubuntu. 
  3. key_pair.tf – this will contain the instructions to create a Key Pair from your public key 
  4. security_group.tf – this will contain the instructions to create a security group and attach it to the EC2 instance 
  5. security_group_rules.tf – This will contain the instructions to open up port 80 so we can access the website on our EC2 server 
  6. variables.tf – This is where you define any variable types you’d like to use in your other terraform files 
  7. terraform.tfvars – this is where you store variable values that can be used in other terraform files 
  8. output.tf – this will contain the EC2 address and the security group ID that we have created so that we can connect to the server once it has been created. 
  9. files/index.html – a simple html page with the content ‘Hello World!’  

Next, we will look at each file in detail to show the content required for each file and describe its purpose when creating our web server on AWS 

main.tf 

This is the main terraform file. It includes a Provider which tells Terraform we are working with AWS, the region to use, and the location of our credentials. 

provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "example"
}

ec2.tf 

This file contains the instructions to create an EC2 instance using Ubuntu. The data block searches for an Ubuntu AMI published by Canonical that can be used by the aws_instance block later in the file. 

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

# Create EC2 instance
resource "aws_instance" "example" {
  ami                    = "${data.aws_ami.ubuntu.id}"
  instance_type          = "t3.micro"
  vpc_security_group_ids = ["${aws_security_group.example.id}"]
  key_name               = "${aws_key_pair.pem-key.key_name}"

  tags = {
    Name = "terraform-webserver"
  }
}

key_pair.tf 

This file contains the instructions to create a Key Pair from your public key. It assumes you have a public key file on your PC located at ~/.ssh/id_rsa.pub. 

resource "aws_key_pair" "pem-key" {
  key_name   = "terraform-webserver-key"
  public_key = "${file("~/.ssh/id_rsa.pub")}"
}

security_group.tf 

This file contains the instructions to create a security group and attach it to the EC2 instance. We do not open ports to allow traffic in to the EC2 instance until the next file. 

resource "aws_security_group" "example" {
  name        = "example"
  description = "example security group"

  tags = {
    Name = "terraform-webserver"
  }
}

security_group_rules.tf 

This file contains the instructions to open port 80, so we can access the website on our EC2 server from our web browser. This file also contains a rule to open port 22 so that we can SSH into the server if needed. 

Finally, an egress rule is added to allow all traffic out of the EC2 instance. 

resource "aws_security_group_rule" "http" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.example.id}"
  description       = "Public HTTP"
}

resource "aws_security_group_rule" "ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${var.my_ip_address}/32"]
  security_group_id = "${aws_security_group.example.id}"
  description       = "SSH access"
}

resource "aws_security_group_rule" "example_egress" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "all"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.example.id}"
  description       = "Allow all out"
}

variables.tf 

This is where you define any variables you'd like to use in your other Terraform files, along with their types. Declaring them here makes their values available to the rest of your Terraform code. 

# Variables used
variable "my_ip_address" {
  // read from terraform.tfvars
  type = "string"
}

terraform.tfvars 

This file is where you create any variable values that can be used in other terraform files. For now, we just need to include one variable called my_ip_address, which should contain your own IP address so that your IP address will be allowed to SSH in to the resulting EC2 instance. 

# Your current IP address
my_ip_address = "xxx.xxx.xxx.xxx"
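
If you are not sure what your current public IP address is, one quick way to check (assuming you have curl installed and are happy to query a third-party service such as ifconfig.me) is:

curl ifconfig.me   # prints your current public IP address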

output.tf 

This will contain the EC2 address and the security group ID that we have created so that we can connect to the server once it has been created. 

output "public_dns" {
  description = "The EC2 instance DNS"
  value       = "${aws_instance.example.public_dns}"
}

output "security_group_id" {
  description = "The security group ID"
  value       = "${aws_security_group.example.id}"
}

files/index.html 

Here we will create a simple HTML file with the content 'Hello World!', or whatever content you would like to see when we access our new web server in a browser, to confirm it is responding with our chosen content. 

Hello World! 

Running our Terraform files  

Now that we have created our files and learned a little about the purpose of each one, it is time to use Terraform to review the plan of the items it will create and then proceed to create our infrastructure. 

Before we begin running Terraform, make sure you have edited the terraform.tfvars file to include your current IP address. This will allow you to SSH into the server later and to run a provisioner to customize the server. 

Once the files are in place, run the following command to initialize Terraform. 

terraform init 

This step will download any necessary providers, in this case it is an AWS provider so that Terraform can communicate with the AWS API. 

You can use the following command to view the steps that Terraform plans to take, without making any actual changes: 

terraform plan 

If there are any issues with your *.tf files, you will begin to see those errors here in the output. The error output should show you the file with the problem and the line number to help you to debug any issues. 

If there are no issues, the output will show you the changes it plans to make. The changes are broken down into three categories, those items to add, change, or destroy. Since this is our first run, it should only have items to add. 

Plan: 5 to add, 0 to change, 0 to destroy. 

If you are satisfied with the output, we can use the following command to tell Terraform to go ahead and apply the changes to your AWS account 

terraform apply 

When you run this command, the changes will not be made straight away. Terraform will review its state file, compare it to your configuration, and show the changes it plans to make again, just like the terraform plan command. 

This time it will prompt you to ask you if you’d like to go ahead. Type yes and press enter to continue to let Terraform make the changes. Since this is a small example, it should only take a minute or less to apply. 

Once it has completed, it will show you a small output based on the instructions we added to the output.tf file. It will show the public DNS name of the new server that has been created. 

Congratulations, you just used Terraform to create a server on AWS. If you log in to your AWS console in your web browser, you’ll see a new EC2 instance called terraform-webserver running in the us-east-1 region. 

So that Terraform can remember the items it has created, it creates a file called terraform.tfstate. You can open the file and take a look. It stores information on the items it has created so far in a JSON format. Don’t delete or alter this file. If you delete the file, then Terraform will no longer remember the items it has created in your AWS account and if you run terraform apply, it will try to create all of the items again. 

If you try running terraform plan or terraform apply once again, you’ll see that Terraform doesn’t want to make any changes. This is because Terraform has made the necessary changes and the items now in your AWS account match the information stored in the local Terraform state file. 
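
If you would like to see the resources Terraform is currently tracking without opening the state file directly, you can list them with the following command:

terraform state list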

In its current state, this new server isn’t of much use. It is online and working, however it isn’t doing anything. Let’s go ahead and use Terraform to install Apache web server and display a ‘Hello World’ web page on the server. 

Using Terraform to configure our webserver 

To do this, we will use a Provisioner. This will allow us to run one or more commands on the EC2 instance when it is first created. 

In the ec2.tf file, add the following content after the aws_instance resource block: 

resource "null_resource" "webserver" {
  // upload the index.html file
  provisioner "file" {
    source      = "files/index.html"
    destination = "/home/ubuntu/index.html"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt update",
      "sudo apt install apache2 -y",
      "sudo mv /home/ubuntu/index.html /var/www/html/index.html"
    ]
  }

  // connect over ssh
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file("~/.ssh/id_rsa")}"
    timeout     = "1m"
    port        = "22"
    host        = "${aws_instance.example.public_dns}"
  }
}

This file will use a null_resource to upload our index.html file and then use a remote-exec provisioner to run an apt update, install Apache, and move the index.html file into place. 

The connection block tells Terraform to connect to the server using SSH and provides the username, private key, port, and the host to connect to. 

If we perform a terraform apply command now, nothing will happen as our EC2 instance has already been created from our earlier step. 

We can instruct Terraform to replace the existing EC2 instance and create a new one by tainting the instance. This will mark the instance for replacement the next time Terraform is applied. 

Use the following command to taint our existing EC2 instance: 

terraform taint aws_instance.example  

It should show an output as follows:  

Resource instance aws_instance.example has been marked as tainted. 

Now, we can perform an apply to recreate this instance and it will run the above provisioner as part of its creation. 

terraform apply 

Once this has been completed, open up the Public DNS in your browser and you should see our ‘Hello World!’ HTML file.

In this example, we have used a Provisioner to customize our web server, to install Apache and upload our HTML file. In a production environment, using a Provisioner is not recommended as it can add uncertainty and complexity to the process. Hashicorp recommends using configuration management software, or building machine images with a tool such as Packer, instead. Packer allows you to create a server image first, customize and save it as an AMI, and then Terraform can take this saved image and use it to create an already configured EC2 instance. We will cover using Packer later in this book. 
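
As a rough sketch of what that looks like (not part of this post's working example), a minimal Packer template in its classic JSON format might resemble the following; the source_ami value is a placeholder, and the region, instance type, and AMI name are assumptions you would adjust for your own account. You would build the image with packer build webserver.json and then reference the resulting AMI ID in your Terraform code.

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t3.micro",
      "ssh_username": "ubuntu",
      "ami_name": "webserver-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt update",
        "sudo apt install apache2 -y"
      ]
    }
  ]
}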

The files for this webserver created using Terraform are available here: https://github.com/gordonmurray/terraform_webserver 

If you would like to remove the items that Terraform has created in your AWS account, you can use the following command: 

terraform destroy 

You will be prompted again to add yes and press enter to continue. Terraform will then go ahead and remove any items it created. Once it has finished, you will see the output similar to: 

Destroy complete! Resources: 5 destroyed. 

This webserver example is a basic working example; using the Hashicorp documentation available at https://www.terraform.io/docs/index.html, you can improve on the existing files and customize them to your needs. 

As a suggested next step, try to create a static IP address and attach it to your EC2 instance, so that the IP address of the server doesn't change any time you change or replace the server. Use the following documentation page as a guide to get started: https://www.terraform.io/docs/providers/aws/r/eip.html 
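
A minimal sketch of what that could look like, assuming the aws_instance.example resource used earlier in this post, is shown below; check the aws_eip documentation linked above for the full set of arguments:

# Allocate a static Elastic IP and attach it to our EC2 instance
resource "aws_eip" "webserver" {
  instance = "${aws_instance.example.id}"
  vpc      = true
}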

AWS CloudFormation 

CloudFormation was launched by AWS in 2011 and is a service specific to AWS. CloudFormation can only be used to create AWS resources, unlike Terraform, which can use providers to work with many different cloud providers. 

Using CloudFormation itself is free, though the resources it creates on AWS have their respective costs. CloudFormation has a useful built-in feature to estimate your AWS costs. When you are preparing to create the 'stack', it can estimate the cost by linking to the AWS calculator and pre-populating it with the items from your CloudFormation stack, so you can see the estimated cost before you proceed. 

Using AWS CloudFormation to create a simple webserver on AWS 

CloudFormation gives you a number of options to begin to create your Stack. You can upload or import a template from S3, use a premade template, or create one using the Designer interface in the browser. CloudFormation supports using either JSON or YAML and in our examples we will use YAML. 

YAML stands for YAML Ain’t Markup Language. We will use it in this book as it is more human readable when compared to JSON. YAML is also a common configuration language used by multiple tools, such as Ansible or Kubernetes, so we will see more YAML as we progress through this book. Comments can also be added to YAML files, unlike JSON files. 

CloudFormation cannot create an AWS Key Pair directly. Instead, it asks you to use an existing Key Pair. If you don't already have a Key Pair in place in your AWS account, you can log in to the AWS console to create one, or use the following CLI command to create one in the us-east-1 region: 

aws ec2 create-key-pair --key-name example --query 'KeyMaterial' --output text --region us-east-1 > example.pem 

Next, we will begin creating the files that will contain the instructions we want to give to CloudFormation. First, we will initialize our template and set some parameters that will be needed by CloudFormation to create our stack. 

The AWSTemplateFormatVersion field at the beginning of the file is optional; it identifies the version of the CloudFormation template format so it can be interpreted correctly by CloudFormation. 

We use the Parameters to tell CloudFormation that we will need a KeyName, a VPCID, and a SubnetID. CloudFormation will ask you to fill in those three values before it begins to create the stack. 

Create a file called webserver.yaml with the following content: 

AWSTemplateFormatVersion: "2010-09-09"
Description: Creates an EC2 instance and a security group

Parameters:
  KeyName:
    Description: "Name of an existing EC2 KeyPair to enable SSH access to the instance"
    Type: "AWS::EC2::KeyPair::KeyName"
    ConstraintDescription: "must be the name of an existing EC2 KeyPair"
  VPCID:
    Description: "VPC ID"
    Type: "AWS::EC2::VPC::Id"
  SubnetID:
    Description: "Subnet ID"
    Type: "AWS::EC2::Subnet::Id"

Next, we will use a required Resources section to create an AWS Security group. This security group will open Port 22 for SSH access and HTTP port 80 for web traffic. 

The optional Tags section allows you to add one or more tags to identify your security group. 

Resources:
  InstanceSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      Tags:
        - Key: "Name"
          Value: "cloudformation-webserver"
      VpcId: !Ref VPCID
      GroupDescription: "Webserver security group"
      SecurityGroupIngress:
        - CidrIp: "0.0.0.0/0"
          Description: "Allowed from anywhere"
          FromPort: "22"
          ToPort: "22"
          IpProtocol: "tcp"
        - CidrIp: "0.0.0.0/0"
          Description: "HTTP access from anywhere"
          FromPort: "80"
          ToPort: "80"
          IpProtocol: "tcp"

Next, we will append to the existing Resources section to create our EC2 webserver instance, below our existing security group. It will use an AMI called ami-02df9ea15c1778c9c, which is a freely available Ubuntu AMI. 

This EC2 instance will use the security group we created already earlier in our CloudFormation stack.  

The SecurityGroupIds section is being told to get the Resource called InstanceSecurityGroup and its value of GroupId. 

Again, we have included an optional Tags section to identify our EC2 instance. 

Finally, we use the UserData section to embed a simple bash script to run on our EC2 instance when it is created for the first time. 

In this example, the user data will perform an apt update to update the Ubuntu instance, install the Apache web server, and place an HTML file with the content Hello World! in the /var/www/html folder for Apache to serve. 

  EC2Instance:
    Type: "AWS::EC2::Instance"
    Properties:
      ImageId: "ami-02df9ea15c1778c9c"
      InstanceType: "t2.micro"
      SecurityGroupIds:
        - !GetAtt InstanceSecurityGroup.GroupId
      KeyName: !Ref KeyName
      SubnetId: !Ref SubnetID
      Tags:
        - Key: "Name"
          Value: "cloudformation-webserver"
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          sudo apt update
          sudo apt install apache2 -y
          echo "Hello World!" > /var/www/html/index.html

AWS provides a way to validate your CloudFormation YAML files. Using the AWS CLI, which we used earlier in the book, you can run the following command to validate your webserver.yaml file: 

aws cloudformation validate-template --template-body file:///home/webserver.yaml 

Now that we have written the necessary file and used the command line to validate it, we are now ready to use the AWS console to create our CloudFormation Stack. 

If you are not logged in to your AWS console, log in now and we will create our CloudFormation Stack in the next section 

Running CloudFormation using the AWS console 

Log in to the AWS console, search for CloudFormation, open it and click on Create Stack. 

You will progress through several screens as follows: 

  1. Select the Template is ready option from the Prerequisite section 
  2. Select the Upload a template file from the Specify template section and then Click on Choose file and select your webserver.yaml from above
  3. Upload your webserver.yaml file and click on Next. 
  4. Enter a Stack name, such as Webserver. 
  5. Choose a KeyName from the dropdown menu. 
  6. Choose a SubnetID from the dropdown menu. 
  7. Choose a VPCID from the dropdown menu and click on Next.
  8. Add some optional Tags and choose an IAM role with permissions to create EC2 instances. Leave the IAM role as cloudformation so that CloudFormation has permission to create the resources then click Next.
  9. Use this Review Webserver page to ensure your choices from previous steps are OK, and then click on Create Stack at the bottom of the page.
  10. Your stack should now show a Status of CREATE_IN_PROGRESS
  11. If the initial deployment of the stack fails, you will need to delete the Stack and start again. Once the Stack has built successfully, you can perform any updates without deleting and restarting.
  12. Once the Stack has deployed fully, you can click on the Resources tab to see the items it created. 

Congratulations, you have now used CloudFormation to create a simple web server! 

The source code for the above is available here : https://github.com/gordonmurray/cloudformation_webserver. 

If you wish to delete your Stack afterwards, you can select the radio button next to our webserver stack name and then click Delete. It will remove the items created as part of the Stack deployment.

When deleting a Stack, CloudFormation will delete every Resource described in your Stack. This includes the EC2 instance and its Security Group. 

Running CloudFormation using the AWS CLI 

Before using the AWS CLI to create the Stack, you will need to know the values of each of the following that you would like to use.

  1. VPC ID 
  2. Subnet ID 
  3. Key Pair name 

The following command will use the AWS CLI to create a CloudFormation stack along with parameters to include our template file, KeyName value, SubnetID value, and VPCID value in the AWS region of us-east-1. 

aws cloudformation create-stack --stack-name webserver-cli --template-body file://webserver.yaml --parameters ParameterKey=KeyName,ParameterValue=[key pair name here] ParameterKey=SubnetID,ParameterValue=[subnet ID here] ParameterKey=VPCID,ParameterValue=[VPC ID here] --region us-east-1 

If the command has executed properly, you will receive an output like the following: 

{
    "StackId": "arn:aws:cloudformation:us-east-1:123456789:stack/webserver-cli/12345-1111-2222-3333-1234567890"
}
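
You can check on the progress of the stack at any time using the describe-stacks command, for example:

aws cloudformation describe-stacks --stack-name webserver-cli --region us-east-1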

If you would like to delete the stack afterwards, you can use the following 

aws cloudformation delete-stack --stack-name webserver-cli --region us-east-1 

In this section, we have created a webserver stack using AWS CloudFormation. We prepared the webserver YAML file so that CloudFormation knew the resources it needed to create. 

We used both the AWS console to create the stack step by step as well as gained experience using the AWS CLI to create a stack also. 

In our next section, we will use Ansible to create and configure a server on AWS. You will see that Ansible also uses YAML to create and configure resources. 

Ansible 

Ansible was originally released in 2012 and in 2015 was acquired by Red Hat. Ansible is primarily a configuration management tool that can be used to configure both Windows and Linux environments. 

Using additional modules, Ansible can also manage infrastructure on cloud providers such as AWS. It can be a good choice for creating both infrastructure and then also applying any configuration to servers you create within the infrastructure, allowing you to use the one tool to do both tasks. 

Ansible also includes Ansible Vault. Vault allows you to encrypt files containing sensitive content such as passwords or API keys, so that they can be committed to a source code repository without being stored in plain text. Ansible uses Advanced Encryption Standard (AES) to encrypt the data and you will need a password to edit the data. 

You can store your password to decrypt the files as an environment variable so that they can be read by Ansible during a build or deployment process. We will cover build/deployment processes in our next post. 
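
One common approach, shown here as a sketch (the file path and password are placeholders, and the password file should be kept out of source control), is to point Ansible at a password file using the ANSIBLE_VAULT_PASSWORD_FILE environment variable so you are not prompted for the password:

# Store the vault password in a file outside of source control (example path and password)
echo 'your-vault-password' > ~/.vault_pass.txt
chmod 600 ~/.vault_pass.txt

# Tell Ansible where to find the password so vault-encrypted files can be decrypted automatically
export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt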

If you already use Ansible to configure items within your infrastructure, it can be an attractive choice to also use Ansible to build infrastructure. Since you already know the language, you don't need to learn anything new; however, there are a couple of items to know in advance. 

When creating infrastructure, Ansible doesn't record any state. This can sometimes lead to difficulties when changing your infrastructure, as it doesn't have a previous state to compare to. If you terminate an EC2 instance, for example, and run your playbook to re-create a new EC2 instance of the same name, it will fail to build as the previous server name still exists for a short time until it has fully terminated. It is possible to work around this with unique server names, though you will need to take additional steps to do so. 

Finally, if you want to remove some infrastructure, Ansible doesn’t have a built-in process to taint or remove existing resources such as EC2 instances. You will need to write additional code to remove those items specifically. 

Using Ansible to create and configure a webserver on AWS 

To install Ansible, use the Ansible documentation here to install a version of Ansible for your system: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-the-control-node. 

The following examples use Ansible version 2.8.3 and Python version 3.7.5 and will create an Ubuntu 18.04 LTS server in the eu-west-1 region.  

If you would like to change to another Ubuntu distribution, you can find its AMI here: https://cloud-images.ubuntu.com/locator/ec2/. 

The following steps assume there is an existing VPC and at least one subnet which are available by default in a new AWS account. 

To use the EC2 module, we will also need to install a small number of dependencies. First, we need to install Python, as Ansible is written in Python and needs it installed locally to run: 

sudo apt install python 

Next, we install python-pip, a package manager for Python that we will use to install additional modules: 

sudo apt install python-pip

Finally, we install boto and boto3, the Python SDKs that allow Ansible to interact with AWS resources: 

pip install boto boto3 

Now that we have installed the necessary dependencies, let's go ahead and create two main Ansible playbooks. The first will create a security group and an EC2 instance and add tags to both. The tags are important for the next step of configuring the server. The second playbook will connect to the EC2 instance, install Apache, and upload a simple web page, just like our other examples in this book, such as creating a webserver using AWS CloudFormation. 

We will create the files and folders shown in the layout below. These files and folders follow the recommended structure of an Ansible role. Organizing playbooks with roles allows you to reuse code in other Ansible playbooks in the future.  
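
Based on the files described in the rest of this section, the overall layout looks like this:

.
├── aws_ec2.yml
├── infrastructure.yml
├── webserver.yml
├── group_vars
│   └── all
│       └── pass.yml
├── src
│   └── index.php
└── roles
    ├── apache
    │   ├── files
    │   │   └── webserver.conf
    │   └── tasks
    │       └── main.yml
    ├── deploy
    │   └── tasks
    │       └── main.yml
    ├── ec2
    │   └── tasks
    │       └── main.yml
    ├── keypair
    │   └── tasks
    │       └── main.yml
    ├── php
    │   └── tasks
    │       └── main.yml
    └── securitygroup
        └── tasks
            └── main.yml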

Before we begin using Ansible to create resources in AWS, we will need an AWS key and secret so that Ansible has permission to create resources. 

If you have the AWS CLI already installed, you can either use your existing AWS Key and secret usually stored in ~/.aws/credentials, or create a new API key for this example. 

To use the AWS CLI to create a new access key and secret, use the following command: 

aws iam create-access-key 

You will see a JSON output, take note of the AccessKeyId and SecretAccessKey values. 

In your command line, type the following to get Ansible Vault to create a secure file: 

ansible-vault create group_vars/all/pass.yml 

You will be asked to supply a password and to confirm the password to make sure it is correct. 

This command will create a new empty file and the file will be opened up to begin adding content. Once the file is open, add the following: 

group_vars/all/pass.yml 

ec2_access_key: [ place your AccessKeyId here ]
ec2_secret_key: [ place your SecretAccessKey here ]

If you’d like to edit this file at any time, use the following Ansible-vault command: 

ansible-vault edit group_vars/all/pass.yml 

Next, let's look at each of the files shown in the layout above that Ansible will need to create our resources. 

infrastructure.yml 

This infrastructure playbook file creates an EC2 instance and a security group. Items are tagged so that our next playbook can find and configure them. 


- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    region: eu-west-1
    image: ami-04c58523038d79132 # Ubuntu 18.04 LTS

  roles:
    - keypair
    - securitygroup
    - ec2

webserver.yml 

Our webserver playbook configures an EC2 instance to become a webserver. The EC2 instance was tagged so that this playbook can find and configure the server. 


- hosts: all
  gather_facts: no
  become: yes
  user: ubuntu

  roles:
    - apache
    - php
    - deploy

src/index.php 

A simple file to show the webserver is operating and serving files. 

<h1>Hello World from Ansible!</h1> 

aws_ec2.yml 

This file configures the aws_ec2 Ansible inventory plugin, a dynamic inventory source. It can query your AWS account for tagged instances, instead of hard-coding instance IP addresses. 

plugin: aws_ec2
regions:
  - eu-west-1
filters:
  tag:Name: Webserver
aws_access_key_id: [ your aws access key ]
aws_secret_access_key: [ your aws secret key ]

In the code above, make sure to include your own AWS key and secret values.  
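
Once the EC2 instance has been created and tagged by the infrastructure playbook, you can check that the dynamic inventory is finding it with the following command:

ansible-inventory -i aws_ec2.yml --graph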

group_vars/all/pass.yml 

This file holds AWS credentials so Ansible can connect to your AWS account 

ec2_access_key: [ your aws access key ]
ec2_secret_key: [ your aws secret key ]

roles/apache/files/webserver.conf 

This is an Apache virtual host config file. It tells Apache where to look for your webserver files. 

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    <Directory /var/www/html>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

roles/apache/tasks/main.yml 

This is an Ansible task within a Role, to install Apache. APT is used to install Apache and some common steps are added to remove the default Apache homepage and disable the default Apache virtual host. We then add our own Apache virtual host and restart Apache for the changes to take effect. 

- name: Install Apache
  vars:
    packages:
      - apache2
  apt:
    pkg: "{{ packages }}"
    update_cache: yes
    state: latest

- name: Delete the default apache landing page
  file:
    path: /var/www/html/index.html
    state: absent

- name: Disable apache2 default vhost
  file:
    path: /etc/apache2/sites-enabled/000-default.conf
    state: absent

- name: Copy our virtual host
  copy:
    src: webserver.conf
    dest: /etc/apache2/sites-available/webserver.conf
  register: host_uploaded

- name: Enable our virtual host
  shell: /usr/sbin/a2ensite webserver.conf
  when: host_uploaded.changed

- name: Restart Apache
  service:
    name: apache2
    state: restarted
  when: host_uploaded.changed

roles/deploy/tasks/main.yml 

This is an Ansible task within a Role, to upload our /src folder to the webserver. Any files that we add to the /src folder, such as the pages necessary for a website, will be uploaded to the /var/www/html folder for Apache to serve.  

- name: Copy our src directory to /var/www/html
  copy:
    src: ./src/
    dest: /var/www/html/

roles/ec2/tasks/main.yml 

This is an Ansible task within a Role, to create an EC2 instance and tag it. We provide a key pair name, the instance type, and security group ID to use when creating the EC2 instance. We also tag the instance so that Ansible can find the instance later to configure it. 

- name: Provision an EC2 instance
  ec2:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    key_name: "ansible-webserver"
    id: "ansible-web-server"
    group_id: "{{ security_group.group_id }}"
    image: "{{ image }}"
    instance_type: t2.micro
    region: "{{ region }}"
    wait: true
    count: 1
  register: webserver

- name: Tag the webserver EC2 instance
  ec2_tag:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    region: "{{ region }}"
    resource: "{{ webserver.instances[0].id }}"
    state: present
    tags:
      Name: Webserver

roles/keypair/tasks/main.yml 

This is an Ansible task within a Role, to create an AWS Key Pair. We give the key pair a name and tell it to read our public key located at ~/.ssh/id_rsa.pub so that we can successfully connect to the webserver using SSH later.  

- name: Upload public key to AWS
  ec2_key:
    name: "ansible-webserver"
    key_material: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
    region: "{{ region }}"
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"

roles/php/tasks/main.yml 

This is an Ansible task within a Role, to install PHP. Additional packages could be added to the list here if needed, such as php-curl. This tells the server to install each package using APT. 

- name: Install PHP and some related packages
  vars:
    packages:
      - php
  apt:
    pkg: "{{ packages }}"
    update_cache: yes
    state: latest

roles/securitygroup/tasks/main.yml 

This is an Ansible task within a Role, to create a security group and tag it. We give the security group a name, a description, and open up ports 22 for SSH access to the server and port 80 for HTTP access to the webserver. We also open an egress (outbound) rule to allow traffic out of the instance. 

- name: Create a security group
  ec2_group:
    name: "webserver-security-group"
    description: "Security group for Webserver"
    region: "{{ region }}"
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    rules:
      - proto: tcp
        ports:
          - 22
        cidr_ip: 0.0.0.0/0
        rule_desc: allow all on ssh port
      - proto: tcp
        ports:
          - 80
        cidr_ip: 0.0.0.0/0
        rule_desc: allow all on HTTP port
    rules_egress:
      - proto: "all"
        cidr_ip: "0.0.0.0/0"
  register: security_group

- name: Tag the Security group
  ec2_tag:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    region: "{{ region }}"
    resource: "{{ security_group.group_id }}"
    state: present
    tags:
      Name: Webserver

The above files can be found at: https://github.com/gordonmurray/ansible_webserver_aws 

To create the AWS infrastructure items, use the following command in your terminal in the same folder as the Ansible files we have created earlier. 

ansible-playbook infrastructure.yml --ask-vault-pass 

To configure the webserver once it has been created, use the following command:  

ansible-playbook -i aws_ec2.yml webserver.yml --ask-vault-pass 

You will be asked to supply your password each time so that Ansible can decrypt the pass.yml file. 

In the Ansible output, you will see the public DNS address of the new EC2 server; it will be in a format like ec2-11-111-111-11.eu-west-1.compute.amazonaws.com. 

Open this address in your browser, and you should see Hello World! 

Congratulations, you just used Ansible to create and configure a server on AWS. 

In our next section, we will look at Pulumi. When compared to CloudFormation, Terraform, or Ansible, Pulumi is relatively new to the industry of infrastructure as code. Its approach is different to the declarative nature of the tools we have used so far which makes it especially interesting to look at. 

Pulumi 

Pulumi is very new compared to the tools we have used so far. It was launched in June of 2018. It is free to use via their community edition. Pulumi’s code is open source and available on their GitHub site at https://github.com/pulumi/pulumi. 

In the same way that Terraform has providers to connect to other online cloud services, Pulumi also includes providers, with a growing list of services listed here on their site https://www.pulumi.com/docs/intro/cloud-providers/. 

Pulumi also includes a feature called CrossGuard. CrossGuard is a way to enforce business rules or policies through code. A business might allow its developers to create and manage their infrastructure but may wish for certain rules to apply to the items the developers are creating. 

As an example, a business may like to apply limits such as allowed EC2 instance sizes to manage costs, or ensure some security policies are in place at all times in the infrastructure.  
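
As a rough sketch of what a CrossGuard policy can look like in Python (using the pulumi_policy package; the policy pack name, policy name, and the allowed instance types below are examples only), a policy limiting EC2 instance sizes might resemble the following. A policy pack like this lives in its own small project and can be applied with pulumi up --policy-pack <path-to-policy-pack>.

from pulumi_policy import (
    EnforcementLevel,
    PolicyPack,
    ResourceValidationPolicy,
)

# Example allow-list of instance types; adjust to suit your own cost rules
ALLOWED_INSTANCE_TYPES = ["t2.micro", "t3.micro"]

def check_instance_size(args, report_violation):
    # Only inspect EC2 instances; other resource types are ignored
    if args.resource_type == "aws:ec2/instance:Instance":
        instance_type = args.props.get("instanceType")
        if instance_type not in ALLOWED_INSTANCE_TYPES:
            report_violation(f"Instance type {instance_type} is not in the allowed list.")

PolicyPack(
    name="aws-cost-policies",
    enforcement_level=EnforcementLevel.MANDATORY,
    policies=[
        ResourceValidationPolicy(
            name="limit-instance-sizes",
            description="EC2 instances must use an approved instance type.",
            validate=check_instance_size,
        ),
    ],
)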

There is great potential in Pulumi; it can help bring developers, operations, and even compliance teams closer together by sharing a codebase in a true DevOps fashion.  

In our next section, we will use Pulumi with Python to create a Webserver on AWS. 

Using Pulumi to create a webserver on AWS 

One of the things that makes Pulumi unique when compared to CloudFormation, Terraform, or Ansible is that with Pulumi you can use a programming language you may already be familiar with to create and manage your infrastructure.  

At the time of writing this, Pulumi supports the following runtime languages:

  • Node.js – JavaScript, TypeScript, or any other Node.js compatible language. 
  • Python – Python 3.6 or greater. 
  • .NET Core – C#, F#, and Visual Basic on .NET Core 3.1 or greater. 
  • Go – statically compiled Go binaries. 

Pulumi also works with several core cloud providers including AWS, Google Cloud, and Azure.  

Creating a Pulumi Stack using Python 

Earlier in this post, we saw that AWS CloudFormation uses the term stack to describe the infrastructure items that CloudFormation will create. Pulumi also uses this same term to describe an infrastructure project. 

The first step is to download and install Pulumi. There is a great Getting Started guide on their website at https://www.pulumi.com/docs/get-started/, which will guide you on how to install Pulumi for your platform and choose the language runtime. 

In the steps that follow, we will use AWS, running from a Linux platform using Python. 

If you have the AWS CLI installed from previous sections of this book, you’re good to go. If you don’t have the AWS CLI installed, then take a moment to go to the AWS website and follow the steps to install the CLI https://aws.amazon.com/cli/. 

We need to make sure our system has the necessary Python version and packages to continue, so we will run the following command in our Terminal first: 

sudo apt install python-dev -y
sudo apt install python3-pip -y
sudo apt install python3-venv -y

python-dev contains everything Python needs to compile Python extension modules. python3-pip makes sure we have pip, a Python package manager, installed for Python version 3, and python3-venv provides support for creating lightweight virtual environments. Each virtual environment has its own Python binary and can have its own independent set of installed Python packages in its site directories. 

Once we have installed the necessary Python packages, we can create our first Pulumi Stack: 

First, we will make a folder to store our project: 

mkdir webserver-pulumi && cd webserver-pulumi 

Next, we will run Pulumi with a number of parameters to set up our project: 

pulumi new aws-python --name webserver-pulumi --stack webserver 

This tells Pulumi to create a new project structure, using AWS and Python. We use the --name parameter to give our project a name of webserver-pulumi and we use --stack to give the stack a name of webserver. 

When creating a new project, Pulumi will ask you a few questions in your terminal: 

  • Project description – This allows you to give your project a short description. You can leave the default text in place or add something new like A sample Pulumi project. 
  • AWS region – This allows you to choose an AWS region to use to deploy your stack. You can leave the default us-east-1 in place or change it to a new region such as eu-west-1 if you prefer. 

Pulumi will create a small number of files in our webserver-pulumi directory: 

  • __main__.py – this is the main file that we will use and add more Python code to shortly to describe our stack. 
  • Pulumi.webserver.yaml – this file holds any config items for Pulumi. It will show the AWS region choice we made in our earlier step. You can change the region if you like, but there is no need to. 
  • Pulumi.yaml – this file shows the metadata of our project, including its name, runtime, and description. You can edit these values if you like, but again there is no need to. 
  • requirements.txt – this file contains a short list of items Python will need to import to create our stack with Pulumi. 

Once our stack has been created with the above files, Pulumi will give us three steps to follow to create a virtual environment and install the requirements 

python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt

Run each of the commands in the order they appear.  

You are now ready to run Pulumi by running the pulumi up command. By default, the __main__.py file will have some existing code to try to create an S3 bucket on AWS. 

If you run Pulumi, it will show you a list of the items it wants to create. You will be given a list of options including yes, no, and details. 

  • yes will go ahead and create the S3 bucket on AWS 
  • no will simply end the command and return you to your command prompt 
  • details will show you some more details on the items that will be created 

If you use yes, Pulumi will create the S3 bucket. Since s3 bucket names need to be unique, it will automatically append a string to the my-bucket name if you left the default value in place. 

To continue with creating a webserver, let's remove the S3 bucket and continue with the next steps. Use the following command to remove the resources in your existing stack: 

pulumi destroy 

Pulumi will show you a list of the items it will destroy; you can use yes, no, or details again. Choose yes to allow Pulumi to go ahead and delete the stack resources. 

At this point, the stack still exists though the s3 bucket has been removed. In our next step, we can add some additional information to the __main__.py file to create our webserver. 

Open __main__.py in an editor and perform the following steps.

Empty the file of any content, then add the following two imports at the top. The default template imports only the S3 module, but the code we are about to add uses the wider aws namespace, so we import pulumi_aws as aws instead: 

import pulumi
import pulumi_aws as aws

We will add some variables that Pulumi will need to create our stack. We will add size, vpc_id, subnet_id, public_key, and user_data, very similar to our earlier examples when using the AWS CLI and CloudFormation, so that Pulumi knows the values to use. We will add some minimal user data that the EC2 instance will run when it is created. 

# Define some variables to use in the stack
size = "t2.micro"
vpc_id = "vpc-xxxxxx"
subnet_id = "subnet-xxxxxx"
public_key = "[ Add your public key content here, usually found in ~/.ssh/id_rsa.pub ]"
user_data = """#!/bin/bash
sudo apt update
sudo apt install apache2 -y
echo 'Hello World!' > /var/www/html/index.html"""

Next, we will add a get_ami block to tell Pulumi to search for an Ubuntu AMI on AWS to use when creating our EC2 instance.  

# Get the AMI ID of the latest Ubuntu 18.04 version from Canonical
ami = aws.get_ami(
    most_recent="true",
    owners=["099720109477"],
    filters=[{
        "name": "name",
        "values": ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"],
    }],
)

We will use KeyPair to create a new Key Pair on AWS so that we can connect to the EC2 instance via SSH later if we want to. We provide a key pair name of keypair-pulumi and read in the variable called public_key which we defined earlier to attach our public key value. 

# Create an AWS Key Pair
keypair = aws.ec2.KeyPair(
    "keypair-pulumi",
    key_name="keypair-pulumi",
    public_key=public_key,
    tags={"Name": "keypair-pulumi"},
)

Next, we will create a security group and add three rules. In the ingress block, we will open port 22 so that we can SSH into the server, and port 80 so that the server can be accessed over HTTP in a web browser. In the egress block, we will add a single entry with a protocol of -1, which means all traffic, to allow the EC2 instance to send outbound traffic. We give the security group a name of securitygroup-pulumi and provide a description. It also uses the vpc_id variable we declared earlier to create the security group in the correct VPC. 

# Create an AWS Security group with ingress and egress rules
group = aws.ec2.SecurityGroup(
    "securitygroup-pulumi",
    description="Enable access",
    vpc_id=vpc_id,
    ingress=[
        {
            "protocol": "tcp",
            "from_port": 22,
            "to_port": 22,
            "cidr_blocks": ["0.0.0.0/0"],
        },
        {
            "protocol": "tcp",
            "from_port": 80,
            "to_port": 80,
            "cidr_blocks": ["0.0.0.0/0"],
        },
    ],
    egress=[
        {
            "protocol": "-1",
            "from_port": 0,
            "to_port": 0,
            "cidr_blocks": ["0.0.0.0/0"],
        },
    ],
    tags={"Name": "securitygroup-pulumi"},
)

Finally, we add the interesting part: we will use aws.ec2.Instance to create the actual EC2 instance. We give the server a name of webserver-pulumi and use the size variable to control the size of the instance. In vpc_security_group_ids, we reference the security group created in the previous step by its block name of group and its id value, so the EC2 instance knows the security group ID to use. 

We pass in user_data so that the EC2 instance will run a bash script when it starts up for the first time. This bash script installs Apache webserver and creates a simple hello World file. 

The ami value comes from the Ubuntu AMI lookup we added earlier. The key_name references the key pair we created in a previous step, just like the security group.  

We use the subnet_id variable we declared at the top of the file to place the instance in our chosen subnet. Finally, we apply some optional tags so we can name the server. 

# Create the webserver EC2 instance
server = aws.ec2.Instance(
    "webserver-pulumi",
    instance_type=size,
    vpc_security_group_ids=[group.id],
    user_data=user_data,
    ami=ami.id,
    key_name=keypair.key_name,
    subnet_id=subnet_id,
    tags={"Name": "webserver-pulumi"},
)

Our last step for this file is to add a simple export. This will export the Public IP address of our EC2 instance so that when Pulumi creates our Stack, it will write the EC2 instance Public IP address to our screen, so we can see it and copy it in to our web browser to try it. 

# Show the Public IP address of the webserver EC2 instance
pulumi.export("publicIp", server.public_ip)
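
Once the stack has been created, you can also retrieve this exported value at any time from the command line:

pulumi stack output publicIp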

With the __main__.py file ready, we can now run the following to get Pulumi to create the Key Pair, the security group, and the EC2 instance 

pulumi up 

Like earlier, Pulumi will show us the items it is going to create. Answer yes to allow Pulumi to go ahead. 

Once it runs fully to completion, it will show an output IP address. Open the IP address in your web browser and you should see Hello World! from our user data. 

Congratulations, you have successfully created a new webserver on AWS using Pulumi and Python. 

The above code can be found here https://github.com/gordonmurray/pulumi_webserver 
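
If you would like to remove the resources that Pulumi has created in your AWS account, run the destroy command again and answer yes when prompted:

pulumi destroy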

Summary 

In this post we have been introduced to Infrastructure as code, the process of creating and updating cloud-based infrastructure items using code.  

We learned of the advantages of this approach, such as committing our infrastructure code to a source control repository such as GitHub, so code changes can be peer reviewed before deploying changes into production. 

We looked at four popular IaC tools: AWS CloudFormation, Hashicorp's Terraform, Red Hat's Ansible, and Pulumi. We gained some practical experience with each of these technologies by using them to create a simple webserver on AWS. 

In a future post, we will explore a number of deployment tools to help us to deploy changes into production. Deployment tools such as Travis or Jenkins can help us to perform repetitive steps and allow us to work as part of a team to review changes and deploy safely. 
