Local development options and an introduction to Amazon Web Services (AWS)

This post introduces infrastructure development both in a local environment, such as a laptop or desktop, and in a public cloud environment. Local development is an ideal place to learn, usually without the additional costs of developing in the cloud.

We will learn about some tools for local and cloud based infrastructure development. These tools will help you later in this series, as well as provide a number of options for your own development needs.

We’re going to cover the following main topics:

  • Local development using virtual machines
  • Local development using Docker containers
  • An introduction to AWS
  • Creating a web server using EC2 instances
  • Creating a web server using Elastic Container Service (ECS)
  • Creating a web server using Elastic Kubernetes Service (EKS)

Learning infrastructure development 

Developing infrastructure on cloud service providers such as AWS can be fun. However, if you are not careful, leaving infrastructure running can become expensive.

Cloud service providers such as AWS and Google Cloud offer free tiers that are ideal for learning. Many services are free to use for a period of time, and some services are completely free to use at all times if you remain under certain usage limits.

For more information on the services that AWS and Google offer as part of their free tiers, see the AWS Free Tier page at https://aws.amazon.com/free and the Google Cloud free tier page at https://cloud.google.com/free.

To help avoid some of those financial costs, we will look at some cost saving steps later in this series.

In the meantime, there are several ways to simulate a production environment and develop locally on your laptop or PC.

Local development

Local installation is something you're probably already familiar with: installing or writing software directly on your PC or laptop. Installing software locally can be a great way to learn and gain experience with different software languages and tools.

Local installation of a language such as PHP, Python or Go can be convenient. It allows you to write code in your editor and compile or run your software locally, giving a quick development cycle and an easier learning curve.

Local development can also lead to challenges when working as part of a team or on multiple software projects. Your local setup might be different from your coworkers', potentially slowing you down as you work together.

As you progress, you may work with many different pieces of software or tools, often with different versions of those tools. You might like to have multiple versions of a programming language installed for different projects. This is where virtual machines or containers become very useful.

Local virtual machines

A number of tools exist, including VMware Workstation, VirtualBox and Parallels, which allow you to create a virtual copy of an operating system within your existing local machine. The specific tool you use will depend on your local machine and the items you are developing.

If you are running on a Mac, you can create a Windows or Linux based virtual machine (VM) for example.

Creating a virtual machine comes with some overhead, such as using up additional disk space on your machine as well as both CPU and memory. However, it will provide you with a consistent and repeatable environment to develop and test on.

Virtual machines can be stopped when not in use and started up again when you need them. They can also be copied and shared with coworkers so everyone is using the same environment when working within a team.

Since VMs can be copied, they can provide an ideal place to learn how to work within a new operating system. You can create a new Virtual Machine, set up an operating system you want to use and create a copy of it. While you learn the new OS, if you break it in any way, you can quickly revert to the earlier copy and keep going.

Tools such as Vagrant, which runs on Windows, Mac and Linux machines, allow you to describe your chosen virtual machine through code. Pre-made 'boxes' can be downloaded and then started up on your device, so you don't need to install and configure the operating system within the VM yourself.

Once you have Vagrant installed, you can use a simple Vagrantfile like the example below. Running vagrant up in the same folder as the Vagrantfile will create an Ubuntu 18.04 (Bionic) 64-bit virtual machine locally within minutes.

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.hostname = "bionic.box"
  # Example private network IP; any unused private address will do.
  config.vm.network :private_network, ip: "192.168.33.10"
end
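Vagrant can also provision software automatically the first time the VM boots. The snippet below is a minimal sketch using Vagrant's shell provisioner; the apache2 package here is just an example of something you might install.

```ruby
# Add inside the Vagrant.configure("2") block: a shell provisioner
# that runs once when the VM is first created. The apache2 package
# is only an example of something you might install.
config.vm.provision "shell", inline: <<-SHELL
  apt-get update
  apt-get install -y apache2
SHELL
```

Running vagrant up (or vagrant provision on an existing VM) applies the script inside the guest.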


Virtual machines run a full operating system within the virtual environment, so this can take a toll on your local machine. Depending on the power of your device you might be able to run 2 or 3 virtual machines at the same time before your device slows down too much to be usable. This is when Containers can come in handy.

Local containers

Containers are much smaller than virtual machines, in both disk space usage and also in terms of memory and CPU usage when they are running.

Containers do not include a full copy of an operating system; they contain only the application and the dependencies it needs to run.

Since containers use fewer resources than virtual machines, you can run many more containers than VMs at once on your device.

Tools such as Docker, rkt and LXC allow you to create custom containers as well as run them on your local machine.

Once you have Docker installed, you can use a simple Dockerfile like the example below. It downloads an Ubuntu image, installs the Apache web server and runs it in the foreground; mapping a local port such as 8080 to the container's port 80 when you run it will make the default web page available in your browser.

FROM ubuntu:18.04

RUN apt-get update && apt-get install -y apache2

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_RUN_DIR /var/www/html/

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

The above example shows a typical Dockerfile. Later in this post we will build our own Dockerfile to run our own application and cover the steps used within the file in detail.

Introduction to AWS 

In this section we will cover some of the most fundamental resources available within AWS that you will need to be familiar with when you are building any infrastructure on AWS.

The AWS Management Console

The AWS Management Console, accessible at https://aws.amazon.com/console/, is the main user interface for AWS services. Once you log in, you'll see icons representing all of the many services that AWS offers.

If you don't already have an account, you can sign up at no cost. Many AWS services have a 'free tier', meaning they are completely free to use for your first year within certain limits. There are also services that can be used at no cost beyond the first year if you remain under limits specific to that service. One example is AWS Simple Notification Service (SNS), a messaging service that can connect events in one AWS service to another and can even send SMS messages. You can use up to 1 million publishes per month at no cost.

AWS Simple Email Service (SES) is another great service, used to allow applications to send emails. You can send over 60,000 emails per month from SES at no cost.

These free services make AWS an ideal starting point for learning about infrastructure. You can see some detailed information on the free services that AWS offer here on their website https://aws.amazon.com/free

In this post, we will explore Elastic Compute Cloud (EC2), Elastic Container Service (ECS) and Elastic Kubernetes Service (EKS). Each of these services can be used to host software you have developed and are part of the ‘Compute’ section of Amazon Web Services. These services can also work with other AWS services to build and deploy changes to your infrastructure and software.

Virtual Private Cloud (VPC)

A Virtual Private Cloud (VPC) is a service provided by AWS to contain and control networking in your AWS infrastructure.

Within your AWS account you can have one or more VPCs; when you sign up to AWS for the first time, AWS will create a default VPC for you.

A VPC allows you to control traffic and security within your network of services.

When creating a VPC you can define a custom Classless Inter-Domain Routing (CIDR) block. This means you can define a range of IP addresses for your AWS network that items within the VPC will use. For example, a range such as 10.0.0.0/24 will mean that any EC2 instances you create will have an internal IP address of 10.0.0.x. A VPC also contains other networking resources, such as Subnets, Route tables and Security groups to manage web traffic. We will cover each of those items now.
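As an aside, the size of a CIDR block follows from its prefix length: the number of addresses is 2 to the power of (32 minus the prefix). You can sanity-check this with shell arithmetic:

```shell
# Addresses in a CIDR block = 2^(32 - prefix length),
# computed here with a bit shift.
echo $(( 1 << (32 - 24) ))   # /24 -> 256 addresses
echo $(( 1 << (32 - 16) ))   # /16 -> 65536 addresses
```

So a /24 gives a small 10.0.0.x range, while a /16 covers the whole 10.0.x.x space.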


You can use Subnets to separate these IP ranges further. If you plan on deploying a large number of different items such as EC2 or RDS instances, you can use Subnets to define different ranges for each service to use. EC2 instances could use a range of 10.0.0.x while your RDS instances use a separate range of 10.0.1.x. This allows you to separate your instances and avoid potential IP address conflicts.

Subnets can also be public facing or private. Public subnets allow items such as EC2 instances to be publicly accessible. A good example of an EC2 instance in a public subnet is a standard web server, where you want your users to be able to access the server. A subnet can be made public facing using an Internet Gateway, which we will cover later in this post.

With a private subnet, you can keep instances within the subnet from being contacted by the outside world. This is ideal for keeping sensitive items such as databases within RDS private and only accessible from within the network. A private subnet is a subnet with no Internet Gateway attached. If you need to allow your private subnet to connect out to the internet, you can use a NAT Gateway.

You will need to allow your EC2 instances, which are in a different subnet, to connect to your RDS instances. You can do this using Route Tables, which we will cover next.

Route Tables

While subnets define a range of IP addresses used by your services, route tables allow your subnets to talk to each other, or not, depending on your infrastructure needs.

A route table can have many subnets attached, though a subnet can be connected to only one route table.

Internet Gateways

An Internet Gateway connected to a subnet allows the subnet to become a public subnet. This allows items within the subnet to be accessible from the broader internet.

Security Groups

With a VPC in place, along with one or more subnets connected to a route table and an Internet Gateway, your EC2 instances are now exposed to the Internet. A Security Group allows you to control the specific traffic that is allowed to contact your EC2 instance.

For example, creating a rule in your security group to allow IP addresses in the range 0.0.0.0/0 on port 80 will allow all users on the internet to contact your EC2 instances on port 80. This is a common rule, as it allows public internet traffic to reach a web server such as Apache or Nginx running on your instance.

Other common Security Group rules include opening port 22 to your home or office IP address. This allows you to SSH into your Linux server.

Windows users might like to open port 3389 to their home or office IP address to allow an RDP connection to the server.

EC2 instances

EC2 is probably the service you will use the most on AWS. EC2 instances are virtual machines you can start and stop at any time. They come in many different types, from Linux to Windows based virtual machines, with anywhere from 1 CPU up to over 100 CPUs. As we look at other AWS services later on, such as ECS or EKS, you will find that those services rely on EC2 instances.

EC2 instances are where you will deploy your web application if you are developing one. New EC2 instances can be launched from within the AWS console. You will be asked a number of questions including the instance type, size, any additional volumes to attach to the instance, their disk size, the security group(s) to apply to the instance, and a Key Pair.

Once you have made your decisions, the EC2 instance will usually launch within a minute or two.

Once created, EC2 instances can be stopped, copied or scaled in size. A copy of an EC2 instance is called an Amazon Machine Image (AMI) and AMIs can be used to back up a running EC2 instance or copy the image to other AWS regions or accounts to make more identical EC2 instances.

EC2 Instance types

AWS provides a number of different instance types, each designed for different needs.

  • Instance types T2, T3, M4 and M5 are general purpose instances. They contain a balance of both CPU and available memory, and are useful for most workloads.
  • Instance types C4 and C5 are compute optimized (CPU) instances. They usually contain more CPUs with moderate increases in memory at each size.
  • Instance types R4, R5 and X1 are memory optimized instances. The amount of available memory roughly doubles at each instance size while the CPU increases moderately.
  • Instance types P2, P3, G4 and G5 are instances with additional GPUs. These instances are ideal for workloads such as machine learning and graphics processing.
  • Instance types in the I, D and H families are storage optimized instances with plenty of fast local storage.

Each of these instance types comes with different underlying hardware and per-hour pricing, as well as different availability and pricing per geographical AWS region.

If you are starting a new EC2 instance for the first time and are unsure which type to use, one of the T3 types can be a good start. They are low powered, low cost instance types, and you can grow from there if you need more resources.

For more information on current instance types, see https://aws.amazon.com/ec2/instance-types/

AWS also provides a Simple Monthly Calculator to help you estimate the costs of various AWS services before you create them: https://calculator.s3.amazonaws.com/index.html

Key Pairs

When you are creating a new EC2 instance, as well as the additional questions mentioned earlier, you will also be asked to create or use an existing Key Pair.

A Key Pair is in the form of a file with a .pem extension. Pem stands for Privacy Enhanced Mail (PEM), a format originally intended for use with email.

This .pem file is the private key half of a public/private key pair; AWS keeps the matching public key. AWS can create a key pair for you, or you can use the Key Pairs page within the EC2 section of the AWS Console to import your own existing public key. A key pair can be used when creating any new EC2 instances in your account.

Once you have created an EC2 instance with a .pem key, don’t lose that file as you will need it to connect to the instance later.
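To see the two halves of a key pair in action, you can generate a throwaway pair locally with ssh-keygen (the file name demo_key is arbitrary). This mirrors what AWS does when it keeps the public key and hands you the .pem private key:

```shell
# Generate a throwaway RSA key pair with no passphrase.
ssh-keygen -t rsa -b 2048 -f demo_key -N "" -q
# Derive the public half from the private key file, as a server would.
pubkey=$(ssh-keygen -y -f demo_key)
echo "${pubkey%% *}"   # prints the key type: ssh-rsa
# Clean up the demo files.
rm -f demo_key demo_key.pub
```

Anyone holding only the public half can verify you, but cannot reconstruct the private key, which is why losing the .pem file locks you out.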

Spot Instances

Spot instances are just like any other EC2 instances, except their price is much lower and depends on AWS market demand and spare capacity.

Spot instances cost considerably less than standard EC2 instances; however, they can be short lived and might terminate at any time, with only a two minute warning.

When creating a new EC2 instance, you can choose to make it a Spot instance. The console will show you the current price and you can then enter the amount you are willing to pay per hour. As long as your bid is the same as or higher than the current Spot price, your instance will start up and remain available to use. As the AWS market for availability changes over time, the price of the instance may increase. If it goes above your bid price, your instance will terminate.

Depending on the instance type, your bid price and the availability of the instance type, spot instances can persist for weeks or months without termination.

Reserved Instances

With Reserved Instances (RIs), you can create EC2 instances as normal, or work with EC2 instances that are already running.

Reserved Instances are a commitment to run one or more EC2 instances for a 1 or 3 year period in return for a lower hourly cost for those instances.

You don’t need to make any changes to an EC2 instance for it to become a Reserved Instance. The Reserved Instance is a separate process that works alongside EC2 instances.

If you have a number of EC2 instances running, you can purchase Reserved Instances for those EC2 instances. The Reserved Instances must match the attributes of the running EC2 instances to cover them effectively.

For example, if you have an M5.large EC2 instance running, you can then purchase the matching M5.large Reserved Instance. This will lower your hourly running cost for that instance.

When creating a new Reserved Instance, you have a number of options. You can choose to reserve for a 1 or 3 year period. A 3 year RI will be cheaper to run than a 1 year RI.

The thing to watch out for with Reserved Instances is matching. If you set up an EC2 instance and a Reserved Instance that match, you will save money. However, if you change your EC2 instance over time so that the EC2 instance and RI no longer match, you will end up paying for both: the normal EC2 instance cost, plus the cost of the instance type described in the RI.

If the number of EC2 instances in your account will stay the same over a long period of time, without any change to their size, then RIs can be very cost effective.

The 'My Billing Dashboard' section of the AWS console can help you see how many of your EC2 instances are covered by Reserved Instances.

Creating a basic web server on EC2

With the information we have learned about AWS and EC2 above, we can now go ahead and create a web server hosting a ‘Hello World’ page on a new EC2 instance.

The following steps are all possible using the AWS console; however, the console interface changes over time and the steps would quickly become outdated.

Instead, we will use the AWS Command Line Interface (CLI) in our terminal to create an EC2 instance and its dependencies in a few steps.

Installing the AWS CLI

The AWS CLI is supported on Windows, Mac and Linux machines. A list of instructions for each one is available on the AWS website at: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html

The examples in this section assume you are working from a Linux or Mac machine.

Installing JQ

jq is a command line tool for parsing JSON content. When using the AWS CLI, JSON is one of the supported output formats. That output can often be verbose, and jq is an ideal tool to help parse those JSON responses.

jq is supported on Windows, Mac and Linux machines. Installation instructions for each one are available on the jq website at: https://stedolan.github.io/jq/download/
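As a quick illustration, here jq extracts a VPC ID from a JSON document shaped like the AWS CLI's describe-vpcs output (the VPC ID below is a made-up example):

```shell
# A JSON response similar in shape to 'aws ec2 describe-vpcs' output.
response='{"Vpcs":[{"VpcId":"vpc-0abc123","CidrBlock":"10.0.0.0/16"}]}'
# -r prints the raw string without surrounding quotes.
echo "$response" | jq -r '.Vpcs[0].VpcId'   # prints: vpc-0abc123
```

The same filter syntax works when you pipe a live AWS CLI command into jq.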

Creating our webserver using the AWS CLI

The following commands use the AWS CLI to create the resources we will need on AWS to start an EC2 instance and upload some files to it, to act as a web server.

The following steps assume there is an existing VPC and at least one subnet, which are available by default in a new AWS account:

Use your terminal to perform the following:

  • First, create a new file called index.html with the content of ‘Hello World!’

echo "Hello World!" > index.html

  • Next, use the following command to determine your current public IP address and take note of it. You will need it later to tell the security group to allow you to access the server.

curl https://www.canihazip.com/s

  • Use the following command to list the ID of one or more VPCs in your account and take note of the output:

aws ec2 describe-vpcs --region us-east-1 --query 'Vpcs[*].VpcId'

  • Use the following command to list the subnet IDs available in the VPC chosen from the previous step:

aws ec2 describe-subnets --region us-east-1 --query 'Subnets[*].SubnetId' --filters "Name=vpc-id,Values=YOUR_VPC_ID"

  • Create a new security group that will be used by our EC2 instance:

aws ec2 create-security-group --region us-east-1 --group-name example-security-group --description "For an example EC2 instance"

  • Add a rule to the security group to allow public traffic to access the default Apache webserver port 80:

aws ec2 authorize-security-group-ingress --region us-east-1 --group-name example-security-group --to-port 80 --ip-protocol tcp --cidr-ip 0.0.0.0/0 --from-port 80

  • Add another rule to allow your local machine's IP address to connect to the EC2 instance later via SSH:

aws ec2 authorize-security-group-ingress --region us-east-1 --group-name example-security-group --to-port 22 --ip-protocol tcp --cidr-ip YOUR_IP_ADDRESS_HERE/32 --from-port 22

  • Query the security group that was created in the earlier step to get its security group ID and note its output:

aws ec2 describe-security-groups --region us-east-1 --group-names example-security-group --query 'SecurityGroups[*].[GroupId]' --output text

  • Create a new Key Pair in your AWS Account and save a local copy of it too as a file called example.pem:

aws ec2 create-key-pair --region us-east-1 --key-name example --query 'KeyMaterial' --output text > example.pem

  • Create the new EC2 instance based on an Ubuntu 18.04 LTS image. Substitute the security group ID and subnet ID from the earlier steps:

aws ec2 run-instances --region us-east-1 --image-id ami-04b9e92b5572fa0d1 --count 1 --instance-type t2.nano --key-name example --security-group-ids SECURITY_GROUP_ID_HERE --subnet-id SUBNET_ID_HERE --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=example}]'

  • Query the EC2 instance we just created to get its Public DNS name so we can connect to it later:

aws ec2 describe-instances --filter "Name=tag:Name,Values=example" --region us-east-1 --query 'Reservations[].Instances[].[PublicDnsName]' --output text | head -2 | tail -1
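The head -2 | tail -1 pipeline at the end of that command simply selects the second line of the output. For example:

```shell
# head -2 keeps the first two lines, tail -1 keeps the last of those,
# so the pipeline prints line two.
printf 'one\ntwo\nthree\n' | head -2 | tail -1   # prints: two
```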

  • Set owner-only read and write permissions on the key pair file we created earlier:

chmod 600 example.pem

  • Use SCP to copy a local index.html page to the home folder on the new EC2 instance, substitute the Public DNS address from the earlier step:

scp -i example.pem index.html ubuntu@PUBLIC_DNS_HERE:/home/ubuntu

  • Use SSH to connect to the server and perform a few steps: run apt update to update the package lists, install the apache2 web server, and move the index.html file to the web server root folder at /var/www/html/:

ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ubuntu@PUBLIC_DNS_HERE -i example.pem "sudo apt update && sudo apt install apache2 -y && sudo mv /home/ubuntu/index.html /var/www/html/index.html"

Now, open the public DNS name from the earlier step in your web browser and you should see the contents of your index.html file.

Congratulations, you have created your first web server, running on an EC2 instance in a VPC on AWS!

Removing your Webserver

The items we created earlier, such as the EC2 instance, will cost you money if you leave them running. The following steps will help you delete them to clean up.

  • List EC2 instances in your account and note the Instance ID:

aws ec2 describe-instances --filter "Name=tag:Name,Values=example" --region us-east-1 --query 'Reservations[].Instances[].[InstanceId]' --output text

  • Delete the EC2 instance based on its instance ID from the earlier step:

aws ec2 terminate-instances --region us-east-1 --instance-ids INSTANCE_ID_HERE

  • Delete the security group we created earlier. This also removes the two rules we added. Note that you may need to wait for the instance to finish terminating before the security group can be deleted:

aws ec2 delete-security-group --region us-east-1 --group-name example-security-group

  • Delete the key pair we created earlier from the AWS Account:

aws ec2 delete-key-pair --region us-east-1 --key-name example

  • Remove the local copy of the key pair:

rm example.pem

The full list of steps combined in one file is visible here on Github https://gist.github.com/gordonmurray/4e6541f689205db3eb9bddc17a95fb1c

Using the AWS CLI is very powerful; however, if you try to run some of those commands again you will receive errors, as the items already exist. Later in this series, we will look at provisioning tools such as Ansible and Terraform. These tools can also create AWS items such as EC2 instances, but when run multiple times they will either create the item or confirm it already exists, rather than causing an error.

In the following sections we will look at a number of additional services such as AWS ECS and AWS EKS which use EC2 instances, as examples of AWS services that can be used to run our web services.

Introduction to AWS ECS 

AWS ECS stands for Elastic Container Service. It is a service AWS provides for running applications within containers, instead of apps deployed directly to EC2 instances.

With ECS, you do not need to worry about the underlying EC2 instances directly. The ECS service will deploy your tasks to one or more available EC2 instances to keep your service running.

If you are starting out with deploying apps within containers, ECS can be a great service to use. Kubernetes is another popular container orchestration tool that we will cover later in this series.

ECS comes with some of its own terminology that we’ll need to know to continue:

  • Cluster – A group of EC2 instances. When an EC2 instance launches, a piece of software called the ecs-agent runs on the server and registers the new instance with an ECS cluster.
  • Task Definition – Information that describes how a Docker container should launch. It contains settings such as the exposed port, the Docker image, CPU and memory requirements, the command to run, and environment variables.
  • Task – A running container with the settings defined in the Task Definition; in effect, a running instance of a Task Definition.
  • Service – Defines a Task that needs to be kept running.

For more information, take a look at AWS’s ECS site at https://aws.amazon.com/ecs/

Container registry

When you have created one or more Docker Containers to run your different applications, you need somewhere to store those Container images.

In the same way that a service like Github can store your code so you can collaborate with a team, container image registries provide a similar service for storing and sharing your container images.

Elastic Container Registry (ECR) is a Docker container registry provided by AWS to store and share your container images. For more information on ECR, see https://aws.amazon.com/ecr/. Docker Hub (https://hub.docker.com/) is another popular Docker container image repository.

Building a Docker Image

Before you begin, you will need a Dockerhub account, https://hub.docker.com/ and you will also need Docker installed on your machine.

To begin using ECS, we will need to create an application that runs within a Container. Once we have created this application, we need to build an image and push it to Dockerhub, so that the ECS service can contact Dockerhub to pull the image to run it in the cluster.

To create a Docker image, we use a file called a Dockerfile. Within this file we can describe a base image to build on top of, then add our own files and build the image.

  • First, let's create a simple Hello World PHP page that the Dockerfile will copy into the image in the next step. Create a file called index.php in a folder called src:

<?php

echo "Hello World from Docker!";

  • Then, create a file called Dockerfile in a new folder with the following content:

FROM php:7.2-apache

COPY src/ /var/www/html

EXPOSE 80

RUN usermod -u 1000 www-data
RUN chown -R www-data:www-data /var/www/html

Let's cover each of the commands used in this Dockerfile to explain their role in creating the image:

The FROM command tells Docker to build an image starting from the image called php:7.2-apache. This is a pre-built image with PHP 7.2 and Apache installed. You can start from any image you like and install new items on top of it; this particular image helps keep the Dockerfile small for demo purposes.

The COPY command tells Docker to copy all contents of the src/ folder on your local machine, which so far is just the index.php file, into the /var/www/html/ folder in the image.

The EXPOSE command documents that the container listens on port 80. When we run the image we will map port 8080 on our machine to port 80 in the container.

The usermod command changes the user ID of the www-data user to 1000. Regular user accounts begin at UID 1000; UIDs below 1000 are reserved for system accounts.
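You can see this UID convention on any Linux machine by listing the accounts in /etc/passwd; a quick sketch:

```shell
# List regular user accounts (UID >= 1000) from /etc/passwd.
# System accounts such as daemons sit below UID 1000, and 65534
# is the special "nobody" user, so we exclude it too.
awk -F: '$3 >= 1000 && $3 < 65534 { print $1 " uid=" $3 }' /etc/passwd
```

Mapping www-data to UID 1000 makes files it writes in a mounted volume owned by the typical first local user on the host.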

The chown command changes /var/www/html to be owned by the www-data user and group, so that Apache has permission to manage files in that folder.

  • You can build an image from this Dockerfile using the following command:

sudo docker build --rm -f "Dockerfile" -t example:latest .

This command calls docker build to create a new image from the Dockerfile. It is called ‘example’ and tagged with ‘latest’. Tagging allows you to give your images different names, often used for versioning your images.

  • You can see a list of your images using the command:

sudo docker images

This will show you a list of any images you have built. These are image files and don’t represent a running app yet. To run your image, use the following:

sudo docker run --name=example -it --rm -p 8080:80 example:latest

  • You can confirm your container is running using:

sudo docker ps

Open http://localhost:8080 in your web browser and you should see the content from the src/index.php file!

If you wanted to develop this app further, you would need to update your files in the src/ folder, build the image again using the docker build command, then run it again using the docker run command. For ongoing local development this can be a great way to work, though building and running repeatedly can be time consuming. Instead, it is possible to 'mount' your local src/ folder into an already running container; any time you change a file on your local machine, it will be available within the container and visible in your browser straight away.

Now that we have a small app built as an image, our next step is to push this image to Dockerhub so that later on ECS can pull it and run it within the ECS cluster.

  • Once you have created your Dockerhub account, you will need to log in to push your images. You will also need to create a Personal Access Token to use as your password when logging in. You can do this by logging in to Dockerhub, going to your Profile and opening the Security section.

docker login --username=YOUR_DOCKERHUB_USERNAME

  • Next you will need to tag your image for Dockerhub:

sudo docker tag example {YOUR_DOCKERHUB_USERNAME}/example

  • Finally, push your image to Dockerhub using the following:

sudo docker push {YOUR_DOCKERHUB_USERNAME}/example

Once pushed, you should be able to see your saved Image on Dockerhub at https://hub.docker.com/

Our app is now an image located at {YOUR_DOCKERHUB_USERNAME}/example; we will use this address in the next section when creating an ECS cluster and deploying our app to it.

Creating a basic web service on ECS

The following steps assume you have an AWS account with the AWS CLI installed locally. We will perform several steps to create an ECS cluster.

  • Use the following command to list the ID of one or more VPCs in your account and take note of the output:

aws ec2 describe-vpcs --region us-east-1 --query 'Vpcs[*].VpcId'

  • Use the following command to list the subnet IDs available in the VPC chosen from the previous step

aws ec2 describe-subnets --region us-east-1 --query 'Subnets[*].SubnetId' --filters "Name=vpc-id,Values=YOUR_VPC_ID"

  • Create a key pair for accessing the EC2 instances.

aws ec2 create-key-pair --region us-east-1 --key-name example --query 'KeyMaterial' --output text > example.pem

  • Next, create two security groups. One will be for the EC2 instances and the other for the Application Load Balancer (ALB):

aws ec2 create-security-group --region us-east-1 --group-name instances-sg --description "For EC2 instances"

aws ec2 create-security-group --region us-east-1 --group-name alb-sg --description "For ALBs"

  • When the security groups are created, each command will output the new group's ID in the format sg-xxxxxxxx, where the x's are unique to that security group. With the security groups created, we will add a few rules: one to allow the public in on port 80 for web access, and one to allow the ALB and the EC2 instances to communicate with one another on any TCP port from 1 to 65535.

aws ec2 authorize-security-group-ingress --region us-east-1 --group-name instances-sg --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --region us-east-1 --group-name alb-sg --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-name instances-sg --protocol tcp --port 1-65535 --source-group alb-sg

  • The next step is to create the Application Load Balancer. An application load balancer is able to receive web traffic and send it to different targets, such as a target group of EC2 instances that will be used by our ECS cluster. You will need to fill in the ALB security group ID from the earlier step. You will also need to provide at least two subnet IDs; use two subnets from the earlier step that listed all available subnets.

aws elbv2 create-load-balancer --region us-east-1 --name example-load-balancer --security-groups {ALB_SG_ID} --subnets {SUBNET_ID_1} {SUBNET_ID_2}

  • Now that the load balancer has been created, we will need to know its Amazon Resource Name (ARN). This is a name given to AWS Resources. Store this ARN for later.

aws elbv2 describe-load-balancers | grep LoadBalancerArn
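Rather than reading the grep output by eye, you can capture the ARN into a shell variable for reuse in later commands. A minimal sketch, using an abridged, hypothetical sample of the describe-load-balancers output:

```shell
# Hypothetical, abridged output of 'aws elbv2 describe-load-balancers'
json='{"LoadBalancers":[{"LoadBalancerArn":"arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example-load-balancer/abc123"}]}'

# Extract the ARN value and store it for later commands
LB_ARN=$(echo "$json" | grep -o '"LoadBalancerArn":"[^"]*"' | cut -d'"' -f4)
echo "$LB_ARN"
```

In practice you would pipe the real aws command output into the same extraction, or use the CLI's own --query option to return the ARN directly.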

  • Next, we will create a Target Group. A target group is usually a group of one or more EC2 instances. In this case, we will leave the target group empty and ECS will populate it later when we add a service.

aws elbv2 create-target-group --name example-targets --protocol HTTP --port 80 --target-type instance --vpc-id {VPC_ID}

  • Just like with the load balancer, now that the target group has been created, we need to query it to get its ARN. Save this ARN for later also.

aws elbv2 describe-target-groups | grep TargetGroupArn

  • Next we will create a listener on the load balancer. A listener tells the load balancer to listen for traffic on one or more ports and send that traffic to a target group. You will need to substitute in your load balancer ARN and your Target group ARN from earlier.

aws elbv2 create-listener --load-balancer-arn {LB_ARN} --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn={TG_ARN}

  • Now we're getting to the interesting steps. Next, we will create the cluster, called 'example':

aws ecs create-cluster --cluster-name example

  • Next we will create a file to tell an EC2 instance the name of our ECS cluster so it can join the cluster when it starts up. Create a file called data.txt with the following content:


echo ECS_CLUSTER=example > /etc/ecs/ecs.config
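The file can be created from the shell in one step. A sketch follows; the shebang line is a common convention so that cloud-init executes the user data as a script:

```shell
# Write the user data file; the instance runs this at first boot
# so the ECS agent joins the 'example' cluster
cat > data.txt <<'EOF'
#!/bin/bash
echo ECS_CLUSTER=example > /etc/ecs/ecs.config
EOF

cat data.txt
```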

With the cluster created, we will launch one EC2 instance to join it. If you'd like to start more, change the --count 1 value to a number you are happy with.

This is simply an EC2 instance that will become part of the ECS cluster. It uses an ECS-optimized AMI, which has Docker and the ECS agent running on it. We pass some user data from the data.txt file to the instance; user data is a way to run commands on an EC2 instance when it starts up for the first time.

You will need to substitute in a subnet ID, the instance security group ID we created earlier, and the ARN of an ecsInstanceRole instance profile from your own AWS account.

aws ec2 run-instances --image-id ami-097e3d1cdb541f43e --count 1 --instance-type t2.micro --key-name example --subnet-id {A_SUBNET_ID} --security-group-ids {INSTANCE_SG} --iam-instance-profile Arn=arn:aws:iam::{YOUR_AWS_ACCOUNT_ID}:instance-profile/ecsInstanceRole --user-data file://data.txt

Our EC2 instance will take a few seconds to start up. Next we will create a Task Definition. This describes how a Docker container should launch; it contains settings like ports, the Docker image, CPU, memory, the command to run, and environment variables.

Create a file called definition.json with the content below. This is where we tell ECS to use the Docker image we created earlier. You will need to substitute in your Dockerhub username.


{
  "family": "hello-world",
  "containerDefinitions": [
    {
      "name": "php-hello-world",
      "image": "{YOUR_DOCKERHUB_USERNAME}/example:latest",
      "cpu": 100,
      "command": [],
      "memory": 100,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 8080,
          "protocol": "tcp"
        }
      ]
    }
  ]
}

  • Register the task definition

aws ecs register-task-definition --cli-input-json file://definition.json
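A malformed definition.json is a common cause of register-task-definition errors. One way to sanity-check the file locally first is with Python's built-in JSON tool; a sketch using a minimal, hypothetical definition:

```shell
# Write a minimal, hypothetical task definition for illustration
cat > definition.json <<'EOF'
{"family": "hello-world", "containerDefinitions": []}
EOF

# python3 -m json.tool exits non-zero if the JSON is malformed
if python3 -m json.tool definition.json > /dev/null; then
  echo "definition.json is valid JSON"
fi
```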

Next we will create a Service. A service is responsible for maintaining the tasks we have described in the task definition.

Create a file called service.json with the following content. You will need to substitute in the target group ARN from earlier.

{
  "cluster": "example",
  "serviceName": "example-service",
  "taskDefinition": "hello-world",
  "loadBalancers": [
    {
      "targetGroupArn": "{YOUR_TARGET_GROUP_ARN}",
      "containerName": "php-hello-world",
      "containerPort": 8080
    }
  ],
  "desiredCount": 1,
  "role": "ecsServiceRole"
}

Next we will use the AWS CLI to create this new service using the file we just created.

aws ecs create-service --cli-input-json file://service.json

We are nearly there. Earlier we created a load balancer, which will receive the traffic and pass it on to the target group. The target group has been updated by the service we created in the last step. Our next step is to query the public DNS name of the load balancer so that we can access it. You will need to substitute the load balancer ARN from earlier.

aws elbv2 describe-load-balancers --load-balancer-arns {LB_ARN} | grep DNSName

Open the DNSName that is returned in your web browser to see your running container output!

The above steps are available in a single script here: https://gist.github.com/gordonmurray/259eb3c52e66188ea4b0e3b420a6ccd8

Removing your ECS cluster

Keeping a cluster running can add up in cost if you have several EC2 instances and load balancers running. The following steps will help you to remove the resources created above. Later in the series, we will look at ways to reduce cluster costs with Spot instances.

  • Get the instance ID of the EC2 instance to delete it in the next step

aws ec2 describe-instances --filter "Name=tag:Name,Values=example" --region us-east-1 --query 'Reservations[].Instances[].[InstanceId]' --output text

  • Delete the EC2 instance

aws ec2 terminate-instances --region us-east-1 --instance-ids ${INSTANCE_ID}

  • Get the Amazon Resource Name (ARN) of the load balancer

aws elbv2 describe-load-balancers --region us-east-1 --names "example-load-balancer" --query 'LoadBalancers[*].[LoadBalancerArn]' --output text

  • Get the ARN of the listener on the load balancer

aws elbv2 describe-listeners --load-balancer-arn ${ALB_ARN} --query 'Listeners[*].[ListenerArn]' --region us-east-1 --output text

  • Delete the Listener based on its ARN

aws elbv2 delete-listener --listener-arn ${ALB_LISTENER_ARN} --region us-east-1

  • Delete the Load balancer based on its ARN

aws elbv2 delete-load-balancer --load-balancer-arn ${ALB_ARN} --region us-east-1

  • Get the ARN of the Target group

aws elbv2 describe-target-groups --region us-east-1 --names "example-targets" --query 'TargetGroups[*].[TargetGroupArn]' --output text

  • Delete the target group based on its ARN

aws elbv2 delete-target-group --target-group-arn ${TG_ARN}

  • Delete the security groups

aws ec2 delete-security-group --region us-east-1 --group-name instances-sg

aws ec2 delete-security-group --region us-east-1 --group-name alb-sg

  • Scale the running service down to 0 so it can be deleted

aws ecs update-service --cluster example --service example-service --desired-count 0 --region us-east-1

  • Delete the ECS service

aws ecs delete-service --cluster example --service example-service

  • Delete the ECS cluster

aws ecs delete-cluster --cluster example --region us-east-1

  • Delete the key pair and the local pem key file

aws ec2 delete-key-pair --region us-east-1 --key-name example

rm example.pem

The above steps are available in a cleanup script here: https://gist.github.com/gordonmurray/259eb3c52e66188ea4b0e3b420a6ccd8

In this section we learned how to build and use Docker images to package our web application. We created a Dockerhub account and learned how to store our container images in a container registry so that those images could be used by services such as ECS or Kubernetes. We then used the AWS Command Line Interface (CLI) to create an Elastic Container Service (ECS) cluster and added a single EC2 instance so our application would have resources available to run on.

Our next step was to create a Task Definition and a Service to describe our image and its requirements, and to deploy our app to run on ECS. Once our app was running, we were able to query its DNS name and view the output in a web browser.

Our final optional step was to clean up and remove any resources we created so that we wouldn’t have any items still running afterwards.

Amazon's ECS is one of several great options for deploying container-based web applications into production. In the next section we will look at another AWS solution for deploying container-based web applications: Amazon's Elastic Kubernetes Service (EKS), a hosted version of Kubernetes.

Introduction to AWS EKS

Kubernetes is an open source container orchestration tool, originally developed by Google. It is similar to AWS ECS, which we covered earlier, in that Kubernetes can be used to run and scale Docker containers spread over a number of host instances.

AWS Elastic Kubernetes Service (EKS) is AWS’s hosted version of Kubernetes. Using a hosted version of Kubernetes such as EKS can save time and can be easier to learn when compared to setting up your own clusters manually, even if you are using Infrastructure as code tools such as Terraform.

Kubernetes has a large community and a growing number of tools. This series can't cover all aspects of it; in this section we will cover creating an EKS cluster and deploying an app to it.

Before we begin, you will need to install a number of command line tools

  • eksctl – eksctl is short for EKS Control. It was originally developed by Weaveworks and has become the official command line tool for EKS. The following page will show you how to install eksctl locally for your system: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
  • kubectl – kubectl stands for Kubernetes Control. kubectl is a powerful command line tool; if you work with Kubernetes, you will likely use it a lot. kubectl can be installed independently of eksctl, however it is better to install AWS's version of kubectl for an easier set up when working with eksctl. The following page will show you how to install kubectl: https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
  • Helm – Helm is a package manager for Kubernetes. Helm is not strictly required to work with Kubernetes, however it is a very powerful and practical package manager that helps you to package your containers, deploy to Kubernetes, and roll back to earlier versions if there are problems. The following page will show you how to install Helm: https://docs.aws.amazon.com/eks/latest/userguide/helm.html

With eksctl installed, we can now create an EKS cluster in one very easy command:

eksctl create cluster \
  --name example \
  --version 1.14 \
  --region us-east-1 \
  --nodegroup-name example-workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4

The above command will take around 10 minutes to complete. It will create a small cluster using version 1.14 of Kubernetes with 3 nodes. Node is the name Kubernetes gives to a worker machine; on EKS, nodes are EC2 instances. You can change the number of nodes if you wish to make a smaller or bigger cluster.

Once the command has completed, you can begin using kubectl to see if the cluster is ready to use:

The following command will show you how many nodes (EC2 instances) your cluster is using

kubectl get nodes
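If you want to check readiness in a script rather than by eye, you can count the nodes reporting Ready. A sketch parsing a hypothetical sample of the kubectl get nodes output:

```shell
# Hypothetical 'kubectl get nodes' output for a three-node cluster
nodes='NAME                           STATUS     ROLES    AGE   VERSION
ip-192-168-1-10.ec2.internal   Ready      <none>   5m    v1.14.7
ip-192-168-2-20.ec2.internal   Ready      <none>   5m    v1.14.7
ip-192-168-3-30.ec2.internal   NotReady   <none>   1m    v1.14.7'

# Count the rows (after the header) whose STATUS column is Ready
ready=$(echo "$nodes" | awk 'NR > 1 && $2 == "Ready" { c++ } END { print c + 0 }')
echo "Ready nodes: $ready"
```

In practice you would pipe the real kubectl command into the same awk expression.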

The following command will show any services deployed to the cluster. It won’t include a lot right now since it’s a new cluster

kubectl get services

If you’d like to know more kubectl commands, you can use kubectl help at any time to get a list of available commands. The Kubernetes site also offers a useful cheat sheet of common commands https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Helm charts

Your cluster is now up and running; the next step is to deploy some apps into it using Helm.

In the same way that we can store and share code on Github and Docker container images on Dockerhub, we also have Helm Hub for sharing Helm packages, known as Charts, located at https://hub.helm.sh/

  • To begin installing Helm charts, we first need to add a chart repository. A common one is the official Helm stable chart repository:

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

  • Once the stable repository is added, you can search it using:

helm search repo stable

  • To install a chart, first make sure your chart repositories are up to date, then install one; in this case we will install a MySQL chart

helm repo update

helm install stable/mysql --generate-name

The --generate-name flag will generate a name for the running service so you can identify it.

Once the chart is installed, it should produce some useful information on screen with connection information.

Try running kubectl get services again to see that the MySQL service is running.

Congratulations, you have just deployed your first application to a Kubernetes cluster using Helm!

Creating your first custom Helm chart

Earlier, when we created our ECS cluster, we built a simple app container image and pushed it to Dockerhub. In this section, we will use the same app and deploy it to our Kubernetes cluster.

  • Go to a new directory and use the following command to create a new Helm chart for your app

helm create php-hello-world

This will create a directory structure for you with a number of files.

  1. In the values.yaml file, update the repository to: YOUR_DOCKERHUB_USERNAME/example
  2. In Chart.yaml, change the appVersion to: appVersion: latest
  3. We can now deploy the app using the following helm command:

helm install php-hello-world ./php-hello-world --set service.type=NodePort
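Steps 1 and 2 above can also be scripted before deploying. A sketch of step 1, assuming the default layout that helm create generates (the values.yaml content below is abridged and hypothetical):

```shell
# Minimal stand-in for the values.yaml that 'helm create' generates
cat > values.yaml <<'EOF'
image:
  repository: nginx
  pullPolicy: IfNotPresent
EOF

# Step 1: point the chart at your Dockerhub image
sed -i 's|repository: .*|repository: YOUR_DOCKERHUB_USERNAME/example|' values.yaml

grep repository values.yaml
```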

When the app has been deployed, Helm will give you a couple of commands to run. These will provide you with the IP address and port where your app has been deployed, so you can visit it in your web browser.

You have now deployed your first custom app to Kubernetes using Helm!

Cleaning up

If you would like to remove your application from the cluster, you can delete the app using the following

helm delete php-hello-world

If you would like to remove your EKS cluster fully, you can run the following eksctl command.

eksctl delete cluster --name example

In this section we built on the container knowledge from the previous section. We learned to use new tools, including eksctl, kubectl, and Helm, to create a Kubernetes cluster on AWS. We learned how to use Helm to search for existing premade charts, such as MySQL, and we packaged our own container image using a custom Helm chart to deploy our application into our Kubernetes cluster.

Finally, as an optional clean up step, we used a number of commands to remove the EKS items we created in this section, so that no services are left running that could cost us money if forgotten.

Some recommended reading

  • Docker: Up & Running: Shipping Reliable Containers in Production – Book by Sean P. Kane
  • Docker Deep Dive – by Nigel Poulton
  • The Kubernetes Book – by Nigel Poulton
