An introduction to performing code, infrastructure or database deployments

In this post we will look at some of the approaches we can use to deploy code or infrastructure changes into a production environment. Regardless of the programming language you use to develop a website or application, whether HTML/CSS, NodeJS, Java or Go, there are likely some build steps you need to take to make your application ready for production. Build tools can help you to perform those steps quickly and automatically, saving you from repeating the same steps every time you want to deploy even a small change.

In a previous post, we were introduced to CloudFormation, Terraform, Ansible and Pulumi. In this post we will look at ways we can deploy those changes into production. While these examples are specific to performing infrastructure changes, the same approach can be, and often is, applied to deploying code changes into production. In our code examples we will include code changes along with infrastructure changes where possible.

Some of the tools we will cover in this post can be run directly from a terminal on your local PC or laptop. This can work well if you are the only developer working on a project. If you are working as part of a team, however, it is best to use a central or dedicated system for deploying changes so that those changes are visible to your other team members. In this post, we will introduce some possible approaches that can be used to build or deploy changes to a web application or a database in production use. We will start with using the command line for deployments and then use git pull to deploy code changes. We will also learn to use third-party tools or services to deploy changes to production and deploy a simple webserver using Travis. Towards the end of the post we will learn to replace a webserver using Packer, Ansible and Terraform, and to use Skeema to create or alter database tables.

In this post, we will cover the following topics:

  • Using the command line for deployments
  • Using Git pull to deploy code changes
  • Using third-party tools or services to deploy changes to production
  • Deploy a simple webserver using Travis
  • Replacing a webserver using Packer, Ansible and Terraform
  • Using Skeema to create or alter database tables

Using the command line for deployments

Issuing commands from your local machine or combining commands into a bash script isn’t the best process for deploying web application changes to production.

At times however, performing a deployment or rolling back a deployment to an earlier version from the command line can be the fastest approach to get some changes made to production in an emergency scenario.

When compared to logging in to a cloud provider's web console, such as the AWS Console, to perform some actions, using the AWS CLI can save a lot of time.

Incidents do happen, and if services such as GitHub are experiencing downtime, you might still need to deploy your code into production; this is where command line steps can help.
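
For example, if your site's built files are served from an S3 bucket, a single AWS CLI command can push out an emergency fix. The following is a minimal sketch; my-example-bucket and the ./build folder are placeholders for your own values:

# sync a locally built site to an S3 bucket, removing remote files that no longer exist locally
aws s3 sync ./build s3://my-example-bucket --delete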

Using Git pull to deploy code changes

If you are using services such as GitHub.com, it’s a good idea to get to know some of the most common Git commands such as git clone, git commit, git push and git pull, so that you can run those commands on remote servers or through your build tools.

There are many more Git commands and fortunately there are desktop clients such as GitHub Desktop (https://desktop.github.com/) to help you manage your files without knowing every command.

Instead of using tools such as FTP or SCP to copy files from your local machine to the destination server, one option with Git is to perform a git pull. This is the process of connecting to a remote server, installing Git, cloning the project from GitHub or Bitbucket and then performing a git pull to bring any new code changes onto the server.
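
The following is a minimal sketch of that process; the server address, repository name and destination folder are placeholders for your own values:

# connect to the remote server
ssh ubuntu@your-server-ip

# first deployment only: install Git and clone the project
sudo apt install git -y
git clone https://github.com/{YOUR_GITHUB_USERNAME}/example-project.git /var/www/html

# every deployment afterwards: pull the latest changes
cd /var/www/html
git pull origin master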

This process will work; however, it also has several disadvantages. If you have many servers, you will need to connect to each server and perform a git pull on each one. This is time-consuming and can cause conflicts if you or your team routinely SSH into servers to change files directly, as the files in GitHub will differ from the files on the server.

Using Git in this way is also a security concern. Cloning the project onto a server using Git will leave a .git folder on the server. This folder is essential for Git to work locally, though it also contains the history of all file changes in your project. If you ever committed database passwords or API keys to your code, a third party may be able to see those details in your Git history. This folder can also take up a lot of disk space on your server, depending on the size of your project.
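
You can quickly check whether a webserver is exposing its .git folder; this is a sketch, with your-domain.com as a placeholder for your own site:

# prints the HTTP status code; a 200 response suggests the .git folder is publicly downloadable
curl -s -o /dev/null -w "%{http_code}\n" https://your-domain.com/.git/config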

Using third-party tools or services to deploy changes to production

When working on small or solo projects, it can be convenient to deploy changes from your own local machine to your production servers. As a project grows, or you begin working with others, having a central place such as GitHub to store your code is essential, and having a dedicated service to deploy changes can significantly improve your development workflow.

When using services such as GitHub or Bitbucket, your code is stored in a central place called a repository. You can continue to develop locally on your own machine and commit changes often. Once you have pushed your changes, the code is available for coworkers to review or merge into the main project.

As well as helping you work with others on a project, using a central repository can trigger a number of other useful processes. Popular third-party services such as Travis can watch for changes to your repository and then trigger build steps to test your code or deploy it to production.

In this post we will try a few different methods of deploying code to production servers so you can gain a little experience with each one, to see if one of them works well for your own projects. Many third-party services such as CircleCI (https://circleci.com/) or Travis CI (https://travis-ci.com/) exist to help with software deployments.

Services such as these can connect to your GitHub or Bitbucket repository and then deploy the files that have changed to your target server(s).

These services can be great to work with. When you commit some code changes to your repository, the service can almost immediately push those changes out to your servers.

They can also provide follow up steps, such as sending you and your team an email letting everyone know that the latest work has now been deployed on the servers.

If you deploy some new code that happens to have a bug, many services also provide a great interface to roll back your changes to the previous version of your work quickly.

Depending on your software or where you are deploying to, these services can often run pre- and post-deployment commands on your server. For example, you might need to stop a currently running application, deploy your latest changes and then start or restart the application afterwards. These services can perform those steps for you every time, so you don’t have to. You can focus on writing your software and not worry about missing any routine build or restart commands when deploying changes, as a good build tool will perform those actions for you.
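
A deployment service typically runs something like the following on your server over SSH. This is only a sketch; example-app and the paths are placeholder values:

# pre-deployment: stop the running application
sudo systemctl stop example-app

# deploy the latest files
rsync -avz ./build/ /var/www/example-app/

# post-deployment: start the application again
sudo systemctl start example-app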

You could also use those pre-deployment hooks to back up your existing files or database first as a precaution before deploying changes to your system. We will cover ways to back up your files and databases in the next post.

There is often some up-front time required to set up these third party services to deploy your work but once set up, they can work very well in the background for you to build and deploy your projects.

If you need to go beyond the options supported by simpler deployment services, such as building an unsupported programming language or needing to self-host your build tools, then tools such as Travis or Jenkins might be a better fit.

Travis allows you to run a number of builds in parallel, for example testing your code on different OS versions or different programming language versions. Travis also allows you to create a .travis.yml file so that you can define your build steps and commit them to your project repository next to your application and infrastructure code.

Jenkins can be a great solution if you want to self-host your build tools in your own environment for compliance reasons. Jenkins supports a Jenkinsfile that you can commit to your source code repository just like Travis.

In our next section, we will take some existing work from an earlier post and deploy the same code using Travis.

Deploy a simple webserver using Travis

In this section, we will take our previous work of building and configuring a webserver using Ansible and connect it up with https://travis-ci.org. If you don’t have an account with https://travis-ci.org/, take a moment to create one now. There is no cost for travis-ci.org if you are developing an open source project. Alternatively, you can use https://travis-ci.com/, as this will also work with private repositories on GitHub.

You can describe the steps you would like Travis to take by creating a .travis.yml file. Like Ansible, Travis uses the YAML format for its build steps.

In this section we will create a new .travis.yml file and add the necessary steps to install Ansible and run the same commands we would use locally to run the Ansible playbook.

We will add this file to the root folder of our GitHub repository. When we make any changes to our code in the repository, Travis will be notified and start a build process following the steps in our .travis.yml file.

In the same folder where you created your Ansible playbook from our last post, create a .travis.yml file. In this file we will create a number of stages to install and deploy our changes. There are more stages available for different functionality; for full information on those stages, see the build stages documentation on the Travis website.

First we will use the os, dist, language and python keys to tell Travis the environment we want to create to run our build. In this example, we want a Linux instance using Ubuntu Bionic and Python version 3.6. Create a new file called .travis.yml and begin by adding the following lines:

os: linux
dist: bionic
language: python
python: 3.6

Next we will use the before_install key to perform an apt update to make sure we have an up-to-date instance for the install step that follows. We add the -qq parameter to keep the output minimal so the screen in the Travis UI is not filled up with apt update output.

We then write the VAULT_PASSWORD environment variable to a local file for Ansible to use in a later step when running the playbook:

before_install:
  # Scripts to run before the install stage
  - sudo apt update -qq
  - echo "$VAULT_PASSWORD" > vault_password

In this step we use the install key and a Python package manager called pip to install awscli, boto, boto3 and Ansible version 2.9. Ansible uses the AWS CLI and boto3 to communicate with AWS:

install:
  # Scripts to run at the install stage
  - pip install awscli boto boto3 'ansible==2.9'

Next we define the deploy stage. We will use two deploy steps. The first step will call our infrastructure playbook to make sure our infrastructure items are in place. The second step will run the webserver playbook to configure our server and deploy our files.

aws_ec2.yml is a dynamic inventory file used to query AWS and find the EC2 instance(s) by tag, so Ansible knows which instance(s) to configure. Each playbook uses the vault password file to decrypt the sensitive credentials.
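
Before relying on this in Travis, you can optionally check locally that the dynamic inventory finds your instances. This sketch assumes your AWS credentials are available locally:

# list the hosts that the aws_ec2.yml inventory resolves to
ansible-inventory -i aws_ec2.yml --graph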

We use the on key to tell Travis to only deploy this work if it is triggered by a commit to the master branch. This allows commits to other branches or pull requests to run without triggering a deployment, as shown in the following code block:

deploy:
  # deploy infrastructure changes
  - provider: script
    script: ansible-playbook -i aws_ec2.yml infrastructure.yml --vault-password-file=vault_password
    on:
      branch: master

  # deploy webserver configuration
  - provider: script
    script: ansible-playbook -i aws_ec2.yml webserver.yml --vault-password-file=vault_password
    on:
      branch: master

Finally, we add the after_script key to run as the last stage and delete the vault_password file, just to be safe:

after_script:
  # Scripts to run as the last stage
  - rm -f vault_password

Travis provides both online and command line options to validate your .travis.yml file; info on both options can be found at https://support.travis-ci.com/hc/en-us/articles/115002904174-Validating-travis-yml-files
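
For example, the command line option uses the travis gem, assuming you have Ruby and RubyGems installed:

gem install travis
travis lint .travis.yml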

Once you are happy with your .travis.yml file, you can commit the files to the repository to trigger a Travis build using the following steps in your command line:

git add .travis.yml
git commit -m "Adding Travis file"
git remote add origin git@github.com:{YOUR_GITHUB_USERNAME}/ansible-webserver.git
git push -u origin master

Your code will now be pushed to GitHub. Open your Travis dashboard at https://travis-ci.org/{YOUR_TRAVIS_USERNAME}/ansible-webserver and you should see your project build, performing the same steps you performed locally to create the EC2 instance and configure the server using Ansible.

You now have a GitHub repository that stores your project code. Any time you’d like to work on the project you can clone or pull the project, make some changes and then commit and push the changes back to the repository to trigger a build to further improve your infrastructure or webserver.

The above code is available here: https://github.com/gordonmurray/learn_sysops/tree/master/chapter_5/ansible

Git and GitHub are capable of much more, such as branching or forking code to allow independent development when working as part of a team, though a guide to mastering GitHub is outside the scope of this post.

GitHub has a GitHub Guides site you can follow to learn more: https://guides.github.com/

In this section, we used Travis to deploy changes to an existing server. In our next section we will use a combination of tools including Packer to create a new server image. We will use Ansible to configure this image and then we will use Terraform to put this new image into place as a working server to replace an existing server.

Replacing a webserver using Packer, Ansible and Terraform

In our earlier post, we used Ansible to create an EC2 instance first, and then took steps to configure it afterwards. In this process, the EC2 instance is usually built once then updated from then on with configuration or code changes.

In this section we will take a different approach. Instead of updating an existing server, we will create a new server image, configure it and deploy it. We will use a tool called Packer from HashiCorp, the same company behind Terraform.

To install Packer, please follow the instructions for your system here from the Packer website: https://packer.io/intro/getting-started/install.html

Servers created in this way are often called ephemeral or temporary servers. Creating servers in this way can help with auto scaling during high and low traffic. A pre-made image can be used to increase the number of active servers to handle peak web traffic and scale down off-peak to save money.

In a larger organization, this process of replacing a server can help with security measures and upgrade scheduling, since a server image can be created, configured and tested every time there is a code change and deployed into production at any time.

Packer is a tool to create Amazon Machine Images (AMIs) that AWS can understand and use to create EC2 instances. We will use Packer to start with an Ubuntu image and use Ansible to configure the server.

Once we have created an AMI, Terraform can be used to create an EC2 instance using the AMI created by Packer. Since Packer and Ansible will have configured the server already, then when Terraform creates the EC2 instance it will already be ready to serve its website with no further changes.

We will combine some Ansible and some Terraform from our earlier post. Create a new folder called packer-terraform-webserver and create the following files and folders within it:

  • /packer/server.json
  • /packer/variables.json
  • /ansible/ (copy your Ansible files from the last post into this folder; we will also add a new /ansible/server.yml playbook)
  • /terraform/ (copy your Terraform files from the previous post into this folder; we will use an updated ec2.tf file)

The first file we will create is server.json. It includes a number of parts, so let’s look at each section as we append to the file:

Create a new file located at /packer/server.json with the following content:

{
  "builders": [{
    "type": "amazon-ebs",
    "profile": "{{user `profile`}}",
    "region": "{{user `region`}}",
    "source_ami": "{{user `base_ami_id`}}",
    "instance_type": "{{user `instance_type`}}",
    "force_deregister": "true",
    "force_delete_snapshot": "true",
    "ssh_username": "ubuntu",
    "ami_name": "webserver",
    "ami_regions": ["{{user `region`}}"],
    "tags":
    {
      "Name": "webserver"
    }
  }],

This builders section provides the main information for Packer to know the kind of AWS AMI to create. It also uses a number of variable values that we will define in another file later. Note that the ami_name of webserver is what Terraform will later use to find this AMI.

Next, add the following section to the same file:

  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt update",
        "sudo apt install python3-pip -y",
        "pip install 'ansible==2.9'"
      ]
    },

This is a shell provisioner. It connects to the instance that Packer has started and runs some necessary installation steps so that the steps that follow can operate normally.

Next add the following to the same file:

    {
      "type": "ansible-local",
      "playbook_file": "../ansible/server.yml",
      "role_paths": [
        "../ansible/roles/apache",
        "../ansible/roles/php",
        "../ansible/roles/mysql",
        "../ansible/roles/deploy"
      ],
      "group_vars": "../ansible/group_vars"
    },

This is an Ansible provisioner. It lets Packer know that we want to use Ansible to configure a server and instructs Packer where to find our Ansible playbook and our Ansible roles.

Finally, in the same file, add the following section to upload some files:

    {
      "type": "file",
      "source": "../ansible/roles/apache/files/webserver.conf",
      "destination": "/home/ubuntu/webserver.conf"
    },
    {
      "type": "shell",
      "inline": [
        "sudo mv /home/ubuntu/webserver.conf /etc/apache2/sites-available/webserver.conf"
      ]
    },
    {
      "type": "file",
      "source": "../src/index.php",
      "destination": "/var/www/html/"
    }
  ]
}

This final section uses file provisioners to upload our webserver.conf and index.php files, and a shell command to move the conf file into its correct location.

Next, we will create the other files we will need. These files are shorter so we can take each one at a time instead of breaking it down in to several steps per file.

/packer/variables.json

{
  "base_ami_id": "ami-04c58523038d79132",
  "profile": "example",
  "region": "eu-west-1",
  "instance_type": "t2.nano"
}

/terraform/ec2.tf

# Get AMI
data "aws_ami" "webserver_ami" {
  most_recent = true

  filter {
    name   = "name"
    values = ["webserver*"]
  }

  owners = ["${var.aws_account_id}"]
}

# Create EC2 instances
resource "aws_instance" "webserver" {
  ami                    = "${data.aws_ami.webserver_ami.id}"
  instance_type          = "${var.default_instance_type}"
  vpc_security_group_ids = ["${aws_security_group.example.id}"]
  subnet_id              = "${aws_subnet.subnet-1a.id}"
  key_name               = "${aws_key_pair.pem-key.key_name}"

  tags = {
    Name = "webserver"
  }
}

You can validate Packer files using:

packer validate -var-file=packer/variables.json packer/server.json

Once validated, you can build the AMI with Packer using:

packer build -var-file=packer/variables.json packer/server.json

This will take a few minutes to complete. Packer will connect to your AWS account, start a small EC2 instance, and Ansible will then configure the instance.

Once it has completed, it will create an AMI in your account and terminate the EC2 instance it was using.

This AMI is then available for Terraform to find and use. The ec2.tf file uses the aws_ami data source to find the AMI based on its name, and the aws_instance resource then uses it to create the instance.

To deploy the image with Terraform use:

cd /terraform
terraform init
terraform apply

Terraform will show you the items it plans to build so you can review them before continuing. If you would like to go ahead, enter yes and Terraform will create the EC2 instance.
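
If you would like to review the planned changes without being prompted to apply them, you can run a plan on its own first:

# show the execution plan without making any changes
terraform plan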

If you would like to clean up your AWS account and remove the EC2 instance and surrounding items such as Security groups and key pairs, you can use the following command:

terraform destroy

This command will ask if you are sure that you want to continue before it removes the items from your AWS account.

You now have a process whereby Packer can create a base image, using Ansible to configure anything you need on the server and then Terraform to deploy the server.

If you make changes to Ansible so that your server is configured differently, for example if you change from Apache to Nginx, or deploy another website, then Packer will create a new AMI which replaces your last AMI. Terraform will then terminate your currently running EC2 instance and put a newer version in its place, with the same VPC, subnet and security group settings as the previous instance.

This process of replacing a server instead of updating it is known as ephemeral infrastructure. When a server is removed, you will lose any data from that server. This may seem scary and dangerous, but if we are backing up our data and abstracting our databases to services such as AWS Relational Database Service (RDS) and our storage to services like S3, we can be sure our data will remain as servers change over time. We will cover backups, testing and restoring data in a future post.

Once you have run terraform destroy as described above, there are a few remaining items to clean up that Terraform does not manage.

To delete the AWS IAM user created for Ansible and Terraform, use the following command:

aws iam delete-user --user-name example

To identify the AMI created by Packer, get the AMI ID value using the following command:

aws ec2 describe-images --filters "Name=tag:Name,Values=webserver" --profile=example --region=eu-west-1 --query 'Images[*].{ID:ImageId}'

Then, to delete the AMI, use the following command:

aws ec2 deregister-image --image-id ami-<value>

The source code for Ansible, Packer and Terraform and the steps mentioned above are all available here: https://github.com/gordonmurray/learn_sysops/tree/master/chapter_5/packer_ansible_terraform

Using Skeema to create or alter database tables

Until now we have looked at code related or infrastructure related changes. Database changes are also a common aspect of software development.

In this section we will use Skeema to create and alter some tables in a MariaDB database.

Skeema is a very useful tool that works with MySQL or MariaDB databases to help you compare your development schema files to a production database. Skeema will generate any ALTER statements that are required and apply those changes to your production database.

Skeema is a free and open source tool developed using Go and is available on GitHub at https://github.com/skeema/skeema

If you have very large tables in your production database, performing alters to those tables can take a lot of time to complete fully and may cause problems if users are actively using your software while you change the database.

Skeema has an option to use an external service to alter your tables. In the example that follows we will get up and running with Skeema, and in a second example we will update Skeema to use Percona's pt-online-schema-change to simulate altering a larger table.

The Percona tool can perform the necessary work by copying the target table structure to a new temporary table, then copying any data from the existing table to the new temporary table. It can then rename the tables so that the new temporary table becomes the main table, and optionally drop or leave the old table in place.

Skeema won’t let you make dangerous changes such as dropping a column by default. Sometimes, however, you might need to make this kind of change, so you can tell Skeema to continue by using the --allow-unsafe parameter when performing a push to a production database.
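
As a sketch of what that looks like once Skeema is set up later in this section:

# explicitly allow a destructive change, such as dropping a column
./skeema push -ppassword --allow-unsafe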

The following steps assume you have access to a MySQL or MariaDB database to try Skeema. If you don’t already have a database installed, you can follow the installation instructions on the MariaDB website: https://mariadb.com/kb/en/getting-installing-and-upgrading-mariadb/

If you are on a Linux based machine, you can install MariaDB locally by using the command:

sudo apt install mariadb-server

To show Skeema in operation, create a new database called production using the following command:

CREATE DATABASE IF NOT EXISTS production;

If you would like to create a dedicated database user to use with Skeema, create a database user using the following command:

grant all privileges on *.* to 'dbuser'@'localhost' identified by 'password';

You can change the username dbuser and the password password to whatever you wish to use.
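
You can quickly confirm that the new user can connect; this assumes your database is listening locally on 127.0.0.1:

# connect as the new user and print the current user name
mysql -u dbuser -ppassword -h 127.0.0.1 -e "SELECT CURRENT_USER();"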

In the production database, let’s create a simple users table with some data to mimic a production database.

Use the following to create a users table and insert a couple of records:

use production;

CREATE TABLE `users` (
  `id` int(11) UNSIGNED NOT NULL AUTO_INCREMENT,
  `first_name` varchar(100) NOT NULL,
  `last_name` varchar(100) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

INSERT INTO `users` (`first_name`, `last_name`) VALUES
('John', 'Murphy'),
('Fred', 'Smith');

Now that we have an example production database, with a table, some data and a database user, next we can install Skeema to start making changes to that database.

The easiest way to use Skeema is to get a compiled version suitable for your system from the Releases page on GitHub at https://github.com/skeema/skeema/releases

Once you have Skeema installed, ensure it is working by running the following command:

./skeema version

You should see an output similar to “skeema version 1.4”.

So that we can keep things organized, let’s create a folder called database to contain our work:

mkdir database && cd database

We can initialize Skeema by using the following command:

./skeema init -h 127.0.0.1 -u dbuser -ppassword -d development

In this command we told Skeema to initialize, that our database is accessible at 127.0.0.1, that we are using the database user called dbuser with a password of password, and finally to create a folder called development to represent our database environment.

If you have existing databases and tables present, Skeema will create a folder to represent each database and place .sql files into each one to represent the table schemas.

Since we created a database called production and added a users table, you should now see a folder called production and, within it, a file called users.sql. You will also notice a file called .skeema in the folder, which contains some configuration information for Skeema.

From here, we can add or alter any .sql files in the production folder. As you develop a software application you will probably change your database requirements over time. You might use a tool such as phpMyAdmin or Navicat as an interface to your database to help you make changes to your local database. You can use those tools to export your tables into the production folder, and Skeema will do the work of determining the changes to make and applying them to your production database.
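
Skeema also includes a lint command that can check and normalize the .sql files in your folders before you push them; a quick sketch:

# check the schema files for problems and normalize their formatting
./skeema lint -ppassword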

To simulate a change, let’s create a new file called production/comments.sql with the following content:

CREATE TABLE `comments` (
  `commentid` int(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `body` text NOT NULL,
  `created` DATETIME NOT NULL,
  `modified` DATETIME NOT NULL,
  PRIMARY KEY (`commentid`)
) ENGINE = INNODB DEFAULT CHARSET = utf8mb4;

We could run this SQL file directly on the database but instead we will use Skeema to compare our production folder to our database and apply any changes.

To get Skeema to show us the changes it will make, run the following command:

./skeema diff -ppassword

You should see an output similar to the following, telling you that Skeema wants to create the new comments table:

[INFO] Generating diff of 127.0.0.1:3306 development vs /home/database/production/*.sql
-- instance: 127.0.0.1:3306
USE `production`;
CREATE TABLE `comments` (
  `commentid` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `body` text NOT NULL,
  `created` datetime NOT NULL,
  `modified` datetime NOT NULL,
  PRIMARY KEY (`commentid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
2020-04-09 15:50:43 [INFO] 127.0.0.1:3306 production: diff complete

You can get Skeema to apply this change by running the following command:

./skeema push -ppassword

You should see an output similar to the following:

[INFO] Pushing changes from /home/database/production/*.sql to 127.0.0.1:3306 production
[INFO] 127.0.0.1:3306 production: push complete

If you connect to your production database, you will see that the new comments table has been created there, ready for your application to use.

If you or your coworkers have altered the production database directly, you can tell Skeema to re-read in the details from the production database by running the following command:

./skeema pull -ppassword

If you created or changed any tables, the files in production/*.sql will be created or updated to match.

Using Skeema with Percona’s pt-online-schema-change

Performing alters to large database tables can take a lot of time to complete fully and may cause problems if users are actively using your software while you change the database.

The Percona pt-online-schema-change tool can perform the necessary work by copying the target table structure to a new temporary table, then copying any data from the existing table to the new temporary table. It can then rename the tables so that the new temporary table becomes the main table, and optionally drop or leave the old table in place. You can read more about pt-online-schema-change here: https://www.percona.com/doc/percona-toolkit/3.0/pt-online-schema-change.html. Skeema can be updated to use pt-online-schema-change very easily.

The first step is to install the pt-online-schema-change tool locally. The schema change tool is part of the Percona Toolkit and can be installed on a Linux machine using the command:

sudo apt install percona-toolkit

Other Linux based installation details can be found at https://www.percona.com/doc/percona-toolkit/LATEST/installation.html

To ensure pt-online-schema-change is installed correctly, use the following command:

pt-online-schema-change --help

You should see a large output of usage parameters that the tool supports.

To update Skeema to use pt-online-schema-change, open the file at /production/.skeema and add the following:

alter-wrapper=/usr/bin/pt-online-schema-change --execute --alter {CLAUSES} D={SCHEMA},t={TABLE},h={HOST},P={PORT},u={USER},p={PASSWORDX}
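
Optionally, Skeema also supports an alter-wrapper-min-size setting so that small tables are altered directly and only larger tables go through the external tool. The 1g threshold below is only an example value:

# only use the alter-wrapper for tables of at least this size on disk
alter-wrapper-min-size=1g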

If you run the Skeema diff command again after changing one of your .sql files, for example adding a deleteddate column to comments.sql, you will notice a slightly longer output that includes pt-online-schema-change:

./skeema diff -ppassword

You will see an output similar to the following:

[INFO] Generating diff of 127.0.0.1:3306 development vs /home/database/production/*.sql
-- instance: 127.0.0.1:3306
USE `development`;
\! /usr/bin/pt-online-schema-change --execute --alter 'ADD COLUMN `deleteddate` datetime NOT NULL' D=development,t=comments,h=127.0.0.1,P=3306,u=dbuser,p=XXXXX
[INFO] 127.0.0.1:3306 development: diff complete

This output shows that Skeema is ready to use pt-online-schema-change and you can continue to apply the change using the same command as above:

./skeema push -ppassword

Since this is an example and not a true production environment, the change will apply quickly. If you used this process to change a very large and busy production database, it could take a long time to alter a table.

While the change would take time, your application would remain online and the change would not cause issues for your users, as the pt-online-schema-change tool copies data from the old table to the new table behind the scenes.

Summary

In this post, we covered several build and deployment tools. We used approaches such as commands in our terminal to make changes as well as using third party services such as Travis to make repeatable build and deployment steps for us.

We looked at Packer, Ansible and Terraform working together to create a server, to copy and configure files and then replace an existing server with a new one. We also looked at altering database tables, a job that often comes hand in hand with developing web applications.

In our next post, we will look at backing up our databases which is a great process to have in place before altering large databases in production use.
