Setting up a Salt cluster on DigitalOcean using Terraform

I use Ansible to provision or automate infrastructure tasks whenever I can. I wanted to try out Salt, from SaltStack, to see how it compared. I had read about its ability to push commands out to multiple servers very quickly and I wanted to try it for myself.

To gain some practical experience using Salt I used Packer and Terraform to create a Salt Master and several Salt Minions on DigitalOcean.com. This post is a guide on how to create a small Salt cluster of your own to begin learning Salt. The end result will be 1 Salt Master and 3 Salt Minions, running on the smallest DigitalOcean droplets, so you can safely experiment without a large cost.

Packer and Terraform are both great tools from HashiCorp. Packer allows you to describe a server in code, customize it and build it as an Image that you can store on cloud providers such as DigitalOcean or AWS for later use. Terraform then lets you describe and create the infrastructure that uses those Images.

There are 2 overall steps to get the Salt cluster up and running. First I use Packer to create 2 base Images, a Salt Master and a Salt Minion, and then I use Terraform to create the Droplets on DigitalOcean based on those Images.

Packer does the work of provisioning the server Images ahead of time, so once the servers are running, you won’t need to SSH into them to configure them unless there are problems.

To follow this guide, you will need:
* Terraform installed: https://www.terraform.io/intro/getting-started/install.html
* Packer installed: https://www.packer.io/intro/getting-started/install.html
* A DigitalOcean account and an API key: https://www.digitalocean.com/?refcode=2c62404bb57
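
You can confirm the tools are available on your PATH before continuing:

packer version
terraform version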

Once you have installed Terraform and Packer, the first step is to create the Images for the Salt Master and a Salt Minion using Packer.

All of the code used in this post is available on Github: https://github.com/gordonmurray/terraform_salt_digitalocean

Create a file called packer/variables.json with the following code, replacing the ‘xxx’ string with your DigitalOcean API key. This will allow Packer to store the Images in your DigitalOcean account.

{
"do_token": "xxx"
}

Next create a file called packer/salt_master_do_image.json with the following content:

{
  "variables": {
    "do_token": ""
  },
  "builders": [
    {
      "droplet_name": "salt",
      "snapshot_name": "salt",
      "type": "digitalocean",
      "ssh_username": "root",
      "api_token": "{{ user `do_token` }}",
      "image": "ubuntu-18-04-x64",
      "region": "lon1",
      "size": "512mb"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "scripts": [
        "scripts/install_salt_master.sh"
      ]
    }
  ]
}

The ‘builders’ section allows you to make choices such as the Image name, region, server size and base image. In this example I am using Ubuntu 18.04 and a small 512mb Droplet.

The ‘provisioners’ section allows you to call additional scripts to install and run any commands to customize the Image further. In this example I am using a simple Bash script to install the Salt master and Salt minion. You could also use Ansible here if you wanted to.

For the Bash script to install necessary items, create a new file called packer/scripts/install_salt_master.sh with the following lines:

#!/usr/bin/env bash
set -xe
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install salt-master -y
sudo service salt-master status

These steps perform an apt update and upgrade on the server, install salt-master and check the status of the service.

Next create a file called packer/salt_minion_do_image.json with the following content:

{
  "variables": {
    "do_token": ""
  },
  "builders": [
    {
      "droplet_name": "salt-minion",
      "snapshot_name": "salt-minion",
      "type": "digitalocean",
      "ssh_username": "root",
      "api_token": "{{ user `do_token` }}",
      "image": "ubuntu-18-04-x64",
      "region": "lon1",
      "size": "512mb"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "scripts": [
        "scripts/install_salt_minion.sh"
      ]
    }
  ]
}

Create a new file called packer/scripts/install_salt_minion.sh with the following lines:

#!/usr/bin/env bash
set -xe
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install salt-minion -y
sudo service salt-minion status

You now have 2 Packer files which you can use to build the Salt Master and Minion Images and 2 Bash scripts to install the Salt Master and Salt Minion files.

You can validate the Packer files, from within the packer directory, using:

packer validate -var-file=variables.json salt_master_do_image.json
packer validate -var-file=variables.json salt_minion_do_image.json

You can build the Images using:

packer build -var-file=variables.json salt_master_do_image.json
packer build -var-file=variables.json salt_minion_do_image.json

Once both builds have finished, in the Images section of your DigitalOcean account, you should see two new Images, ready to be used.
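
You can also check from the command line by querying the DigitalOcean API directly; a quick sketch using curl, assuming DO_TOKEN holds your API key:

curl -s -H "Authorization: Bearer $DO_TOKEN" "https://api.digitalocean.com/v2/images?private=true"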

Those Images won’t cost you anything to store. You’ll be able to use them any time in the future as templates for any new Droplets you create. It’s also possible to change their owner, so you can send Images to other DigitalOcean accounts.

Now that the Images are ready, we’ll use Terraform to create 4 new Droplets, 1 Salt Master and 3 Salt Minions.

Just like Packer, we need to give Terraform access to your DigitalOcean account. Create a new file called variables.tfvars with the following content, changing the ‘xxx’ string to your DigitalOcean API key:

do_token = "xxx"

Next create main.tf:

variable "do_token" {}

# Configure the DigitalOcean Provider
provider "digitalocean" {
    token = "${var.do_token}"
}

This will tell Terraform to get the DigitalOcean Provider and use your API key.

Next create ssh_key.tf. This will tell Terraform to add an SSH key to your DigitalOcean account, which can be used to SSH into the Droplets later if needed.

module "my_ssh_key" {
  source     = "./modules/digitalocean/ssh_key"
  name       = "my_ssh_key"
  public_key = "${file("/home/ubuntu/.ssh/id_rsa.pub")}"
}

Update the path in ‘public_key’ to point to your own public key file.
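
If you don’t already have a key pair, you can generate one first; a quick sketch (the file path and comment are up to you):

ssh-keygen -t rsa -b 4096 -C "salt-cluster" -f ~/.ssh/id_rsa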

Next create droplet.tf. This is where Terraform uses the Images we created earlier to build the new Droplets.

# get the pre made salt master image
data "digitalocean_image" "salt" {
  name = "salt"
}

# get the pre made salt minion image
data "digitalocean_image" "salt-minion" {
  name = "salt-minion"
}

# generate a short random string
resource "random_string" "first" {
  length  = 8
  special = false
  upper   = false
}

# generate a short random string
resource "random_string" "second" {
  length  = 8
  special = false
  upper   = false
}

# generate a short random string
resource "random_string" "third" {
  length  = 8
  special = false
  upper   = false
}

# create salt master
module "salt" {
  source             = "./modules/digitalocean/droplet"
  image              = "${data.digitalocean_image.salt.image}"
  name               = "salt"
  region             = "lon1"
  size               = "512mb"
  backups            = "false"
  monitoring         = "true"
  ssh_keys           = ["${module.my_ssh_key.ssh_fingerprint}"]
  private_networking = "true"

  content     = "#interface: 0.0.0.0"
  destination = "/etc/salt/master"

  remote_exec_command = "salt-key -A -y"
}

# create salt minion with a unique hostname
module "salt-minion-1" {
  source             = "./modules/digitalocean/droplet"
  image              = "${data.digitalocean_image.salt-minion.image}"
  name               = "salt-minion-${random_string.first.result}"
  region             = "lon1"
  size               = "512mb"
  backups            = "false"
  monitoring         = "true"
  ssh_keys           = ["${module.my_ssh_key.ssh_fingerprint}"]
  private_networking = "true"

  # add the master IP address and the minions host name to a salt config file
  content     = "master: ${module.salt.droplet_ipv4_private}\nid: salt-minion-${random_string.first.result}"
  destination = "/etc/salt/minion"

  # restart salt-minion on the host to take in the config file change
  remote_exec_command = "sudo service salt-minion restart"
}

# create salt minion with a unique hostname
module "salt-minion-2" {
  source             = "./modules/digitalocean/droplet"
  image              = "${data.digitalocean_image.salt-minion.image}"
  name               = "salt-minion-${random_string.second.result}"
  region             = "lon1"
  size               = "512mb"
  backups            = "false"
  monitoring         = "true"
  ssh_keys           = ["${module.my_ssh_key.ssh_fingerprint}"]
  private_networking = "true"

  # add the master IP address and the minions host name to a salt config file
  content     = "master: ${module.salt.droplet_ipv4_private}\nid: salt-minion-${random_string.second.result}"
  destination = "/etc/salt/minion"

  # restart salt-minion on the host to take in the config file change
  remote_exec_command = "sudo service salt-minion restart"
}

# create salt minion with a unique hostname
module "salt-minion-3" {
  source             = "./modules/digitalocean/droplet"
  image              = "${data.digitalocean_image.salt-minion.image}"
  name               = "salt-minion-${random_string.third.result}"
  region             = "lon1"
  size               = "512mb"
  backups            = "false"
  monitoring         = "true"
  ssh_keys           = ["${module.my_ssh_key.ssh_fingerprint}"]
  private_networking = "true"

  # add the master IP address and the minions host name to a salt config file
  content     = "master: ${module.salt.droplet_ipv4_private}\nid: salt-minion-${random_string.third.result}"
  destination = "/etc/salt/minion"

  # restart salt-minion on the host to take in the config file change
  remote_exec_command = "sudo service salt-minion restart"
}

This file may look like a lot is going on at first, though the sections are straightforward. The ‘data’ blocks at the beginning of the file load the pre-made Images we built earlier so we can reference them later in the file.

The ‘resource’ blocks create random strings so that we can give the Salt Minions unique host names; otherwise they would all share the same host name when we create them.

The ‘module’ blocks are where the real work happens. Each module block is a Droplet that Terraform will create: 1 for the Salt Master and 3 for Salt Minions. Each one describes the Droplet to be made, including the Image, the host name, the region, the public key for access and the size of Droplet to use.

Two lines worth highlighting are:

 # add the master IP address and the minions host name to a salt config file
 content = "master: ${module.salt.droplet_ipv4_private}\nid: salt-minion-${random_string.first.result}"
 destination = "/etc/salt/minion"

Those lines apply to the Minions only and are used to create a file called /etc/salt/minion within each Minion. The file contains both the private IP address of the Salt Master Droplet, so the Minions know what to connect to, and the unique host name of the Minion itself; otherwise each Minion would report to the Master under the host name ‘salt-minion’.

All 4 modules also contain a ‘source’ field and each one points to a single module in the modules folder, modules/digitalocean/droplet. Creating Terraform modules allows us to build re-usable blocks of code we can use in other Terraform projects. Create modules/digitalocean/droplet/main.tf with the following content:

resource "digitalocean_droplet" "default" {
  image              = "${var.image}"
  name               = "${var.name}"
  region             = "${var.region}"
  size               = "${var.size}"
  backups            = "${var.backups}"
  monitoring         = "${var.monitoring}"
  ssh_keys           = ["${var.ssh_keys}"]
  private_networking = "${var.private_networking}"

  provisioner "file" {
    content     = "${var.content}"
    destination = "${var.destination}"
  }

  provisioner "remote-exec" {
    inline = [
      "${var.remote_exec_command}",
    ]
  }
}

# Send outputs from this resource back out
output "droplet_ipv4" {
  value = "${digitalocean_droplet.default.ipv4_address}"
}

# Send outputs from this resource back out
output "droplet_ipv4_private" {
  value = "${digitalocean_droplet.default.ipv4_address_private}"
}

We need to create 1 more file, modules/digitalocean/ssh_key/main.tf, a module for the SSH key we used earlier:

resource "digitalocean_ssh_key" "default" {
  name       = "${var.name}"
  public_key = "${var.public_key}"
}

# Send outputs from this resource back out
output "ssh_fingerprint" {
  value = "${digitalocean_ssh_key.default.fingerprint}"
}
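
Both modules reference input variables (var.image, var.name and so on), so each module directory also needs those variables declared. The repository linked above should already include them; if you are building the files by hand, a minimal sketch would be a variables.tf file alongside each main.tf (the file names and empty declarations are my assumption):

# modules/digitalocean/droplet/variables.tf
variable "image" {}
variable "name" {}
variable "region" {}
variable "size" {}
variable "backups" {}
variable "monitoring" {}
variable "ssh_keys" {}
variable "private_networking" {}
variable "content" {}
variable "destination" {}
variable "remote_exec_command" {}

# modules/digitalocean/ssh_key/variables.tf
variable "name" {}
variable "public_key" {}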

Our next step is to initialise Terraform and run its ‘plan’ command to see what Terraform will create. Because the variables file is named variables.tfvars rather than terraform.tfvars, Terraform won’t load it automatically, so we pass it in explicitly:

terraform init
terraform plan -var-file=variables.tfvars

The plan output will list the resources to create: the SSH public key, the Salt Master, the 3 Salt Minions and the random strings used for the Minion host names.

To go ahead and create them, run:

terraform apply -var-file=variables.tfvars

This will show you the same output as the ‘plan’ command again, along with a prompt to confirm before it goes ahead and creates the Droplets. Type ‘yes’ to continue, or use the following to skip the confirmation and build the Droplets straight away:

terraform apply -var-file=variables.tfvars -auto-approve

Within a few minutes, you will have a Salt Master and 3 Salt Minions up and running, ready for you to use.

To determine the IP addresses of the new Droplets, you can look at your DigitalOcean Droplets page, or you can type:

terraform show

This will show you the IP addresses and other information about the Droplets Terraform has created.
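
If you would like Terraform itself to print the addresses, you could also add outputs to the root module and read them with ‘terraform output’; a small sketch (the file name outputs.tf is an assumption):

# outputs.tf
output "salt_master_public_ip" {
  value = "${module.salt.droplet_ipv4}"
}

output "salt_master_private_ip" {
  value = "${module.salt.droplet_ipv4_private}"
}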

To log in to the Salt Master, you can use:

ssh root@{ip address of salt master}

As you are using a public key, you will be able to log in directly.

You can issue the following command to see the Salt Minions that are trying to connect to the Master:

salt-key -L

You can accept all Minions using:

salt-key -A -y

Your Salt Master can now issue commands to each of the Minions, for example:

salt '*' cmd.run 'hostname'

You will get back a response from each Salt Minion with their unique host name.
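
A few other commands worth trying from the Master to confirm everything is connected; these target all Minions with '*', and the package name is just an example:

salt '*' test.ping        # check that every Minion responds
salt '*' grains.items     # list each Minion's grains (OS, IP addresses and so on)
salt '*' pkg.install htop # install a package on every Minion at once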

You now have a small, functioning and affordable Salt cluster to continue to use to learn about Salt, grains, pillars and states.

All of the code used in this post is available on Github: https://github.com/gordonmurray/terraform_salt_digitalocean

My notes from “Practical Monitoring” by Mike Julian

I finished this book earlier today, I enjoyed it.

I wanted to write up some notes I took along the way, not as a book review or anything, just to try and help me to remember some of the lessons learned.

  • The message that monitoring is not just for sysadmin/ops engineers is mentioned a couple of times in the book.
  • Focus on monitoring what is working, that is, what makes the app work, instead of broader metrics like CPU / Memory
  • Focus on the overall monitoring mission, and not just the specific tool(s) in use at the moment
  • Components of a monitoring service are: Data collection, Data storage, Visualization, Analytics and reporting, Alerting.
  • Use a SaaS tool for monitoring, it costs more than you think to develop in house. Unless you’re Google or Netflix don’t do it.
  • For alerting, automate the solution and remove the alert if possible. If human action is needed, use run books to list out the options and steps to resolve.
  • Incident response management guidelines
  • Front end monitoring is often overlooked, despite having an impact on revenue; page load time can creep up over time and impacts users’ happiness.
    • webpagetest.org is worth looking at in this area. For example it’s possible to measure the front end performance impact of every pull request. (Look up webpagetest private instances.)
    • For APM, StatsD may be worth a look. Node based.
  • Monitoring deployments is often overlooked but worth doing to help correlate deployments against increased error rates in an API for example.
  • Look into distributed tracing. Ideal for micro service architectures.
  • Good info on the use of /proc/meminfo on page 94, related to server monitoring and how to read its output correctly, as well as grepping syslog for the OOM killer, meaning the system is looking to free up some memory. (See the example commands after this list.)
  • iostat is good for disk stats, especially to see transfers per second (tps), also called IOPS.
  • Stop using SNMP. Insecure. Hard to extend. Opt for push based tools such as collectd or telegraf.
  • For databases, keep an eye on slow queries and IOPS.
  • For queues, such as RabbitMQ, start by monitoring queue size and messages per second
  • For Caches, such as Redis aim for 100% hits. Not always possible to do, but worthwhile aiming for.
  • Auditd is useful for monitoring user actions on a server. It can be told to monitor specific files too. Ideal for watching config files for changes. Use audisp-remote to send logs.
  • Security monitoring
    • Look in to Cloud Passage and Threat Stack
    • Use rkhunter and a cron entry to keep it updated daily. Set up alerts for warnings in its logs.
    • Look into Network Intrusion Detection (NIDS) and network taps to analyse traffic for anything that has gotten past the firewall
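
A few of the server-level checks above translate directly into one-liners; a rough sketch, assuming an Ubuntu/Debian host with the sysstat package installed:

grep -E "MemTotal|MemAvailable" /proc/meminfo   # quick look at overall memory
grep -i "out of memory" /var/log/syslog         # has the OOM killer been triggered?
iostat 5                                        # per-device tps (IOPS), refreshed every 5 seconds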

Storing files to multiple AWS S3 buckets

Option 1, Single bucket replication (files added to Bucket A are automatically added to Bucket B)

If you aim to store files to a second S3 bucket automatically upon uploading, the built in “Cross Region Replication” is the method to use.

It’s very easy to set up with just a few clicks in the AWS console.

  • Select the properties of the S3 bucket you want to copy from and in “Versioning”, click on “Enable Versioning”.
  • In “Cross Region Replication”, click on “Enable Cross-Region Replication”
  • Source: You can tell S3 to use the entire bucket as a source, or only files with a prefix.
  • Destination Region: You need to pick another region to copy to; it doesn’t work within the same region
  • Destination Bucket: If you have created a bucket in that region already you can select it, or create a bucket on the fly.
  • Destination storage class: You can choose how files are stored in the destination bucket.
  • Create/select IAM role: This will allow you to use an existing IAM role or create a new role with the appropriate permissions to copy files to the destination bucket.

Once you press Save, Cross Region Replication is now set up. Any files you upload to the source bucket from now on will automatically be added to the destination bucket a moment later.

It doesn’t copy across any pre-existing files from the source, only new files are acted upon.
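
If you also need the objects that existed before replication was enabled, a one-off copy with the AWS CLI can cover the gap; a sketch, with placeholder bucket names:

aws s3 sync s3://my-source-bucket s3://my-destination-bucket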

Also, Cross Region Replication can’t (currently) be chained to copy from a source to more than one destination, however there’s a way to do that using Lambda.

Option 2, Multiple bucket replication (files added to Bucket A are automatically added to Bucket B,C,D )

In a nutshell, AWS Lambda can be used to trigger some code to run based on events, such as a file landing in an S3 bucket.

The following steps help create a Lambda function to monitor a source bucket and then copy any files that are created to 1 or more target buckets.

Pre-Lambda steps

  1. Create 1 or more Buckets that you want to use as destinations
  2. Clone this node.js repository locally (https://github.com/eleven41/aws-lambda-copy-s3-objects) and run the following to install dependencies (the full set of commands is shown after this list)
    1. npm install async
    2. npm install aws-sdk
    3. compress all the files into a Zip file
  3. In AWS’s ‘Identity and Access management’, click on ‘Policies’ and click ‘Create Policy’, copy the JSON from the ‘IAM Role’ section of the above repository : https://github.com/eleven41/aws-lambda-copy-s3-objects
  4. In IAM Roles, create a new Role. In ‘AWS Service Roles’, click on Lambda and select the Policy you created in the previous step.
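
The preparation of the function code boils down to a few commands; this assumes git, Node.js/npm and zip are installed, and the zip file name is arbitrary:

git clone https://github.com/eleven41/aws-lambda-copy-s3-objects
cd aws-lambda-copy-s3-objects
npm install async aws-sdk
zip -r aws-lambda-copy-s3-objects.zip .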

Lambda steps

  • Log in to the AWS Console and select Lambda, click on “Create a Lambda function”.
  • Skip the pre made blueprints by pressing on Skip at the bottom of the page.
  • Name: Give your lambda function a name
  • Description: Give it a brief description
  • Runtime: Choose ‘Node.js’
  • Code entry type: Choose ‘upload a .zip file’ and upload the pre-made Zip file from earlier, no changes are needed to its code
  • Handler: Select ‘index.handler’
  • Role: Select the IAM role created earlier from the Pre-Lambda steps.

You can leave the remaining Advanced steps at their default values and Create the Lambda function.

  • Once the function is created, it will open its Properties. Click on the ‘Event Sources’ tab.
    • Event Source Type: S3
    • Bucket: Choose your source bucket here
    • Event Type: Object Created
    • Prefix: Leave blank
    • Suffix: Leave blank
    • Leave ‘Enable Now’ selected and press ‘Submit’
  • Go back to your original source S3 bucket, create a new Tag called ‘TargetBucket’
  • In the ‘TargetBucket’ value add a list of target buckets, separated by a space, that you want files copied to. If a bucket is in a different region you’ll need to specify the region, for example:
    • destination1 destination2@us-west-2 destination3@us-east-1

You can use Lambda’s built in Test section to test the function works well. Don’t forget to change the Test script to specify your source bucket.

If there are errors, there will be a link to CloudWatch error logs to diagnose the problem.

Any files added to the source bucket will now automatically be added to the one or more target buckets specified in the ‘TargetBucket’ value of the original bucket.

Saving ZohoCRM reports to Google Drive – a roundabout journey

I wanted to write about an integration I created for a client recently, as a form of documentation to myself on how it works, should I ever need to repair or update it.

I also thought it was interesting to build since a couple of different APIs and services are involved along the way. The data takes quite a journey to get from ZohoCRM to Google Drive.

The aim was to get data from scheduled ZohoCRM Reports, sent via email with attached Excel documents, into a spreadsheet on Google Docs, so that overall summary information could be viewed by the relevant stakeholders.

All of the following was necessary because:

  1. ZohoCRM doesn’t provide a way to send Reports directly to Google Drive, manually or automatically
  2. The kind of (summary) information provided in the Reports isn’t accessible via ZohoCRM’s API, at least not directly.

If ZohoCRM provided more data via their ZohoCRM API, then it would have been possible to have 1 small script running in the background to periodically poll ZohoCRM for new data and add it to a Google Spreadsheet without all of the “moving parts” that were needed in the end.

A summary of how it all works:

  1. Scheduled emails are sent from ZohoCRM to one of the members of the team.
  2. A rule was created on the mail server to automatically forward those emails to a virtual email inbox I created using http://www.mailgun.com/, an email like: reports@domain.com.
  3. Mailgun receives the email and POSTs the data to a URL I provided it
  4. This URL is a PHP script, running on an Ubuntu Digital Ocean server.
  5. This PHP script receives the data and stores it in a JSON file in a local folder. The data is parsed to get information on any attachments and those attachments are stored in a local folder also.
  6. A separate PHP script, scheduled to run periodically, scans the folder mentioned previously for any Excel files (xls or xlsx) and performs the following:
  7. The excel files are copied to another folder for processing and removed from their original location to avoid re-processing the same excel files during the next iteration.
  8. The excel files are converted to CSV format using LibreOffice --headless (https://www.libreoffice.org/), meaning it runs from the command line without a user interface; the conversion command is sketched after this list. (Strangely, I was able to push Excel files directly to Google Drive on a Windows based server, but not on a Linux based server, which I couldn’t solve. It wasn’t related to permissions or mime type encoding like I thought it would be. I opted for converting to CSV, which worked well.)
  9. The resulting CSV files are pushed to Google Drive and converted to Google Spreadsheet format (so that they don’t count against the owner’s allowed storage space there) and given a title based on the name of the excel file and the date the file was created. For example ‘my_report.xls’ would be saved as ‘My Report 20/01/2016’.
    The excel files are stored locally in case they are needed in future for debugging purposes.
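
The LibreOffice conversion in step 8 is essentially a single command; a sketch with placeholder paths:

libreoffice --headless --convert-to csv --outdir /path/to/csv /path/to/incoming/report.xlsx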

I thought all of the above was an interesting process to develop as I imagined the journey an email from ZohoCRM took to get to Google Drive as intended.

  • An email from ZohoCRM originated from one of their servers, presumably somewhere in the US.
  • The email landed in one of the team members’ mailboxes on a Microsoft based mail server, presumably somewhere in Europe.
  • The mail server forwarded the email to MailGun, who are based in San Francisco, so their server(s) are probably located over in the US somewhere.
  • Mailgun posted the data in the received email to a URL I provided, which is an Ubuntu server created using Digital Ocean, located in London.
  • The PHP code on the Digital Ocean server in London used Google’s Drive API to push the data to Google Drive, again probably hosted somewhere in the US.

Despite hopping from country to country a couple of times, an email sent from ZohoCRM ended up in a Google Spreadsheet just a few seconds later, no worse for wear.

How to back up your mysql database from a Laravel 5.1 application to Amazon S3

The following steps are a summary of the instructions from https://github.com/backup-manager/laravel, specific to a Laravel 5.1 application that needs to back up a MySQL database to an AWS S3 bucket.

The backup-manager library uses mysqldump to perform the database dump and works well on larger databases also.

First, create your bucket on AWS and create the IAM user with suitable S3 bucket permissions.

In the project directory, install backup-manager using the following:

composer require backup-manager/laravel

Add the AWS S3 bucket provider using:

composer require league/flysystem-aws-s3-v3

Edit /config/app.php and add the following in to the list of Providers:

BackupManager\Laravel\Laravel5ServiceProvider::class,

Back in the command line, run the following in the project folder:

php artisan vendor:publish --provider="BackupManager\Laravel\Laravel5ServiceProvider"

Update the following config file with the AWS S3 bucket credentials:

/config/backup-manager.php

's3' => [
    'type' => 'AwsS3',
    'key' => '{AWSKEY}',
    'secret' => '{AWSSECRET}',
    'region' => 'eu-west-1',
    'bucket' => '{AWSBUCKETNAME}',
    'root' => '/database_backups/' . date("Y") . '/{APPLICATIONNAME}/',
],

The folder(s) in the ‘root’ section will be created automatically if needed.

Finally, from the command line or a cron schedule, use the following to initiate a backup:

php artisan db:backup --database=mysql --destination=s3 --destinationPath=`date +%d-%m-%Y` --compression=gzip
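
To run the backup on a schedule, a crontab entry could look like the sketch below; the project path is a placeholder, and the % characters must be escaped as \% inside crontab:

# nightly backup at 02:00
0 2 * * * cd /var/www/my-laravel-app && php artisan db:backup --database=mysql --destination=s3 --destinationPath=`date +\%d-\%m-\%Y` --compression=gzip >> /dev/null 2>&1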

We now place HTTPS on all client web applications using Lets Encrypt

A normal step for us when developing and deploying web applications or APIs for our clients is to add HTTPS certificates to the finished application when it is deployed Live.

Putting certificates in place has a cost in both time and money, as they typically need to be purchased from providers such as Comodo or Verisign and put in place by a developer.

Putting secure certificates in place is often frustrating for a developer: either an email address specific to the domain needs to be set up and a notification acknowledged by the domain owner, or in some cases DNS records can be used to verify ownership. Both of those can take time to resolve.

From today we will be using Lets Encrypt to place HTTPS-only access on all client sites, including development work on staging servers too.

Any new client projects will get certificates from the beginning of the project and for existing client sites, Lets Encrypt certificates will be put in place instead of renewal of existing certificate providers.

What is Lets Encrypt?

Lets Encrypt is a new certificate authority which entered public beta on December 3rd 2015, with major sponsors such as Mozilla, Cisco and Facebook.

Lets Encrypt is free and since there is no cost for us to purchase the certificates, then there will be no cost passed on to our clients.

For more information on Lets Encrypt, check out their site at https://letsencrypt.org/

If you are a developer and want to know how to install certificates, check out their “How it works” page https://letsencrypt.org/howitworks/ which shows 3 easy steps on how to get up and running.

Some good reasons to have HTTPS only access to your website or application include:

  1. Security – without HTTPS, it’s possible for cyber criminals to intercept data in transit to and from your site.
  2. Google Ranking – Google may place your site higher in their results if you have HTTPS access in place.
  3. Performance – the old claim that HTTPS access makes a site slower is no longer true; see The SSL Performance Myth.

Laravel Forge for creating and managing your PHP servers

I’ve tried a few different services to manage servers in the past, such as Command.io, and I’ve settled on Laravel’s Forge for its ease of use, low cost and quick responses to support tickets when building or maintaining servers for clients or side projects.

Forge can be used to create a server designed for PHP development on your choice of provider, whether it’s DigitalOcean, AWS or a custom provider.

It will install nginx, PHP 7 (or 5.6), MySQL, Postgres and Redis, possibly faster than using Ansible and definitely faster than doing it yourself by SSH’ing in.

It’s not specific to Laravel based projects; it can be used to create servers to host any kind of PHP application. I’ve used it to host Laravel, Lumen, Slim, native PHP and even WordPress sites. This blog is hosted on DigitalOcean via Forge.

You pay a flat fee to Forge for its control panel, regardless of the number of servers and the costs of any servers you create are invoiced as normal from your provider, such as AWS.

For me, the benefits of Forge are:

  1. Very quick to create a new PHP-ready server on Digital Ocean or AWS
  2. Adding sites will create the nginx virtual hosts automatically, including automatic www to non-www redirects.
  3. Forge also now supports Lets Encrypt so it only takes 1 or 2 clicks to add SSL to your site.
  4. You can add ‘apps’ to your sites which connect Github to your server, so when you commit to your project it can auto deploy to the server.

There are plenty of other features, and if you like video tutorials, a good place to see Forge in action is Laracasts.

One small complaint I have about Forge is that it doesn’t support roll-backs of deployments to a server, but I think that’s saved for Laravel Envoyer, which I haven’t tried out yet.

Also, when adding a new site such as “domain.com”, it will also create the www version of the nginx virtual host for you, “www.domain.com”. The problem is that if you add a site of “www.domain.com”, it goes ahead and creates a virtual host of “www.www.domain.com” :)

My next step is to try out PHP 7 on a Forge-built server.

Adding HTTPS by Lets Encrypt to a site in 3 easy steps

I wanted to add HTTPS to this blog to try out the new Lets Encrypt authority, with the intention of using it for other web apps if it worked out well.

I’ve been a happy user of SSLMate for a number of months, as it’s easy to implement from the command line with DNS entries rather than waiting for emails, and I didn’t think Lets Encrypt could be easier.

Lets Encrypt – Apache configuration

Lets Encrypt’s installer definitely worked out well! I ended up adding it to 4 sites in 1 sitting as it was so simple to do.

From the command line, type:

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
$ ./letsencrypt-auto

It detected the other virtual hosts on the same server and gave a menu of the sites I’d like to implement HTTPS on.

It even set up the (optional) redirects from HTTP to HTTPS.

My 3 in-development web apps and this blog are now all up and running with HTTPS in just a few minutes.

  1. https://serpp.net/
  2. https://mousejs.com/
  3. https://gitlogbook.com/
  4. https://www.gordonmurray.com/

Update: Lets Encrypt – Nginx configuration

After writing this post I needed to add SSL to an Ubuntu + Nginx configuration, which isn’t as automated as the above Apache based configuration.

If using AWS, make sure to open port 443 for HTTPS in the AWS Console before you begin.

Get Lets Encrypt:

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt

Stop Nginx for a minute:

sudo service nginx stop

Navigate to where Lets Encrypt was installed, for example /home/{username}/letsencrypt/ and type (changing www.domain.com to your own domain name):

./letsencrypt-auto certonly --standalone -d domain.com -d www.domain.com

If you haven’t opened port 443, you’ll get an error here.

Once the certs are issued, take note of their paths.

Update the server section of your website’s nginx conf to include the following (changing www.domain.com to your own domain name):

server {

listen 443 ssl;

ssl_certificate /etc/letsencrypt/live/www.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.domain.com/privkey.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

}

You might want to redirect non-https traffic to your new https version, if so add the following to the nginx config file also:

server {
listen 80;
server_name domain.com www.domain.com;
return 301 https://www.domain.com$request_uri;
}

Start nginx again:

sudo service nginx start

If you have any problems starting nginx, run the following to debug them:

nginx -t
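
Lets Encrypt certificates are only valid for 90 days, so they need to be renewed before then. With the standalone approach above, a manual renewal is essentially a repeat of the same steps; nginx has to be stopped again so the client can bind to the ports:

cd /home/{username}/letsencrypt
sudo service nginx stop
./letsencrypt-auto certonly --standalone -d domain.com -d www.domain.com
sudo service nginx start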

Setting up the basics

For each of gitlogbook.com, serpp.net and mousejs.com:

  • AWS Route53 – for managing the DNS for each project
  • Bitbucket – to store the source code of each project
  • DeployHQ – to deploy code changes for each project on to its target server(s)
  • Google Analytics – to record visitor traffic to each project
  • Google Webmaster Tools – to report on indexing, any 404s, any malware etc. on each project

Trying out Laravel Spark as a SaaS front end

Laravel Spark is an early alpha project from the creator of Laravel, Taylor Otwell.

Spark is designed to be like a pre-built web application with user authentication, subscription plans, Coupons, Teams and Roles.

These are the kinds of features found in a typical web based application, and by using Spark a developer can save a lot of time and energy by not reinventing the wheel in each new development project, focusing instead on the important stuff.

This sounds like an ideal use case for the projects I am developing as I don’t want to have to redevelop or even copy/paste these facilities for each project.

I tried it out earlier today on an Ubuntu VM, however I couldn’t get it going. I received an error which others seem to have run into too and have opened an issue about on Github.

I’m putting this on hold for now. Despite the early state of the product and this particular error, I still think it’s a useful tool, so I’ll wait a few days to see if the issue is closed on Github and try again then.