My notes from “Practical Monitoring” by Mike Julian

I finished this book earlier today and enjoyed it.

I wanted to write up some notes I took along the way; not as a book review or anything, just to help me remember some of the lessons learned.

  • The message that monitoring is not just for sysadmin/ops engineers is mentioned a couple of times in the book.
  • Focus on monitoring what is working (that is, what makes the app work) instead of broader metrics like CPU/memory
  • Focus on the overall monitoring mission, not just the specific tool(s) in use at the moment
  • The components of a monitoring service are: data collection, data storage, visualization, analytics and reporting, and alerting.
  • Use a SaaS tool for monitoring; it costs more than you think to develop in-house. Unless you’re Google or Netflix, don’t do it.
  • For alerting, automate the solution and remove the alert if possible. If human action is needed, use runbooks to list out the options and steps to resolve.
  • Incident response management guidelines
  • Front-end monitoring is often overlooked, despite having an impact on revenue; page load time can increase over time and impacts users’ happiness.
    • WebPageTest is worth looking at in this area. For example, it’s possible to measure the front-end performance impact of every pull request (look up WebPageTest private instances).
    • For APM, StatsD may be worth a look. Node.js based.
  • Monitoring deployments is often overlooked, but it is worth doing to help correlate deployments against increased error rates in an API, for example.
  • Look into distributed tracing. Ideal for microservice architectures.
  • Good info on the use of /proc/meminfo on page 94, related to server monitoring and how to read its output correctly, as well as grepping syslog for the OOM killer, which means the system is looking to free up some memory.
  • iostat is good for disk stats, especially to see transfers per second (tps), also called IOPS.
  • Stop using SNMP. Insecure. Hard to extend. Opt for push-based tools such as collectd or Telegraf.
  • For databases, keep an eye on slow queries and IOPS.
  • For queues, such as RabbitMQ, start by monitoring queue size and messages per second
  • For caches, such as Redis, aim for a 100% hit rate. Not always possible to do, but worthwhile aiming for.
  • Auditd is useful for monitoring user actions on a server. It can be told to monitor specific files too, making it ideal for watching config files for changes. Use audisp-remote to send the logs to a remote server.
  • Security monitoring
    • Look into CloudPassage and Threat Stack
    • Use rkhunter, with a cron entry to keep it updated daily. Set up alerts for warnings in its logs.
    • Look into Network Intrusion Detection (NIDS) and network taps to analyse traffic for anything that has gotten past the firewall
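
A couple of the server-monitoring notes above (/proc/meminfo, the OOM killer) can be scripted. A rough sketch, assuming an Ubuntu-style /var/log/syslog path:

```shell
# Rough "available memory" percentage from /proc/meminfo.
# Values are in kB; MemAvailable exists on kernels >= 3.14.
mem_available_pct() {
  awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2} END {printf "%d\n", a * 100 / t}' "$1"
}

# Usage on a live server:
# mem_available_pct /proc/meminfo
# And a quick check for OOM-killer activity (syslog path assumed for Ubuntu):
# grep -i "out of memory" /var/log/syslog
```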

Storing files to multiple AWS S3 buckets

Option 1, Single bucket replication (files added to Bucket A are automatically added to Bucket B)

If you aim to store files in a second S3 bucket automatically upon upload, the built-in “Cross-Region Replication” feature is the method to use.

It’s very easy to set up, with just a few clicks in the AWS console.

  • Select the properties of the S3 bucket you want to copy from and in “Versioning”, click on “Enable Versioning”.
  • In “Cross Region Replication”, click on “Enable Cross-Region Replication”
  • Source: You can tell S3 to use the entire bucket as a source, or only files with a prefix.
  • Destination Region: You need to pick another region to copy to; it doesn’t work within the same region
  • Destination Bucket: If you have created a bucket in that region already you can select it, or create a bucket on the fly.
  • Destination storage class: You can choose how files are stored in the destination bucket.
  • Create/select IAM role: This will allow you to use an existing IAM role or create a new role with the appropriate permissions to copy files to the destination bucket.

Once you press Save, Cross-Region Replication is set up. Any files you upload to the source bucket from now on will automatically be added to the destination bucket a moment later.

It doesn’t copy across any pre-existing files from the source; only new files are acted upon.

Also, Cross-Region Replication can’t (currently) be chained to copy from one source to more than one destination; however, there’s a way to do that using Lambda.
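
If you prefer the CLI to the console steps above, the same replication rule can be expressed as a JSON document and applied with the AWS CLI. A sketch; the bucket names and role ARN below are placeholders:

```shell
# Replication configuration for `aws s3api put-bucket-replication`.
# The role ARN and bucket names here are placeholders.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/my-replication-role",
  "Rules": [
    {
      "Prefix": "",
      "Status": "Enabled",
      "Destination": {
        "Bucket": "arn:aws:s3:::my-destination-bucket",
        "StorageClass": "STANDARD"
      }
    }
  ]
}
EOF

# Apply it (requires AWS credentials and versioning enabled on both buckets):
# aws s3api put-bucket-replication --bucket my-source-bucket \
#   --replication-configuration file://replication.json
```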

Option 2, Multiple bucket replication (files added to Bucket A are automatically added to Buckets B, C and D)

In a nutshell, AWS Lambda can be used to trigger some code to run based on events, such as a file landing in an S3 bucket.

The following steps help create a Lambda function to monitor a source bucket and then copy any files that are created to 1 or more target buckets.

Pre-Lambda steps

  1. Create 1 or more Buckets that you want to use as destinations
  2. Clone this node.js repository locally and run the following to install dependencies:
    1. npm install async
    2. npm install aws-sdk
    3. Compress all the files into a Zip file
  3. In AWS’s ‘Identity and Access Management’, click on ‘Policies’, then ‘Create Policy’, and copy in the JSON from the ‘IAM Role’ section of the above repository.
  4. In IAM Roles, create a new Role. In ‘AWS Service Roles’, click on Lambda and select the Policy you created in the previous step.

Lambda steps

  • Log in to the AWS Console and select Lambda, click on “Create a Lambda function”.
  • Skip the pre-made blueprints by pressing Skip at the bottom of the page.
  • Name: Give your lambda function a name
  • Description: Give it a brief description
  • Runtime: Choose ‘Node.js’
  • Code entry type: Choose ‘Upload a .zip file’ and upload the pre-made Zip file from earlier; no changes are needed to its code
  • Handler: Select ‘index.handler’
  • Role: Select the IAM role created earlier from the Pre-Lambda steps.

You can leave the remaining Advanced steps at their default values and Create the Lambda function.

  • Once the function is created, it will open its Properties. Click on the ‘Event Sources’ tab.
    • Event Source Type: S3
    • Bucket: Choose your source bucket here
    • Event Type: Object Created
    • Prefix: Leave blank
    • Suffix: Leave blank
    • Leave ‘Enable Now’ selected and press ‘Submit’
  • Go back to your original source S3 bucket and create a new tag called ‘TargetBucket’
  • In the ‘TargetBucket’ value, add a space-separated list of the target buckets you want files copied to. If a bucket is in a different region you’ll need to specify it, for example:
    • destination1 destination2@us-west-2 destination3@us-east-1
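
The tag format above can be illustrated with a tiny parser (a hypothetical helper just to show the rule; the actual copying is done by the node.js code in the repository):

```shell
# Split one "bucket" or "bucket@region" entry from the TargetBucket tag.
# Prints "<bucket> <region>", falling back to a default region ($2)
# when no @region suffix is given.
parse_target() {
  case "$1" in
    *@*) printf '%s %s\n' "${1%@*}" "${1#*@}" ;;
    *)   printf '%s %s\n' "$1" "$2" ;;
  esac
}

# Usage:
# for t in destination1 destination2@us-west-2 destination3@us-east-1; do
#   parse_target "$t" eu-west-1
# done
```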

You can use Lambda’s built-in Test section to check that the function works. Don’t forget to change the Test script to specify your source bucket.

If there are errors, there will be a link to the CloudWatch logs to diagnose the problem.

Any files added to the source bucket will now automatically be added to the one or more target buckets specified in the ‘TargetBucket’ tag of the source bucket.

Saving ZohoCRM reports to Google Drive – a roundabout journey

I wanted to write about an integration I created for a client recently, as a form of documentation to myself on how it works, should I ever need to repair or update it.

I also thought it was interesting to build since a couple of different APIs and services are involved along the way. The data takes quite a journey to get from ZohoCRM to Google Drive.

The aim was to get data from scheduled ZohoCRM Reports, sent via email with attached Excel documents, into a spreadsheet on Google Docs, so that overall summary information could be viewed by the relevant stakeholders.

All of the following was necessary because:

  1. ZohoCRM doesn’t provide a way to send Reports directly to Google Drive, manually or automatically
  2. The kind of (summary) information provided in the Reports isn’t accessible via ZohoCRM’s API, at least not directly.

If ZohoCRM provided more data via their API, it would have been possible to have 1 small script running in the background to periodically poll ZohoCRM for new data and add it to a Google Spreadsheet, without all of the “moving parts” that were needed in the end.

A summary of how it all works:

  1. Scheduled emails are sent from ZohoCRM to one of the members of the team.
  2. A rule was created on the mail server to automatically forward those emails to a virtual email inbox I created using Mailgun.
  3. Mailgun receives the email and POSTs the data to a URL I provided it
  4. This URL is a PHP script, running on an Ubuntu Digital Ocean server.
  5. This PHP script receives the data and stores it in a JSON file in a local folder. The data is parsed for information on any attachments, and those attachments are stored in a local folder as well.
  6. A separate PHP script, scheduled to run periodically, scans the folder mentioned previously for any Excel files (xls or xlsx) and performs the following:
  7. The Excel files are copied to another folder for processing and removed from their original location, to avoid re-processing the same Excel files during the next iteration.
  8. The Excel files are converted to CSV format using LibreOffice in headless mode (libreoffice --headless --convert-to csv), meaning it runs from the command line without a user interface. (Strangely, I was able to push Excel files directly to Google Drive on a Windows-based server, but not on a Linux-based server, which I couldn’t solve. It wasn’t related to permissions or mime-type encoding like I thought it would be, so I opted for converting to CSV, which worked well.)
  9. The resulting CSV files are pushed to Google Drive and converted to Google Spreadsheet format (so that they don’t count against the owner’s allowed storage space there) and given a title based on the name of the Excel file and the date the file was created. For example, ‘my_report.xls’ would be saved as ‘My Report 20/01/2016’.
    The Excel files are stored locally in case they are needed in future for debugging purposes.

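The renaming rule in step 9 could be sketched like this (shell just for illustration; the real implementation is part of the PHP script):

```shell
# Turn an Excel filename plus a date into the Google Drive title,
# e.g. my_report.xls + 20/01/2016 -> "My Report 20/01/2016".
drive_title() {
  base="${1%.*}"                                  # strip the extension
  base="$(printf '%s' "$base" | tr '_' ' ')"      # underscores -> spaces
  base="$(printf '%s' "$base" | awk '{for (i = 1; i <= NF; i++) $i = toupper(substr($i, 1, 1)) substr($i, 2)} 1')"  # Title Case each word
  printf '%s %s\n' "$base" "$2"
}

# Usage:
# drive_title my_report.xls 20/01/2016
```
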
I thought all of the above was an interesting process to develop as I imagined the journey an email from ZohoCRM took to get to Google Drive as intended.

  • An email from ZohoCRM originated from one of their servers, presumably somewhere in the US.
  • The email landed in a mailbox on one of the team members’ Microsoft-based mail servers, presumably somewhere in Europe.
  • The mail server forwarded the email to MailGun, who are based in San Francisco, so their server(s) are probably located over in the US somewhere.
  • Mailgun posted the data in the received email to a URL I provided, which is an Ubuntu server created using Digital Ocean, located in London.
  • The PHP code on the Digital Ocean server in London used Google’s Drive API to push the data to Google Drive, again probably hosted somewhere in the US.

Despite hopping from country to country a couple of times, an email sent from ZohoCRM ended up in a Google Spreadsheet just a few seconds later, none the worse for wear.

How to back up your mysql database from a Laravel 5.1 application to Amazon S3

The following steps are a summary of the backup-manager instructions, specific to a Laravel 5.1 application that needs to back up a MySQL database to an AWS S3 bucket.

The backup-manager library uses mysqldump to perform the database dump and works well on larger databases also.

First, create your bucket on AWS and create the IAM user with suitable S3 bucket permissions.

In the project directory, install backup-manager using the following:

composer require backup-manager/laravel

Add the AWS S3 bucket provider using:

composer require league/flysystem-aws-s3-v3

Edit /config/app.php and add the following to the list of providers:

BackupManager\Laravel\Laravel5ServiceProvider::class,

Back in the command line, run the following in the project folder:

php artisan vendor:publish --provider="BackupManager\Laravel\Laravel5ServiceProvider"

Update the following config file with the AWS S3 bucket credentials:


's3' => [
    'type' => 'AwsS3',
    'key' => '{AWSKEY}',
    'secret' => '{AWSSECRET}',
    'region' => 'eu-west-1',
    'bucket' => '{AWSBUCKETNAME}',
    'root' => '/database_backups/' . date("Y") . '/{APPLICATIONNAME}/',
],

The folder(s) in the ‘root’ section will be created automatically if needed.

Finally, from the command line or a cron schedule, use the following to initiate a backup:

php artisan db:backup --database=mysql --destination=s3 --destinationPath=`date +%d-%m-%Y` --compression=gzip
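
For a scheduled nightly run, a crontab entry along these lines could work (the application path is a placeholder, and note that % is special in crontab, so it must be escaped):

```
0 2 * * * cd /var/www/myapp && php artisan db:backup --database=mysql --destination=s3 --destinationPath=`date +\%d-\%m-\%Y` --compression=gzip
```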

We now place HTTPS on all client web applications using Let’s Encrypt

A normal step for us when developing and deploying web applications or APIs for our clients is to add HTTPS certificates to the finished application when it is deployed Live.

Putting certificates in place has a cost in both time and money as they typically need to be purchased from providers such as Comodo or Verisign and put in place by a developer.

Putting secure certificates in place is often frustrating for a developer: either an email address needs to be set up specific to the domain and a notification acknowledged by the domain owner, or in some cases DNS records can be used to verify ownership. Both of those can take time to resolve.

From today we will be using Let’s Encrypt to place HTTPS-only access on all client sites, including development work on staging servers too.

Any new client projects will get certificates from the beginning of the project, and for existing client sites, Let’s Encrypt certificates will be put in place instead of renewing with the existing certificate providers.

What is Let’s Encrypt?

Let’s Encrypt is a new certificate authority which entered public beta on December 3rd 2015, with major sponsors such as Mozilla, Cisco and Facebook.

Let’s Encrypt is free, and since there is no cost for us to purchase the certificates, there will be no cost passed on to our clients.

For more information on Let’s Encrypt, check out their site.

If you are a developer and want to know how to install certificates, check out their “How it works” page, which shows 3 easy steps on how to get up and running.

Some good reasons to have HTTPS only access to your website or application include:

  1. Security – without HTTPS, it’s possible for cyber criminals to intercept data in transit to and from your site.
  2. Google Ranking – Google may place your site higher in their results if you have HTTPS access in place.
  3. Speed – the idea that HTTPS access makes a site slower is no longer true; see “The SSL Performance Myth”.

Laravel Forge for creating and managing your PHP servers

I’ve tried a few different services to manage servers in the past, and I’ve settled on Laravel’s Forge for its ease of use, low cost and quick responses to any support tickets, when building or maintaining servers for clients or side projects.

Forge can be used to create a server designed for PHP development on your choice of provider, whether it’s Digital Ocean, AWS or a custom provider.

It will install nginx, PHP 7 (or 5.6), MySQL, Postgres and Redis, possibly faster than using Ansible and definitely faster than doing it yourself by SSH’ing in.

It’s not specific to Laravel-based projects; it can be used to create servers to host any kind of PHP application. I’ve used it to host Laravel, Lumen, Slim, native PHP and even WordPress sites. This blog is hosted on Digital Ocean via Forge.

You pay a flat fee to Forge for its control panel, regardless of the number of servers, and the costs of any servers you create are invoiced as normal by your provider, such as AWS.

For me, the benefits of Forge are:

  1. Very quick to create a new PHP-ready server on Digital Ocean or AWS
  2. Adding sites will create the nginx virtual hosts automatically, including automatic www to non-www redirects.
  3. Forge also now supports Let’s Encrypt, so it only takes 1 or 2 clicks to add SSL to your site.
  4. You can add ‘apps’ to your sites which connect GitHub to your server, so when you commit to your project it can auto-deploy to the server.

There are plenty of other features, and if you like video tutorials, a good place to see Forge in action is Laracasts.

One small complaint I have about Forge is that it doesn’t support roll-backs of deployments to a server, but I think maybe that’s saved for Laravel Envoyer, which I haven’t tried out yet.

Also, when adding a new site, such as “example.com”, it will also create the www version of the nginx virtual host for you, “www.example.com”; the problem is that if you add a site of “www.example.com”, it goes ahead and creates a virtual host of “www.www.example.com” :)

My next step is to try out PHP 7 on a Forge-built server.

Adding HTTPS by Let’s Encrypt to a site in 3 easy steps

I wanted to add HTTPS to this blog to try out the new Let’s Encrypt authority, with the intention of using it for other web apps if it worked out well.

I’ve been a happy user of SSLMate for a number of months, as it’s easy to implement from the command line with DNS entries rather than waiting for emails, and I didn’t think Let’s Encrypt could be easier.

Let’s Encrypt – Apache configuration

Let’s Encrypt’s installer definitely worked out well! I ended up adding it to 4 sites in 1 sitting as it was so simple to do.

From the command line, type:

$ git clone
$ cd letsencrypt
$ ./letsencrypt-auto

It detected the other virtual hosts on the same server and gave a menu of the sites I’d like to implement HTTPS on.

It even set up the (optional) redirects from HTTP to HTTPS.

My 3 in-development web apps and this blog are now all up and running with HTTPS in just a few minutes.


Update: Let’s Encrypt – Nginx configuration

After writing this post I needed to add SSL to an Ubuntu + Nginx configuration, which isn’t as automated as the above Apache-based configuration.

If using AWS, make sure to open port 443 for HTTPS in the AWS Console before you begin.

Get Let’s Encrypt:

$ git clone
$ cd letsencrypt

Stop Nginx for a minute:

sudo service nginx stop

Navigate to where Let’s Encrypt was installed, for example /home/{username}/letsencrypt/, and type (changing to your own domain name):

./letsencrypt-auto certonly --standalone -d yourdomain.com -d www.yourdomain.com

If you haven’t opened port 443, you’ll get an error here.

Once the certs are issued, take note of their paths.

Update the server section of your website’s nginx conf to include the following (changing to your own domain name):

server {

listen 443 ssl;

ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
}

You might want to redirect non-HTTPS traffic to your new HTTPS version; if so, add the following to the nginx config file as well:

server {
listen 80;
return 301 https://yourdomain.com$request_uri;
}
Start nginx again:

sudo service nginx start

If you have any problems starting nginx, run the following to debug the problem(s):

nginx -t

Setting up the basics

For each project;

  • AWS Route53 – for managing the DNS for each project
  • Bitbucket – to store the source code of each project
  • DeployHQ – to deploy code changes for each project on to its target server(s)
  • Google Analytics – to record visitor traffic to each project
  • Google Webmaster Tools / Search Console – to report on indexing, any 404s, any malware etc. on each project

Trying out Laravel Spark as a SaaS front end

Laravel Spark is an early alpha project from the creator of Laravel, Taylor Otwell.

Spark is designed to be a pre-built web application with user authentication, subscription plans, coupons, teams and roles.

These are the kinds of things found in a typical web-based application, and by using Spark a developer can save a lot of time and energy by not reinventing the wheel when implementing these features in a new project, focusing instead on the important stuff.

This sounds like an ideal use case for the projects I am developing as I don’t want to have to redevelop or even copy/paste these facilities for each project.

I tried it out earlier today on an Ubuntu VM; however, I couldn’t get it going. I received an error which others seem to have run into too, and an issue has been opened on GitHub.

I’m putting this on hold for now. Despite this particular error in what is still a beta product, I think it’s a useful tool, so I’ll wait a few days to see if the issue is closed on GitHub and try again then.

My side project plans for 2016

Over the past number of years, while starting and working at Murrion Software, I’ve developed many web applications for clients, either as an individual or as part of a team.

During that time, I’ve had a few ideas for useful web applications of my own and started a few side projects along the way, getting a basic MVP going, but I have never really given them any dedicated time.

In 2016, I want to put some dedicated time towards 3 web applications that I personally find useful and want to develop further. They are:


I’ve received some good feedback on them from early users so I believe there is a small market for them.

I’ll consider these projects a success if I can get 1 or 2 paying users for each project.

I’m also using this opportunity to learn a few things, including:

  • Use the newly released PHP 7
  • Move a couple of the apps I developed early versions of in CodeIgniter over to Laravel 5.1 LTS
  • Implement caching, queuing and load-balancing where appropriate
  • Deploy the above to AWS and gain more understanding of some of the AWS services, such as AWS’s API Gateway and AWS Lambda, and aim for AWS certification in the near future
  • Learn a little about online marketing, to help promote the 3 apps

Overall, programming is still a hobby I enjoy, even after starting a software company, and if all else fails, I’ll enjoy building out these 3 apps. Anyone can do 1!