Saving ZohoCRM reports to Google Drive – a roundabout journey

I wanted to write about an integration I created for a client recently, as a form of documentation to myself on how it works, should I ever need to repair or update it.

I also thought it was interesting to build since a couple of different APIs and services are involved along the way. The data takes quite a journey to get from ZohoCRM to Google Drive.

The aim was to get data from scheduled ZohoCRM Reports, sent via email with attached Excel documents, into a spreadsheet on Google Docs, so that overall summary information could be viewed by the relevant stakeholders.

All of the following was necessary because:

  1. ZohoCRM doesn’t provide a way to send Reports directly to Google Drive, either manually or automatically
  2. The kind of (summary) information provided in the Reports isn’t accessible via ZohoCRM’s API, at least not directly.

If ZohoCRM provided more data via their API, then it would have been possible to have one small script running in the background to periodically poll ZohoCRM for new data and add it to a Google Spreadsheet, without all of the “moving parts” that were needed in the end.

A summary of how it all works:

  1. Scheduled emails are sent from ZohoCRM to one of the members of the team.
  2. A rule was created on the mail server to automatically forward those emails to a virtual email inbox I created using Mailgun (http://www.mailgun.com/), with an address like reports@domain.com.
  3. Mailgun receives the email and POSTs the data to a URL I provided.
  4. This URL is a PHP script, running on an Ubuntu Digital Ocean server.
  5. This PHP script receives the data and stores it in a JSON file in a local folder. The data is parsed to get information on any attachments and those attachments are also stored in a local folder.
  6. A separate PHP script, scheduled to run periodically, scans the folder mentioned previously for any Excel files (xls or xlsx) and performs the following (a rough sketch of this script is shown after the list):
  7. The Excel files are copied to another folder for processing and removed from their original location, to avoid re-processing the same files during the next iteration.
  8. The Excel files are converted to CSV format using LibreOffice --headless (https://www.libreoffice.org/), meaning it runs from the command line without a user interface. (Strangely, I was able to push Excel files directly to Google Drive on a Windows-based server, but not on a Linux-based server, which I couldn't solve. It wasn't related to permissions or mime type encoding like I thought it would be. I opted for converting to CSV, which worked well.)
  9. The resulting CSV files are pushed to Google Drive and converted to Google Spreadsheet format (so that they don't count towards the owner's allowed storage space there) and given a title based on the name of the Excel file and the date the file was created. For example, 'my_report.xls' would be saved as 'My Report 20/01/2016'.
    The Excel files are stored locally in case they are needed in future for debugging purposes.
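
As a rough idea of what the scheduled conversion script looks like, here is a minimal sketch. The folder paths and the uploadCsvToDrive() helper (a thin wrapper around the Google Drive API client) are placeholders for illustration only, not the exact production code.

<?php
// A sketch of the scheduled script described in steps 6-9 above.
// The paths and the uploadCsvToDrive() helper are hypothetical.

$incoming   = '/var/reports/attachments'; // where the webhook script saves attachments
$processing = '/var/reports/processing';  // working folder for this run

foreach (glob($incoming . '/*.{xls,xlsx}', GLOB_BRACE) as $excelFile) {

    // Move the file out of the incoming folder so the next run doesn't pick it up again
    $workingCopy = $processing . '/' . basename($excelFile);
    rename($excelFile, $workingCopy);

    // Convert to CSV using LibreOffice in headless mode (no user interface)
    exec(sprintf(
        'libreoffice --headless --convert-to csv --outdir %s %s',
        escapeshellarg($processing),
        escapeshellarg($workingCopy)
    ));

    $csvFile = $processing . '/' . pathinfo($workingCopy, PATHINFO_FILENAME) . '.csv';

    // Title the Google Spreadsheet after the original file name and date,
    // e.g. 'my_report.xls' becomes 'My Report 20/01/2016'
    $title = ucwords(str_replace('_', ' ', pathinfo($excelFile, PATHINFO_FILENAME)))
           . ' ' . date('d/m/Y', filemtime($workingCopy));

    // Push the CSV to Google Drive, converting it to Google Spreadsheet format
    uploadCsvToDrive($csvFile, $title); // hypothetical helper wrapping the Drive API
}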

I thought all of the above was an interesting process to develop as I imagined the journey an email from ZohoCRM took to get to Google Drive as intended.

  • An email from ZohoCRM originated from one of their servers, presumably somewhere in the US.
  • The email landed in one team member's mailbox on a Microsoft-based mail server, presumably somewhere in Europe.
  • The mail server forwarded the email to Mailgun, who are based in San Francisco, so their server(s) are probably located over in the US somewhere.
  • Mailgun posted the data in the received email to a URL I provided, which is an Ubuntu server created using Digital Ocean, located in London.
  • The PHP code on the Digital Ocean server in London used Google’s Drive API to push the data to Google Drive, again probably hosted somewhere in the US.

Despite hopping from country to country a couple of times, an email sent from ZohoCRM ended up in a Google Spreadsheet just a few seconds later, no worse for wear.

How to back up your MySQL database from a Laravel 5.1 application to Amazon S3

The following steps are a summary of the instructions from https://github.com/backup-manager/laravel, specific to a Laravel 5.1 application that needs to back up a MySQL database to an AWS S3 bucket.

The backup-manager library uses mysqldump to perform the database dump and works well on larger databases too.

First, create your bucket on AWS and create an IAM user with suitable S3 bucket permissions.

In the project directory, install backup-manager using the following:

composer require backup-manager/laravel

Add the AWS S3 bucket provider using:

composer require league/flysystem-aws-s3-v3

Edit /config/app.php and add the following to the list of providers:

BackupManager\Laravel\Laravel5ServiceProvider::class,

Back on the command line, run the following in the project folder:

php artisan vendor:publish --provider="BackupManager\Laravel\Laravel5ServiceProvider"

Update the following config file with the AWS S3 bucket credentials:

/config/backup-manager.php

's3' => [
    'type' => 'AwsS3',
    'key' => '{AWSKEY}',
    'secret' => '{AWSSECRET}',
    'region' => 'eu-west-1',
    'bucket' => '{AWSBUCKETNAME}',
    'root' => '/database_backups/' . date("Y") . '/{APPLICATIONNAME}/',
],

The folder(s) in the ‘root’ section will be created automatically if needed.

Finally, from the command line or a cron schedule, use the following to initiate a backup (note that in a crontab, the % characters in the backtick date command need to be escaped as \%):

php artisan db:backup --database=mysql --destination=s3 --destinationPath=`date +%d-%m-%Y` --compression=gzip
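
As an alternative to a raw crontab entry for the backup command, it could also be scheduled from inside the application with Laravel 5.1's scheduler. A minimal sketch (the 2am time is just an example, and the usual single `* * * * * php /path/to/artisan schedule:run` cron entry is still required):

<?php
// app/Console/Kernel.php - a sketch of scheduling the backup with Laravel's scheduler

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected $commands = [
        // ...
    ];

    protected function schedule(Schedule $schedule)
    {
        // Run the gzip-compressed MySQL backup to the s3 destination every night at 2am
        $schedule->command('db:backup --database=mysql --destination=s3 --destinationPath=' . date('d-m-Y') . ' --compression=gzip')
                 ->dailyAt('02:00');
    }
}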

We now place HTTPS on all client web applications using Let's Encrypt

A normal step for us when developing and deploying web applications or APIs for our clients is to add HTTPS certificates to the finished application when it is deployed Live.

Putting certificates in place has a cost in both time and money as they typically need to be purchased from providers such as Comodo or Verisign and put in place by a developer.

Putting secure certificates in place is often frustrating for a developer: either an email address specific to the domain needs to be set up and a notification acknowledged by the domain owner, or in some cases DNS records are used to verify ownership. Both of those can take time to resolve.

From today we will be using Let's Encrypt to put HTTPS-only access in place on all client sites, including development work on staging servers.

Any new client projects will get certificates from the beginning of the project, and for existing client sites, Let's Encrypt certificates will be put in place instead of renewing with the existing certificate providers.

What is Let's Encrypt?

Let's Encrypt is a new certificate authority which entered public beta on December 3rd 2015, with major sponsors such as Mozilla, Cisco and Facebook.

Let's Encrypt is free, and since there is no cost for us to purchase the certificates, there is no cost to pass on to our clients.

For more information on Let's Encrypt, check out their site at https://letsencrypt.org/

If you are a developer and want to know how to install certificates, check out their "How it works" page at https://letsencrypt.org/howitworks/, which shows 3 easy steps to get up and running.

Some good reasons to have HTTPS only access to your website or application include:

  1. Security – without HTTPS, it's possible for cyber criminals to intercept data in transit to and from your site.
  2. Google Ranking – Google may place your site higher in their results if you have HTTPS access in place.
  3. Performance – the old claim that HTTPS access makes a site slower is no longer true (see The SSL Performance Myth).

Laravel Forge for creating and managing your PHP servers

I've tried a few different services to manage servers in the past, such as Command.io, and I've settled on Laravel's Forge for building and maintaining servers for clients and side projects, thanks to its ease of use, low cost and quick responses to any support tickets.

Forge can be used to create a server designed for PHP development on your choice of provider, whether it's Digital Ocean, AWS or a custom provider.

It will install nginx, PHP 7 (or 5.6), MySQL, Postgres and Redis, possibly faster than using Ansible and definitely faster than doing it yourself by SSH'ing in.

It's not specific to Laravel-based projects; it can be used to create servers to host any kind of PHP application. I've used it to host Laravel, Lumen, Slim, native PHP and even WordPress sites. This blog is hosted on Digital Ocean via Forge.

You pay a flat fee to Forge for its control panel, regardless of the number of servers, and the cost of any servers you create is invoiced as normal by your provider, such as AWS.

For me, the benefits of Forge are:

  1. Very quick to create a new PHP-ready server on Digital Ocean or AWS
  2. Adding sites will create the nginx virtual hosts automatically, including automatic www to non-www redirects.
  3. Forge also now supports Let's Encrypt, so it only takes 1 or 2 clicks to add SSL to your site.
  4. You can add 'apps' to your sites which connect GitHub to your server, so when you commit to your project it can auto-deploy to the server.

There are plenty of other features, and if you like video tutorials, a good place to see Forge in action is Laracasts.

One small complaint I have about Forge is that it doesn't support roll-backs of deployments to a server, but I think that may be saved for Laravel Envoyer, which I haven't tried out yet.

Also, when adding a new site such as "domain.com", it will also create the www version of the nginx virtual host for you, "www.domain.com". The problem is that if you add a site as "www.domain.com", it goes ahead and creates a virtual host of "www.www.domain.com" :)

My next step is to try out PHP 7 on a Forge-built server.

Adding HTTPS by Let's Encrypt to a site in 3 easy steps

I wanted to add HTTPS to this blog to try out the new Let's Encrypt authority, with the intention of using it for other web apps if it worked out well.

I've been a happy user of SSLMate for a number of months, as it's easy to implement from the command line with DNS entries rather than waiting for emails, and I didn't think Let's Encrypt could be easier.

Let's Encrypt – Apache configuration

Let's Encrypt's installer definitely worked out well! I ended up adding it to 4 sites in one sitting as it was so simple to do.

From the command line, type:

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
$ ./letsencrypt-auto

It detected the other virtual hosts on the same server and gave a menu of the sites I’d like to implement HTTPS on.

It even set up the (optional) redirects from HTTP to HTTPS.

My 3 in-development web apps and this blog are now all up and running with HTTPS in just a few minutes.

  1. https://serpp.net/
  2. https://mousejs.com/
  3. https://gitlogbook.com/
  4. https://www.gordonmurray.com/

Update: Let's Encrypt – Nginx configuration

After writing this post I needed to add SSL to an Ubuntu + Nginx configuration, which isn't as automated as the above Apache-based configuration.

If using AWS, make sure to open port 443 for HTTPS in the AWS Console before you begin.

Get Let's Encrypt:

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt

Stop Nginx for a minute:

sudo service nginx stop

Navigate to where Let's Encrypt was installed, for example /home/{username}/letsencrypt/, and type (changing www.domain.com to your own domain name):

./letsencrypt-auto certonly --standalone -d domain.com -d www.domain.com

If you haven't opened port 443, you'll get an error here.

Once the certs are issued, take note of their paths.

Update the server section of your website's nginx conf to include the following (changing www.domain.com to your own domain name):

server {
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/www.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.domain.com/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
}

You might want to redirect non-HTTPS traffic to your new HTTPS version; if so, add the following to the nginx config file also:

server {
    listen 80;
    server_name domain.com www.domain.com;
    return 301 https://www.domain.com$request_uri;
}

Start nginx again

sudo service nginx start

If you have any problems starting nginx, run the following to debug them:

nginx -t

Setting up the basics

For each of gitlogbook.com, serpp.net and mousejs.com:

  • AWS Route53 – for managing the DNS for each project
  • Bitbucket – to store the source code of each project
  • DeployHQ – to deploy code changes for each project on to its target server(s)
  • Google Analytics – to record visitor traffic to each project
  • Google Search Console (Webmaster Tools) – to report on indexing, any 404s, any malware etc. on each project

Trying out Laravel Spark as a SaaS front end

Laravel Spark is an early alpha project from the creator of Laravel, Taylor Otwell.

Spark is designed to be like a pre-built web application with user authentication, subscription plans, Coupons, Teams and Roles.

These are the kinds of things found in a typical web-based application, and by using Spark a developer can save a lot of time and energy: rather than reinventing the wheel by implementing these features in a new development project, they can focus instead on the important stuff.

This sounds like an ideal use case for the projects I am developing as I don’t want to have to redevelop or even copy/paste these facilities for each project.

I tried it out earlier today on an Ubuntu VM, however I couldn't get it going. I received an error which others seem to have run into too and have opened an issue about on GitHub.

I'm putting this on hold for now. Despite it being an early-stage product and this particular error, I still think it's a useful tool, so I'll wait a few days to see if the issue is closed on GitHub and try again then.

My side project plans for 2016

Over the past number of years, while starting and working at Murrion Software, I’ve developed many web applications for clients, either as an individual or as part of a team.

During that time, I’ve had a few ideas for useful web applications of my own and started a few side projects along the way, getting a basic MVP going, but I have never really given them any dedicated time.

In 2016, I want to put some dedicated time towards 3 web applications that I personally find useful and want to develop further.  They are:

  1. serpp.net
  2. mousejs.com
  3. gitlogbook.com

I’ve received some good feedback on them from early users so I believe there is a small market for them.

I'll consider these projects a success if I can get 1 or 2 paying users on each project.

I'm also using this opportunity to learn a few things including:

  • Use the newly released PHP 7
  • I developed early versions of a couple of the apps in CodeIgniter and I want to change those to Laravel 5.1 LTS
  • I want to implement caching, queuing and load-balancing where appropriate
  • I want to deploy the above to AWS and gain more understanding of some of the AWS services, such as AWS's API Gateway and AWS Lambda, and to aim for AWS certification in the near future.
  • I want to learn a little about online marketing, to help promote the 3 apps

Overall, programming is still a hobby I enjoy, even after starting a software company and if all else fails, I’ll enjoy building out these 3 apps. Anyone can do 1!

Update Google Spreadsheets with 3rd party APIs

I’ve been using Google Docs and Spreadsheets more and more over the past number of months and I wanted to see if I could easily add or update information in a Google Spreadsheet, using Google’s Spreadsheet API and also some 3rd party APIs.

I found a handy PHP library which does most of the work on Github:

https://github.com/asimlqt/php-google-spreadsheet-client

This library makes it easy to list spreadsheets, worksheets and rows of data, and to view, add and update them.

Before using this particular library though, you need to create an application on Google first, to get an access token that gives your application permission to manage the documents.

This project, also on GitHub, walks you through creating the application and getting the access token; it's easy to get up and running in a few minutes:

https://github.com/asimlqt/php-google-oauth

Once you have generated an access token using the 'php-google-oauth' library, you can then use the 'php-google-spreadsheet-client' library to access your spreadsheets from PHP and manipulate the data.

I created two quick examples to try out updating existing data in a spreadsheet. The first was to look up the gender of names in a spreadsheet using https://genderize.io/, a free gender lookup API.
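
The Gist has the full code, but a minimal sketch of the idea looks something like this. The spreadsheet title, worksheet title and the 'name'/'gender' column headers are assumptions for illustration, and the access token is the one generated with the 'php-google-oauth' steps above.

<?php
// Sketch: fill in a 'gender' column using genderize.io.
// Spreadsheet/worksheet titles and column names below are hypothetical.

require 'vendor/autoload.php';

use Google\Spreadsheet\DefaultServiceRequest;
use Google\Spreadsheet\ServiceRequestFactory;
use Google\Spreadsheet\SpreadsheetService;

$accessToken = 'ya29.xxxx'; // token generated via the php-google-oauth example

ServiceRequestFactory::setInstance(new DefaultServiceRequest($accessToken));

$worksheet = (new SpreadsheetService())->getSpreadsheets()
    ->getByTitle('Names')   // hypothetical spreadsheet title
    ->getWorksheets()
    ->getByTitle('Sheet1'); // hypothetical worksheet title

foreach ($worksheet->getListFeed()->getEntries() as $entry) {
    $row = $entry->getValues();

    if (empty($row['gender'])) {
        // Ask genderize.io for the most likely gender of this first name
        $result = json_decode(file_get_contents('https://api.genderize.io/?name=' . urlencode($row['name'])), true);

        if (!empty($result['gender'])) {
            $row['gender'] = $result['gender'];
            $entry->update($row); // write the row back to the spreadsheet
        }
    }
}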

Gender source code in a Gist

Here’s a little animation of the result:

[Animation: Automatically add a Gender]

Another example was to use http://fixer.io/ to look up the current exchange rate for a list of Euro values and return the current USD value.
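
Again, the Gist has the full version. A minimal sketch, assuming a spreadsheet with 'eur' and 'usd' column headers and reusing the same library setup as the gender example:

<?php
// Sketch: convert a column of Euro values to US Dollars using fixer.io.
// Spreadsheet/worksheet titles and column names below are hypothetical.

require 'vendor/autoload.php';

use Google\Spreadsheet\DefaultServiceRequest;
use Google\Spreadsheet\ServiceRequestFactory;
use Google\Spreadsheet\SpreadsheetService;

$accessToken = 'ya29.xxxx';
ServiceRequestFactory::setInstance(new DefaultServiceRequest($accessToken));

// Fetch the current EUR -> USD rate once, then apply it to every row
$rates = json_decode(file_get_contents('http://api.fixer.io/latest?base=EUR&symbols=USD'), true);
$eurToUsd = $rates['rates']['USD'];

$worksheet = (new SpreadsheetService())->getSpreadsheets()
    ->getByTitle('Currencies') // hypothetical spreadsheet title
    ->getWorksheets()
    ->getByTitle('Sheet1');

foreach ($worksheet->getListFeed()->getEntries() as $entry) {
    $row = $entry->getValues();
    $row['usd'] = round((float) $row['eur'] * $eurToUsd, 2);
    $entry->update($row); // write the converted value back to the spreadsheet
}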

Currency source code in a Gist

Here’s a little animation of the result:

[Animation: Auto look up Currency conversion]

How to migrate an EC2 instance from one AWS account to another

You might want to "copy and paste" an existing EC2 instance from one AWS account to another, for example to go from Development to Production if you used different accounts. Here's a step-by-step guide.

One potential ‘gotcha’

If you created your existing EC2 instance from a community AMI (such as creating a WordPress instance from a Bitnami AMI), then you might have trouble doing this transfer, as some AMIs require you to accept terms and conditions, which to my knowledge can't be done while going through this process of creating and copying your own AMI. You'll run into an un-passable alert when you try to launch an instance from the AMI in your new AWS account.

In this case, your only option is to create the new instance in your new AWS account in the same way you created the first one, by launching the new instance from the original community AMI, then bringing over any files, databases, users etc. by other means that are outside of this blog post.

Before you start (preferably a few hours before you need to make the switch-over)

  1. Update your existing DNS settings to have low TTL values; this will allow changes you make to DNS records later on to kick in quickly. If you are using Route53 for your DNS settings, set all of your TTL values to 60 seconds (1 minute).
  2. Write down the AWS account IDs of both accounts for later use, mainly the 'new' AWS account. They are available in the AWS "My Account" section.

The steps..

  • Log in to your existing/old AWS account and go to EC2. Select your existing EC2 instance. Click Actions, Image, Create Image and fill in the form with a name and description. If you have several different EC2 instances to copy over, then you’ll need to do this several times.
  • Go to the AMIs section in the EC2 menu and you should see your new AMI. Select it and click Actions, Modify Image Permissions. Here you will need to enter the account ID of your new AWS account to share the AMI.
  • Log out of the old AWS account and log in to the new AWS account and click on EC2, AMIs so you are in the same place as the last step, just in the newer account.
  • In the Filter/Search panel, the AMIs will default to showing "Owned by Me". Change this to "Private Images" and the AMI(s) you created earlier will be visible.
  • To create a new EC2 instance from the AMI, select the AMI you want to use and click Actions, then Launch.
  • You'll need to go through several screens here, related to picking the EC2 instance type, assigning it to a VPC, assigning or creating public/private keys, storage space and so on. At the end you'll be able to Review and Launch.

At this point, you now have a new server, the same as your existing server, in another AWS account. If you want to direct traffic to this server, you’ll need to update your DNS records.

  • In your new AWS account, go to EC2 and you'll see your new instance(s). Click on them and note the public IP address; you'll need this next.
  • If you are keeping your DNS settings in the old/existing AWS account, then log in to the old/existing AWS account, go to Route53 and update the A record, adding in the IP address of the new instance on your new AWS account.

If you are also moving the management of the domain name from the old Route53 account to your new Route53 account, then you’ll need to copy over all of the DNS records from old to new.

  • Log in to your old AWS account, go to Route53. Select the hosted zone you want to manage to view all of its DNS records. Zoom out a little with your browser using CTRL -/+ until all DNS records are visible.
  • Highlight them all and press Copy (CTRL + C). Paste these in to Notepad, Excel or Google Spreadsheets for safe keeping.
  • Log in to your new AWS account and go to Route53. Create a new hosted zone for the domain name and start to manually create the DNS records copied from your old account. You won't need to create the SOA or NS DNS records.
  • There is a way to import the DNS records in a specific BIND format, but you'll need to get familiar with the AWS command line for this, which is outside the scope of this blog post.
  • Once you have your DNS records created, they should be the same as the old Route53 account for that domain name, except for the SOA and NS records. Go to the EC2 section and note the Public IP address of the new instance you created.
  • Update the A name record to change the IP address to your new IP address.
  • Finally, note the 4 nameserver entries in the NS record. If you registered your domain name somewhere else originally, such as godaddy.com or namecheap.com for example, you will need to log in to your control panel there and update the nameservers from the registrar's own nameservers (probably 2 of them) to the 4 nameservers from your AWS account in the previous step.