How to migrate an EC2 instance from one AWS account to another

You might want to “copy and paste” an existing EC2 instance from one AWS account to another, for example to move from a Development account to a Production account. Here’s a step by step guide.

One potential ‘gotcha’

If you created your existing EC2 instance from a community AMI (such as creating a WordPress instance from a Bitnami AMI) then you might have trouble with this transfer, as some AMIs require you to accept terms and conditions, which to my knowledge can’t be done while going through this process of creating and copying your own AMI. You’ll run into an impassable alert when you try to launch an instance from the AMI in your new AWS account.

In this case, your only option is to create the new instance in your new AWS account the same way you created the first one, by launching it from the original community AMI, then bringing over any files, databases, users etc. by other means that are outside the scope of this blog post.

Before you start (preferably a few hours before you need to make the switch-over)

  1. Update your existing DNS settings to have low TTL values. This will allow changes you make to DNS records later on to kick in quickly. If you are using Route53 for your DNS settings, set all of your TTL values to 60 seconds (1 minute).
  2. Write down the AWS account IDs of both accounts for later use, mainly the ‘new’ AWS account. They are available in the AWS “My Account” section.

The steps..

  • Log in to your existing/old AWS account and go to EC2. Select your existing EC2 instance. Click Actions, Image, Create Image and fill in the form with a name and description. If you have several different EC2 instances to copy over, then you’ll need to do this several times.
  • Go to the AMIs section in the EC2 menu and you should see your new AMI. Select it and click Actions, Modify Image Permissions. Here you will need to enter the Account ID of your new AWS account to share the AMI.
  • Log out of the old AWS account and log in to the new AWS account and click on EC2, AMIs so you are in the same place as the last step, just in the newer account.
  • In the Filter/Search panel, the AMIs will default to showing “Owned by Me”. Change this to “Private Images” and the AMI(s) you created earlier will be visible.
  • To create a new EC2 instance from the AMI, select the AMI you want to use and click Actions, then Launch.
  • You’ll go through several screens here to pick the EC2 instance type, assign it to a VPC, assign or create a key pair, set storage space and so on. At the end you’ll be able to Review and Launch.

At this point, you now have a new server, the same as your existing server, in another AWS account. If you want to direct traffic to this server, you’ll need to update your DNS records.

  • In your new AWS account, go to EC2 and you’ll see your new instance(s). Click on them and note the Public IP address; you’ll need this next.
  • If you are keeping your DNS settings in the old/existing AWS account, then log in to it, go to Route53 and update the A record, adding the IP address of the new instance in your new AWS account.

If you are also moving the management of the domain name from the old Route53 account to your new Route53 account, then you’ll need to copy over all of the DNS records from old to new.

  • Log in to your old AWS account, go to Route53. Select the hosted zone you want to manage to view all of its DNS records. Zoom out a little with your browser using CTRL -/+ until all DNS records are visible.
  • Highlight them all and press Copy (CTRL + C). Paste these into Notepad, Excel or Google Spreadsheets for safe keeping.
  • Log in to your new AWS account and go to Route53. Create a new hosted zone for the domain name and start to manually create the DNS records copied from your old account. You won’t need to create the SOA or NS records.
  • There is also a way to import the DNS records in a specific BIND format. You’ll need to get familiar with the AWS command line for this, which is mostly outside the scope of this blog post, though there is a small sketch after this list.
  • Once you have your DNS records created, they should be the same as in the old Route53 account for that domain name, except for the SOA and NS records. Go to the EC2 section and note the Public IP address of the new instance you created.
  • Update the A record to change the IP address to your new IP address.
  • Finally, note the 4 Nameserver records in the NS record. If you registered your domain name somewhere else originally, such as godaddy.com or namecheap.com for example, you will need to log in to your control panel there and update the Nameservers from their own nameservers (probably 2 of them) to the 4 nameservers from your AWS account in the previous step.
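
The AWS command line works with JSON change batches rather than BIND files directly, but as a rough sketch (the hosted zone ID, record name and IP address below are all placeholders), an A record can be created or updated like this:

aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXX --change-batch file://change.json

Where change.json contains something like:

{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.domain.com.",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [ { "Value": "203.0.113.10" } ]
      }
    }
  ]
}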


How to prevent ‘GoogleBot’ from repeatedly taking your WordPress site offline

If your website is powered by WordPress and seems to go offline at different times for no apparent reason, it’s possible that an automated system is calling one or more URLs on your WordPress site frequently enough to overwhelm the server.

This can happen as part of an automated process looking for vulnerabilities on WordPress-based sites. It can occur frequently enough to leave your web server too busy to handle normal user requests, taking up so much CPU or memory that your site goes offline.

Rebooting the server can help, but if you find yourself on the receiving end of this, it won’t be long after a reboot until it starts again and your site goes down once more.

The following assumes you are using a typical LAMP stack to host your WordPress site, consisting of the CentOS operating system, the Apache web server, MySQL and PHP.

To find out if this is happening on your server, your first step is to look at your access logs to see a list of recent visitors to your site. This sounds easy, but if your server is in the middle of being overwhelmed with requests, then your access to the server will be slow or impossible.

To get around this, the only solution is to reset your server if you have this option in your hosting provider’s control panel, then SSH in to the server as fast as you can and temporarily stop Apache from handling incoming requests using:

sudo service httpd stop

This will stop the web server from handling new requests, but there may still be several in memory. To clear them, use the following:

sudo pkill httpd

At this point your website will be offline, but you should have a stable system to use to diagnose the problem.

The next step is to see if you have an access.log file, which will show you recent visits to your site. The location of this file will depend on how your site is set up, but typically it will be in the root of your website, for example at /var/www/www.domain.com/access.log. The following find command may help locate it:

find / -name access.log

Once you find the file, change to that folder and type the following to see the last few lines of the file:

tail access.log

Or you can be more specific and type:

grep -i googlebot access.log

In the output, check to see if “Googlebot” appears many times in a short space of time, based on the timestamps.
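
If there are a lot of requests, it can help to count them per IP address. Here is a rough one-liner for that, assuming the common combined log format where the client IP is the first field on each line:

grep -i googlebot access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head

This prints the busiest IP addresses claiming to be Googlebot, most active first.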

Viewing a server that’s under pressure, using New Relic

It’s worth mentioning that this is probably not the real Google attacking your site, but some other automated process calling itself Googlebot. The real Googlebot is usually a good thing to see in your logs, since it means Google is indexing your site.

If you see a lot of mentions of ‘Googlebot’ in your access log, then look for the IP address making those requests, usually at the start of the same line.

To verify if the IP address that is contacting your site actually belongs to Google, use the following to do a reverse lookup:

host [IP.ADDRESS.HERE]

In the response text, you should see some mention of Googlebot. Google’s crawler documentation has more information on what you might see in the response.
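
For example, a lookup on an address from one of Google’s real crawler ranges typically returns something like this (the exact hostname will vary):

host 66.249.66.1
1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.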

Edit: I’ve been told that the ‘host’ lookup above can be fooled easily enough, so a more thorough way to determine the owner of the IP address is to use ‘whois’ for a more authoritative source of information:

whois [IP.ADDRESS.HERE]

If you don’t see Googlebot in the response, then this is something else pretending to be Google, and you should block it from accessing the site.

To block the IP address using iptables, use the following:

sudo iptables -A INPUT -s [IP.ADDRESS.HERE] -j DROP

There might be more than 1 IP address to block, so check your access.log again just in case.

If you are worried that you might have blocked a legitimate IP address, you can use the following command to view blocked IP addresses:

sudo iptables -L INPUT -v -n
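
One thing to keep in mind is that iptables rules added this way are lost when the server reboots. On CentOS 6 you can persist the current rules with:

sudo service iptables save

Other distributions have their own mechanisms for this, such as the iptables-persistent package on Debian/Ubuntu.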

To remove an IP address from your list so that it can access your site again:

sudo iptables -D INPUT -s [IP.ADDRESS.HERE] -j DROP

Of course, it’s possible for someone or something to keep putting pressure on your site from different IP addresses, but there is a cost related to that (unless the person has many zombie machines acting on his or her behalf) and they’ll run out of IP addresses eventually :)

Once the offending addresses are blocked, make sure to start the Apache web server again (sudo service httpd start) so that normal visitors can access your site.

A calm server again


By performing the above steps, you’ll have blocked any IP addresses falsely identifying themselves as Googlebot, but you won’t have blocked the real Googlebot itself, so there won’t be any negative side effects for your SEO.

If it turns out that Googlebot itself, or some other legitimate bot, is overwhelming your site, you could place a directive in your robots.txt file to ask it not to visit your site so often, using the following (worth noting: Google itself ignores Crawl-delay, though some other crawlers honour it):

User-agent: *
Crawl-delay: 30

I haven’t tried it, but I’ve been told that Wordfence is a good WordPress plugin that can help with the above too.


Create time sheet entries on Teamwork Projects from your code commits

I’m a big fan of adding on to Teamwork Projects (Full disclosure: That’s a referral link) in any way I can to make our day to day development work a little easier.

In the past I developed and wrote about how we use Teamwork Projects as an issue tracker, by showing Task IDs on all tasks in Teamwork and then allowing developers to automatically complete tasks (or reassign them to QA for testing) by including those Task IDs in their commit messages.

I also wrote about how we used Teamwork Projects as a support system (this was before the new Teamwork Desk!) where emails sent to our support email address went directly into Teamwork, and we used some 3rd party APIs to perform sentiment analysis on the text to determine whether the incoming support question should be Low, Medium or High priority.

One addition I have wanted to make to Teamwork Projects for a long time now is the ability to log time against a completed task.

This is already possible using the ‘Time’ section of Teamwork to log time against an active or completed task, but I wanted to make it easier for a developer to log time without breaking the flow of developing and committing.

Over the weekend, I put some time aside to finally put this in place. Now when a developer on our team (including myself) is working on a task and makes a code commit, we make sure to include the Task ID as normal, and also a time in minutes, for example:

[Finish 12345:30] commit message here

This commit message is sent to a repository on Bitbucket, and using a Bitbucket webhook it also performs a few actions behind the scenes on Teamwork using their APIs:

  • It reassigns the task number ‘12345’ to the project manager (or whoever created the task in the first place) and labels the task title as [Ready to Test].
  • The details of the code change are added as comments in to the Task for the PM or client to see.
  • The ‘:30’ part of the commit message creates a new time sheet entry in the Time section of Teamwork and fills in all of the relevant details, such as the task that has been completed, the developer responsible, the developer’s commit message and a list of any files added/edited/deleted.
  • If the Time section of Teamwork happens to be turned off (we often have the Time + Billing sections turned off in Teamwork, as we use other software for that), the Time section will be turned on automatically for the project.

A Bitbucket commit message with time
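
For anyone curious, the heart of the webhook script is just a regular expression and an API call. The following is a rough sketch only; the Teamwork endpoint path and field names are from memory, so check their API documentation before relying on it (yourcompany.teamwork.com and YOUR_API_KEY are placeholders):

<?php
$commit_message = '[Finish 12345:30] commit message here';

// Pull the task ID and the minutes out of the commit message.
if (preg_match('/\[Finish (\d+):(\d+)\]/i', $commit_message, $matches))
{
    $task_id = $matches[1]; // 12345
    $minutes = $matches[2]; // 30

    $time_entry = array('time-entry' => array(
        'description' => $commit_message,
        'minutes'     => $minutes,
        'date'        => date('Ymd'),
    ));

    // Teamwork authenticates with your API key as the basic auth username.
    $ch = curl_init('https://yourcompany.teamwork.com/tasks/' . $task_id . '/time_entries.json');
    curl_setopt($ch, CURLOPT_USERPWD, 'YOUR_API_KEY:x');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($time_entry));
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
}
?>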

There are a few reasons for this. Mainly it is to give our clients a clear record of time spent on tasks.

We’ll also use it to make sure that our initial estimates are accurate over time. We can add estimates to Tasks and now also clearly see how long they took, all within Teamwork.

Knowing me, I will probably take this information further in future, so that when a brand new task is added to Teamwork for a new or existing client, it will look for similar tasks in our recent history and use the completed time information on those tasks to estimate the new one.


A completed Time sheet entry on Teamwork

How I automate sharing content to Linkedin using Aylien’s content analysis API and Browsershot

My goal here is to keep my Linkedin profile active in an automated way. The end result is that content related to PHP, which I have a lot of interest in, is posted to Linkedin, into relevant Groups or on my Linkedin status page, often a few times a day during working hours.

Here is my Linkedin profile if you’d like to take a look at the automated posts, and to connect with me too of course! http://ie.linkedin.com/in/gordonmurray

The following is a small application I put together using PHP to gain experience with Linkedin’s API, along with Browsershot to take screenshots of any URL, and Aylien’s API to extract some useful information such as a title, summary and some hash tags from a single webpage URL.

I have added the source code to Github if you would like to implement this yourself: https://github.com/murrion/php_post_to_linkedin

The starting point is just a URL to a web page article I find interesting and want to share. The end result is a new ‘Share’ on my own Linkedin profile and/or a post to a Linkedin Group, with a prefilled Title, description, a small number of hash tags and a screenshot extracted from the original URL, all automatically.

If you want to replicate this process, you will need to create a Linkedin Application and sign up for an Aylien API key. Both are free to use.

The workflow of this application is the following:

  1. Look for any new links posted to the PHP subreddit at https://www.reddit.com/r/php
  2. Check to make sure it isn’t a link I have shared on Linkedin before by using a local log of links shared
  3. Generate a screenshot of the URL using Browsershot. (If the image is too small in byte size, it means that Browsershot didn’t work for some reason, and a generic PHP image is used instead.)
  4. Send the URL to Aylien, to see if their API can summarize the content of the page and also discover some potential hash tags.
  5. If the title of the URL being shared (determined from Aylien’s API) contains PHP, Codeigniter or Laravel, post the content to one of the relevant Groups on Linkedin that I am a member of. If one of those words isn’t in the title, just post it to my normal status page on Linkedin.
  6. Then, update the local list of links already shared.

Here is an image of the end result. This is an article shared to a Laravel group that I am a member of on Linkedin.


The title, hash tags and brief description are all generated automatically using Aylien’s Article Extraction and Hashtag Suggestion API calls.
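
As a rough sketch of those two calls (the endpoint paths and header names are from Aylien’s Text API documentation as I remember it, so double-check them; YOUR_APP_ID and YOUR_APP_KEY are placeholders):

<?php
$url_to_share = 'http://example.com/some-php-article';

// Aylien's Text API authenticates with two request headers.
$context = stream_context_create(array('http' => array('header' =>
    "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID\r\n" .
    "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY\r\n"
)));

$article  = json_decode(file_get_contents('https://api.aylien.com/api/v1/extract?url=' . urlencode($url_to_share), false, $context), true);
$hashtags = json_decode(file_get_contents('https://api.aylien.com/api/v1/hashtags?url=' . urlencode($url_to_share), false, $context), true);

$title = $article['title'];
$tags  = array_slice(array_unique($hashtags['hashtags']), 0, 3); // keep just 3, as mentioned below
?>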

It’s worth mentioning that one of Aylien’s API endpoints is capable of extracting images from a URL source, but I found that it didn’t return an image every time, and even when it did, the image wasn’t necessarily relevant to the content of the page. Instead I opted for taking a screenshot of the target URL.

The screenshot image is generated using Browsershot, which uses PhantomJS to take a screenshot of the URL.
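
For reference, here is roughly how that step looks with a current version of the spatie/browsershot package; note that newer releases have moved from PhantomJS to Puppeteer/headless Chrome, so this differs a little from the version I used at the time:

<?php
require 'vendor/autoload.php';

use Spatie\Browsershot\Browsershot;

// Save a screenshot of the article we are about to share.
Browsershot::url('http://example.com/some-php-article')->save('screenshot.png');
?>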

Aylien’s Hashtag suggestion tool is very good. In one of my earlier attempts, I didn’t limit it in any way and it posted 24 hash tags in a single post. Nothing wrong with that, but I wanted to keep it short, so I limited later versions to just 3 hash tags, as well as removing duplicates and avoiding hash tags longer than 10 characters.


One of the more recent automatic posts to Linkedin, complete with a short number of hash tags, a title, some summary text and a screenshot of the site.


Useful links

Linkedin’s API documentation: https://developer.linkedin.com/apis
Aylien’s API documentation: http://aylien.com/text-api-doc
Browsershot on Github : https://github.com/freekmurze/browsershot

How to send and receive emails with Mandrill

https://mandrill.com/ is an email delivery system which is ideal for use in web applications to send and receive emails. Mandrill is developed by Mailchimp, the company behind the very successful email marketing software at http://mailchimp.com/

In the past, I wrote a quick guide, with code samples, on how to use Mandrill to send email templates designed in Mailchimp.

Sending emails with Mandrill

Once you log in to your Mandrill account, click on ‘Settings’. This page shows you the SMTP credentials you will need to begin sending emails from your web application.

If you don’t see an SMTP password, you’ll need to generate an API key using the button below the existing SMTP details. This API key becomes your SMTP password.

Take a note of the host, port, SMTP username and SMTP password to use in your web application.

In your web application, update your SMTP details with the details from above and you’re ready to send.
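
As an example, here is a minimal sketch of sending through Mandrill’s SMTP servers using the PHPMailer library (the library choice is mine; any SMTP-capable mailer will work the same way):

<?php
require 'vendor/autoload.php';

use PHPMailer\PHPMailer\PHPMailer;

$mail = new PHPMailer(true);
$mail->isSMTP();
$mail->Host     = 'smtp.mandrillapp.com';
$mail->Port     = 587;
$mail->SMTPAuth = true;
$mail->Username = 'you@yourdomain.com'; // your Mandrill SMTP username
$mail->Password = 'YOUR_API_KEY';       // the API key from Settings acts as the password

$mail->setFrom('you@yourdomain.com');
$mail->addAddress('recipient@example.com');
$mail->Subject = 'Test from Mandrill';
$mail->Body    = 'Hello from my web application.';
$mail->send();
?>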

Mandrill automatically adds any domain you use for sending through Mandrill to its control panel. Mandrill also automatically adds authentication to all messages sent through their servers, but adding SPF and DKIM records for your sending domain(s) is strongly recommended for better deliverability. You can do this by logging in to Mandrill and going to Settings->Domains to view and Test the DKIM and SPF instructions.

Receiving emails with Mandrill

To allow Mandrill to receive email on your behalf, log in to Mandrill and go to ‘Inbound’. If it is your first time using it, you will be asked to add a domain name first.

Once you add your domain, you will need to add 2 MX records to your domain name, to allow Mandrill to receive emails on your behalf. Click ‘DNS settings’ for Mandrill to show you the specific MX records to add. They will be in the format:

xxxxxxx.in1.mandrillapp.com
xxxxxxx.in2.mandrillapp.com

How you add these MX records depends on where you originally purchased your domain name. Unfortunately, it’s outside the scope of this guide to give specific instructions for every hosting company, so in your own domain control panel, look for a DNS Settings section and add your MX records there. Once you have added your MX records, allow an hour or more for them to take effect, and then you can use the ‘Test’ button in Mandrill to make sure they are in place.

You can also use the useful site http://mxtoolbox.com/ to check that your MX records are in place.
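
Or check from the command line with dig:

dig mx mydomain.com +short

This should list the two Mandrill servers once the records have propagated.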

But what if I already have email DNS Records, such as Google apps set up on my domain?

This is a common scenario. It is likely that you already have some MX records in place for your business or personal emails, and Mandrill’s new MX records won’t mix well with them.

To get around this, the solution is to create a new sub domain and add the MX records to that sub domain instead of the main domain name, so Mandrill doesn’t interfere with your regular emails.

For example, your normal emails could be using addresses such as @domain.com. If you want to leave that alone for your personal or business emails, then set up a sub domain such as @app.domain.com. Notice the word ‘app’ that is now in the email address. The word can be anything you like; it is just a label of sorts to separate your normal emails from those that will be sent/received by your web application.

Once your MX records are in place..

The MX records allow Mandrill to receive emails. The next step is to tell Mandrill what to do with those emails. The solution is to use ‘Routes’, which allow you to tell Mandrill where to send the emails it receives.

To create a new route, or edit existing ones, click on ‘Inbound’ and then click on ‘Routes’ next to the domain name you added earlier.

When adding a new route, you will be asked for the email address that you would like people to be able to email and also the URL to post to.

You can set the email address to ‘*’ to allow it to receive all emails to the domain name you have added, or else specify a particular email address if you prefer, such as ‘incoming@app.domain.com’.

You will then need to tell Mandrill the URL to send your email to. A useful site to see what information Mandrill will send to your application is http://requestb.in. This site gives you a temporary URL which you can add as a route, then tell Mandrill to send some test emails to it to see them.

Once you are ready, set the URL to something in your web application, such as: http://mydomain.com/receive_mandrill_email.

How you process the incoming email depends on the programming language you are using.

Here is some PHP/Codeigniter code you can use as a starting point to receive incoming email.

https://github.com/murrion/mandrill-receive-email
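
If you are not using Codeigniter, the core of it is small. Mandrill’s inbound webhook POSTs a form field called mandrill_events containing a JSON array of received messages, so a rough sketch looks like this:

<?php
// Each event carries the parsed message under the 'msg' key.
$events = json_decode($_POST['mandrill_events'], true);

foreach ($events as $event)
{
    $subject = $event['msg']['subject'];
    $from    = $event['msg']['from_email'];
    $body    = $event['msg']['text'];
    // ... store or process the email here
}
?>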


Extract useful information from notification emails

TL/DR: Extract relevant information from notification emails and add them to your Analytics or CRM system to follow up on.


Linkedin Notification Email

The above image is a typical email notification from Linkedin to let you know that someone has clicked the Like button on one of your recent posts. If you wanted to add this person’s details to your CRM system so that you or one of your co-workers might follow up with them in the future, or if you wanted to gather some analytics data on successful posts, it is possible to extract this information easily and automatically from the email.

In some cases, an email notification such as this one is the only way to gather the information you need, as the service sending you the notification may not have an API or any other way to export data, so parsing the email might be your only alternative.

The first step is that the notification emails need to go somewhere. This can be a dedicated email address set up specifically to receive email notifications, or you could set up a forwarding rule in your inbox to automatically forward any incoming notification emails to a dedicated email address.

The next step is to use an email service such as MailGun to create that dedicated email address, so that you can set up an address such as notifications @ mydomain dot com. When MailGun receives an email to your chosen address, you can create a Route to POST the content and headers of the email to your own application to store or parse.

For example, here is a Route you might set up in MailGun to forward on your notification email:

match_recipient("notifications@mydomain.com") forward("http://mydomain.com/mailgun/receive_email")

On your website, you would need to create a function or script that is ready to receive the information from mailgun:

<?php
// Mailgun POSTs the parsed email as regular form fields, so $_POST has everything we need.
$posted_data = $_POST;
?>

You could use this PHP snippet to retrieve the incoming information (don’t forget to sanitize it first!) and log it, or process it on the fly.

Your next step would be to parse the incoming content from Mailgun. Mailgun sends on plenty of information, and you probably don’t want all of it.

The kind of parsing you do will depend on the content of the email you are working on. If you are going to receive different emails from different sources then you might want to create functions to parse several different email templates and use the email subject line as the marker to know which template to use.

To retrieve the subject of the email that Mailgun has POSTed to your application, use:

<?php
$subject = $posted_data['subject']; // Mailgun POSTs the parsed subject line as a 'subject' field
?>

Then depending on your subject line, you could send the email content to a function you have prepared to parse the data out of that template.

<?php
$content = $posted_data['body-plain']; // the plain text body of the email

switch ($subject)
{
    case "Joe Bloggs likes your update":
        // Parse the content of a Linkedin notification email
        $parsed_email_content = $this->parse->linkedin_email($content);
        break;
}
?>
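
The parsing function itself is mostly string work. Here is a hypothetical version of that linkedin_email parser, keyed off the subject line above:

<?php
function parse_linkedin_email($subject, $content)
{
    $data = array('source' => 'Linkedin');

    // "Joe Bloggs likes your update" -> capture the name before the verb
    if (preg_match('/^(.+?) likes your update/', $subject, $matches))
    {
        $data['name'] = trim($matches[1]);
    }

    // $content could be parsed further here for the update text, date, etc.
    return $data;
}
?>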

If you can parse the email content, you would end up with a nice structured array of data from the body of the email, ready to save directly into your CRM package, such as Salesforce, or your analytics package:

array(
    'source' => 'Linkedin',
    'name' => 'Joe Bloggs',
    'Image' => XXX,
    'Update_liked' => YYY,
    'Date' => '2014-01-01'
);


You could go a little further too, and determine the person’s gender based on their name so that you have further information to store.

Start turning email addresses in to useful information – part 1

TL/DR – Take an email address such as joebloggs23423@hotmail.com, and turn it into useful information such as: First name: Joe, Surname: Bloggs, Gender: Male.

If you have developed a web application which allows users to sign up using a username or an email address, if you maintain a newsletter subscriber list, or if you are capturing email addresses from users on landing pages, you may want to expand on your existing information to determine a user’s full name, gender, location and age, so that you can better tailor your web application or newsletter to your particular audience, or just have more complete information in your CRM system. This is a brief guide to show you how to get started and turn a username or email address into a name and gender.

There are a number of APIs to help with this; here are a couple of useful ones which are free to use.

Determine a person’s full name from a username or email address

This is using an API from FullContact called Name Deducer. To use the API, sign up for a free API key and then send a query to their system like the following snippet (change the email address/username to the one you want to look up, and the apiKey parameter to your own API key provided by FullContact):

<?php
$url = 'https://api.fullcontact.com/v2/name/deducer.json?email=joebloggs23423@hotmail.com&apiKey=XXXXXXXXXXXXXXXXXX';
$response_json = file_get_contents($url);
?>

You will receive a JSON response:

{
  "status": 200,
  "likelihood": 0.68,
  "requestId": "5d80a469-60bc-4452-b999-81e353a4f18e",
  "nameDetails": {
    "givenName": "Joe",
    "fullName": "Joe Bloggs"
  }
}

From the above output, you can pull out the ‘fullName’ value and update your database. You could also look at the ‘likelihood’ value to determine how confident the FullContact API is about this user’s name.
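
For example, a quick sketch of pulling those values out in PHP:

<?php
$response = json_decode($response_json, true);

if ($response['status'] == 200)
{
    $full_name  = $response['nameDetails']['fullName']; // "Joe Bloggs"
    $likelihood = $response['likelihood'];              // 0.68
}
?>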

Determine a user’s gender from their first name

Once you have a user’s name, the next step is to use another API, from http://genderize.io/, to determine gender.

<?php
$url = 'http://api.genderize.io?name=Joe';
$response = file_get_contents($url);
?>

Again, you will receive a JSON response:

{"name":"Joe","gender":"male","probability":"0.98","count":1604}

From this output you can see that Joe is determined to be male with a probability of 0.98 out of 1. The API is free, but limited to 2500 requests/day, which isn’t bad. You could get a little more mileage by caching results locally, so you never send the same name more than once.
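
For example, a very simple file-based cache (a minimal sketch; the file name and structure here are arbitrary):

<?php
$cache_file = 'gender_cache.json';
$cache = file_exists($cache_file) ? json_decode(file_get_contents($cache_file), true) : array();

$name = 'Joe';
if (!isset($cache[$name]))
{
    // Only call the API for names we have not seen before.
    $result = json_decode(file_get_contents('http://api.genderize.io?name=' . urlencode($name)), true);
    $cache[$name] = $result['gender'];
    file_put_contents($cache_file, json_encode($cache));
}

$gender = $cache[$name];
?>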

What if I have users in different countries?

It’s also possible to determine the gender of a name, and additional details, using FullContact’s other ‘Name Stats’ API call. However, I included Genderize here because FullContact’s data is US-specific, which might not be ideal here in Europe. Genderize allows you to specify a country code, so in theory it will return the more appropriate gender for that country. If you don’t have the user’s country, then hopefully you are capturing the user’s IP address, and you can determine the country from that using http://dev.maxmind.com/.

As a result of the above, you now have an email address, full name and gender.

In a future post, I will talk about how to expand this information further.

APIs we have worked with

APIs related to Payments

Stripe, Paypal, Worldpay, PayMill, Realex Payments

APIs related to Communications

Twilio, Plivo, Webtext, Esendex, Blackstone, MailGun, Sendgrid, Mandrill

APIs related to Accounting/Payroll

BulletHQ, Sage Micropay, Kashflow

APIs related to Infrastructure, such as Domain names & hosting

AWS SES, AWS Route53, AWS EC2, AWS S3, NameCheap, Digital Ocean, Github, Bitbucket, Google Custom Search

APIs related to Social Networks

Linkedin, Twitter, Full Contact, Klout, Rapleaf, Facebook

APIs related to the motor industry

Cartell, HPI, Autodata

Content Analysis

Lymbix ToneAPI, Yahoo Content Analysis, Aylien Text Analysis

Project Management / Organisation

Basecamp, Teamwork, Google Calendar API

Property / Location

4pm, Google maps, MaxMind GeoIP

CRM

BatchBook CRM

Use Teamwork with Bitbucket as an Issue Tracker

We use Teamwork on a daily basis to list our tasks for the whole team to work on. We also use Bitbucket to store our projects.

Developers on the team can see the tasks and client communication in Teamwork, but clients and project managers can’t easily see the activity going on in Bitbucket. I wanted to improve this a little, to give clients and project managers more visibility of code changes, to be able to search for tasks in Teamwork that impacted a certain file name, and to improve our workflow a little too.

So what we’ve done is to make the Task IDs that are already in Teamwork visible in each task’s description. Then, if a developer is committing some code changes to make progress on or finish that task, they can include specific keywords in their commit which will keep Teamwork updated. It’s a system similar to Pivotal Tracker.

A Task with the Task ID

Here’s the workflow:

If I was working on a task in Teamwork, its description would automatically show the following text (the number would change, of course, from task to task):

Include [12345678] or [Finish 12345678] including brackets to update this task via commit

Then, as a developer, when I make a commit, I can include [12345678] in my commit message. As a result, a comment is added to the task in Teamwork, which shows my message and lists any files that I have added/edited/deleted so far in my work on this particular task.

A task with file changes added

If I have finished the task, I can include [Finish 12345678] or [Finished 12345678] and my message and file changes will be added as a comment as usual, but the main task will also have its title updated to include [Please Test], and it will be reassigned to the person that created it, so they know it’s ready to be tested.

An updated Task

Bitbucket is already set up to auto-deploy commits to a test site for a project (using DeployHQ), so the client and project manager know they can now go to the test site and try out the completed work.

This keeps the client and project manager up to date on progress and finished work, and also helps the developer to communicate well. It adds their commit message to Teamwork, rather than having them commit their work and then also add a message to Teamwork separately.

The end result is a workflow that’s a little easier for a developer, more details in Teamwork for the project managers, and a very clear picture of what needs to be tested.

In the near future, if there is a bug in a file, it will also allow me to see any previous tasks that contributed to that file, who made those changes and who tested it, all in Teamwork.

How it works:

  1. When a task is added to a list in Teamwork, a COMMENT.CREATED ‘Hook’ is called to POST the details of that task to a particular PHP page on our server.  This PHP page gets the Task ID from those details and re-submits the task details back in to Teamwork with an updated Task Description which contains the text: “Include [12345678] or [Finish 12345678] including brackets to update this task via commit”
  2. In Bitbucket, a Hook is added to perform a similar job. When a commit is pushed to a project, the commit details are sent to another PHP script on our server. The commit message is parsed to extract the Task ID, and a comment is added to the particular task in Teamwork with the commit message details and files changed (a sketch of this parsing is below).
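
For reference, the parsing on the Bitbucket side is small. At the time, Bitbucket’s POST hook sent a form field called payload containing JSON with a commits array, so a rough sketch of extracting the Task ID looks like this:

<?php
$payload = json_decode($_POST['payload'], true);

foreach ($payload['commits'] as $commit)
{
    // Match [12345678], [Finish 12345678] or [Finished 12345678]
    if (preg_match('/\[(Finish(?:ed)?\s+)?(\d+)\]/', $commit['message'], $matches))
    {
        $task_id  = $matches[2];
        $finished = ($matches[1] !== ''); // true if the task should be marked ready to test
        // ... call the Teamwork API to add the comment, or update the task
    }
}
?>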

One of the handy things about Teamwork is that if a Hook is created, then it will work for any task list on any project, it just needs to be set up once.

However, with Bitbucket, the Hook needs to be added for each project, which is OK, but it’s just something that can be forgotten sometimes.

If you liked this post, the posts above cover other things we’ve done with Teamwork, such as creating time sheet entries from code commits.

Comparing DeployHQ, Deploy.do and dploy.io for deployments


Name      | URL                       | Price for 20 projects | Pricing page
DeployHQ  | https://www.deployhq.com/ | 15 Euro/month         | https://www.deployhq.com/packages
Deploy.do | https://www.deploy.do/    | 199 Euro/month        | https://www.deploy.do/pricing
dploy.io  | http://dploy.io/          | 18 Euro/month         | http://dploy.io/

I used the above 3 services to deploy the same repository from Bitbucket to the same Digital Ocean droplet, located in Amsterdam, with 512MB RAM and an Ubuntu 12.04 LAMP stack.

Ease of adding a new project and a server to deploy to?

All more or less the same

Does it support SSH key access?

DeployHQ = Yes
dploy.io = Yes
deploy.do = Yes (and no username + password combination allowed)

Is it possible to deploy a project to more than 1 server? (handy for testing, pre-production and production servers)

DeployHQ = Yes
dploy.io = Yes
deploy.do = Yes

Is it possible to duplicate deployment settings? (handy when setting up deployments to more than 1 server)

DeployHQ = Originally no, but now yes, using Project Templates
dploy.io = No
deploy.do = Yes, using shell scripts

Is there a post-deployment email notification or hook? (useful for reporting on deployments)

DeployHQ = Yes, several methods of notification supported
dploy.io = Hipchat and Campfire only
deploy.do = Connects with Hall only, as far as I can see

Can I specify a particular branch of a repository to deploy?

DeployHQ = Yes
dploy.io = Yes
deploy.do = Yes

Can it exclude files from being deployed? (such as documents, config files etc)

DeployHQ = Yes
dploy.io = Yes
deploy.do = No

Can it run commands on the server before and after deployment?

DeployHQ = Yes to both
dploy.io = post-commands only
Deploy.do = Yes to both

Supports auto-deployment?

DeployHQ = Yes, by adding a hook to Bitbucket
dploy.io = Yes
deploy.do = Yes, by adding a hook to Bitbucket

Shows a preview of the files that will be changed before deployment?

DeployHQ = Yes, optional
dploy.io = Yes, optional
deploy.do = Yes

Can deployments be rolled back?

DeployHQ = Yes
dploy.io = Doesn’t look like it
deploy.do = Yes

Includes an API for triggering deployments remotely?

DeployHQ = Yes
dploy.io = No
deploy.do = No

Quickest to deploy?

Deploy.do seems to be the fastest. Dploy.io seems to be the slowest. However, I was using free accounts for the above testing and there may be priority deployments with paid accounts.

Most cost effective, assuming more than 1 repository:

DeployHQ

Nicest interface

DeployHQ

Worth noting

Each provider allows for 1 project/repository for free.

DeployHQ and Deploy.do allow you to grant permission to a repository on Bitbucket by adding a deploy key in Bitbucket first. dploy.io expects to be given access to a list of all your repositories on Bitbucket before choosing one to use.

Deploy.do requires that the server has ‘zip’ installed, presumably so that it can copy a compressed archive to the server and then uncompress it.

Deploy.do doesn’t deploy the files directly to the target folder like DeployHQ or dploy.io; it creates a ‘releases’ folder and a ‘current’ symlink in the target folder, and points ‘current’ at the chosen release in the ‘releases’ folder. This can be used to keep older deployments on the server if needed, and to easily revert to an earlier deployment if necessary.

None of the above services allow a user to import or export the server settings or a history of deployments to a server, which is something I would really like to have as I sometimes remove a project and I’d still like to retain a log of its history.