Immutable infrastructure and continuous delivery in the cloud
Paul Bourdel | November 2, 2015
Immutable infrastructure and continuous delivery each offer many benefits. The goal of this blog post is to briefly go over those benefits and provide an example of how to achieve both on a cloud provider.
Immutable infrastructure is similar to an immutable object in functional programming. Once it is created it is never modified. In this blog post we’re specifically talking about the application server. The creation of the application server is fully automated as part of the build process. The scripts used for the creation of the application server are stored in the source repository of the project alongside the application source code. Once the application server is built it is never modified by hand. If a change is needed to the server’s configuration, the change is made to the server setup scripts and a new server is built after the changes are committed. The changes will then go live during the next deployment.
Packer is a tool for creating virtual machine images on different cloud platforms such as Rackspace or Amazon Web Services. It is launched from the command line and configured with a JSON formatted configuration file. The result of a successful Packer run is a virtual machine image stored under your cloud provider’s account that is built specifically to your needs. In the Packer configuration file, you specify a shell script that will be run on the server before the VM image is created. This allows you to configure any part of the server that is accessible via shell scripting.
Sample Packer Configuration File
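The original configuration file is not reproduced here, but a minimal sketch shows the overall shape. (Packer of this era used its OpenStack builder for Rackspace; the region, flavor, image ID, variable name, and file names below are illustrative placeholders, not values from the original project.)

```json
{
  "variables": {
    "build_key": "{{env `BUILD_KEY`}}"
  },
  "builders": [{
    "type": "openstack",
    "region": "DFW",
    "source_image": "BASE-IMAGE-UUID",
    "flavor": "general1-1",
    "ssh_username": "root",
    "image_name": "app-server-{{user `build_key`}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "provision-rackspace-env.sh"
  }]
}
```

The builders section tells Packer where to boot the temporary server, and the provisioners section runs the setup script on that server before the image snapshot is taken.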
The provision-rackspace-env.sh shell script referenced above in the provisioners section can do anything that is needed to configure the server for your application.
To execute the above Packer file, have your automated build run the command below:
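Assuming the template is saved as packer-template.json (the file name and the build-key variable are illustrative assumptions), the build step would look something like:

```shell
packer build -var "build_key=${bamboo_buildNumber}" packer-template.json
```

The build key ties the resulting image name to a specific build number, which is how the deploy step later finds the right image.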
Useful Links: Packer Documentation
Sequence Diagram of Build Process
Below is a sequence diagram visualizing the build process. For our example we use Bamboo as the build server and Stash as our source repository.
From the Wikipedia article on continuous delivery, the definition is:
Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time. It is used in software development to automate and improve the process of software delivery. Techniques such as automated testing and continuous integration (CI) allow software to be developed to a high standard and easily packaged and deployed to test environments, resulting in the ability to rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead. CD builds on CI by adding the regular deployments to production as part of the process, however CD is not a requirement of CI.
As stated nicely in the Ansible documentation homepage:
Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
Our main interest lies in Ansible because of the cloud modules it provides. They are all listed and documented on Ansible’s cloud documentation pages.
Supported providers include:
- Amazon Web Services (50+ modules)
- Azure (1 module)
- Apache CloudStack (20+ modules)
- Google Compute Engine (7 modules)
- OpenStack (20+ modules)
- Rackspace (20+ modules)
- VMware (20+ modules)
The modules of interest for this blog post are:
- rax: used for creating servers from a VM image and for destroying servers.
- rax_clb: used to create the load balancer if it does not exist and to look up an existing one.
- rax_facts: used for gathering facts about existing servers.
- rax_clb_nodes: used for adding and removing servers from a load balancer.
Ansible is an open source project and welcomes new modules. If a cloud provider’s API has a function that is not exposed via an Ansible module, it is not difficult to write one yourself in Python and submit it to the Ansible code base. For more information: Ansible Guide to Developing Modules
Ansible Inventory File
The Ansible inventory file (documentation link) is usually used for storing information about a static list of hosts. For the purpose of this example we will mainly use it as a place to store variables that we need for interacting with Rackspace. A sample inventory file is provided below. For your own use you would change the variable names to correspond to desired values for your project.
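A sketch of such an inventory file follows; all values are placeholders, not the original project's settings. Since the playbook talks to the Rackspace API rather than to the hosts themselves, it targets localhost and carries the Rackspace settings as group variables:

```ini
[local]
localhost ansible_connection=local

[local:vars]
rackspace_image=app-server-42
server_name=app-server
load_balancer_name=app-load-balancer
instances_count=2
rackspace_private_key_path=~/.ssh/rackspace_deploy_key
```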
Explanation of variables:
- rackspace_image: the name of the Rackspace image created in the build phase.
- server_name: the base name for the servers that will be created; an incrementing number, starting at 1, is appended to each server name.
- load_balancer_name: the name of the load balancer to remove servers from and attach servers to. If a load balancer under this name does not exist it will be created.
- instances_count: the number of servers to create.
- rackspace_private_key_path: the path to the key file used to SSH into the created servers.
Ansible Deployment Playbook
The playbook below is used to cycle our servers from the old ones to the new ones. This is what is run during our deployment process. If you have multiple environments, you would configure them by changing the inventory file described in the prior section.
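The original playbook is not reproduced here, but a simplified sketch of the create-and-attach half of the cycle, using the rax modules listed earlier, could look like the following. The region, flavor, port, and private-address lookup are illustrative assumptions, and the other half of the cycle (detaching and destroying the old servers) would mirror these tasks with state: absent.

```yaml
---
- name: Cycle application servers behind the load balancer
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Ensure the load balancer exists, creating it if necessary
      rax_clb:
        name: "{{ load_balancer_name }}"
        port: 80
        protocol: HTTP
        region: DFW
        state: present
        wait: true
      register: clb

    - name: Create the new servers from the freshly built VM image
      rax:
        name: "{{ server_name }}-%01d"
        image: "{{ rackspace_image }}"
        flavor: general1-1
        count: "{{ instances_count }}"
        region: DFW
        state: present
        wait: true
      register: new_servers

    - name: Attach the new servers to the load balancer
      rax_clb_nodes:
        load_balancer_id: "{{ clb.balancer.id }}"
        address: "{{ item.rax_networks.private | first }}"
        port: 80
        condition: enabled
        state: present
        region: DFW
      with_items: "{{ new_servers.instances }}"
```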
The sequence diagram below is a visualization of the deployment process. This deployment process is typically called a blue/green deployment because of the two groups of servers that are used to deploy with no downtime.
Bringing It Together
In our implementation of this workflow, Atlassian’s Bamboo build server was used, but in theory any continuous integration server should be adequate. The key requirement is that, during the build, the following command is run to create the VM image:
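As in the build section earlier, this would be along the lines of (the template file name and variable passing are illustrative; Bamboo exposes the build number as a variable that can be injected here):

```shell
packer build -var "build_key=${bamboo_buildNumber}" packer-template.json
```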
And during the deploy process, the command below is run to initiate the Ansible playbook:
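A representative invocation (the playbook and inventory file names are placeholders, not the original project's):

```shell
ansible-playbook -i rackspace-inventory deploy.yml --extra-vars "build_key=${bamboo_buildNumber}"
```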
The continuous integration server will need to pass the same build key to both Packer and Ansible so the appropriate VM image is created and deployed. With Bamboo it is possible to have the build server pass variables that correspond to the build number that is being built or deployed. If you are using a build server other than Bamboo then you will need to investigate the best way to achieve similar functionality.
There you have it. With this template you will have a fully functioning build and deployment pipeline that enables immutable infrastructure and continuous delivery for your software project. Your needs will very likely go beyond this example, but hopefully it is a good starting point that gets you most of the way there. For one of our clients, this workflow has now run successfully for over a year. It has allowed them to nearly eliminate environment-specific issues and to rapidly deploy new versions of their application (or roll back to old ones when necessary), all within a matter of minutes. And since their server configuration is stored in source control, they can see every change made to their servers by navigating the commit history of the files described in this post. The workflow has proven a good compromise between ease of use and fine-grained control over their servers and deployment process.
Looking for more engineering tips?
Our engineers have a whole lot to say about custom software. They’re in the trenches every day, building, breaking, re-building, and sharing their hard-won wisdom along the way. Find their latest and greatest discoveries on Slalom’s new software engineering blog.
Paul Bourdel is a solution architect on Slalom’s Cross-Market team. Paul focuses on building web applications using open source tools and frameworks. He is also an advocate of DevOps and spends time thinking of ways to apply modern environment automation tools and optimize the build/deployment pipelines for the projects he works on.