Efficiency in Development Workflows: Deployment Pipelines

Learn how to set up Deployment Pipelines and how to deploy to production servers with Zero Downtime Deployment

Last week we talked about how we review code, open pull requests and use GitHub issues to manage our development workflow.

This week I will show you every step that happens after a pull request is merged into our master branch. We use an automated deployment pipeline for releasing our code into production.

Deployment Pipelines
A deployment pipeline lays out the whole process that your code needs to go through from your repository to production. It breaks the release into several stages (e.g., build, test, and deploy) and all the associated steps that need to be taken. By defining a pipeline it is always clear which step needs to happen next. Martin Fowler describes it really well in his blog post.
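To make this concrete, here is a minimal sketch of such a pipeline in Python. The stages and commands are illustrative only, not our actual configuration; the point is that each stage runs in order and a failure stops everything downstream.

    # Minimal pipeline sketch: each stage must succeed before the next runs.
    # The commands are placeholders, not Codeship's real configuration.
    import subprocess

    PIPELINE = [
        ("build", ["bundle", "install"]),
        ("test", ["bundle", "exec", "rake", "test"]),
        ("deploy", ["./deploy.sh", "production"]),  # hypothetical deploy script
    ]

    def run_pipeline():
        for stage, command in PIPELINE:
            print(f"==> {stage}")
            if subprocess.run(command).returncode != 0:
                raise SystemExit(f"Stage '{stage}' failed; stopping the pipeline.")

    if __name__ == "__main__":
        run_pipeline()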

If you want to dig deeper into Deployment Pipelines, I highly recommend Jez Humble and David Farley's book: Continuous Delivery.

Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation - Jez Humble and David Farley

Configure deployments per branch
To automate deployment to different environments we have found that it works best to define actions for different git branches. If you always push the latest commit of your production branch to your production environment, it's very easy to determine what is currently deployed by just looking at the git branch. Git and other source code management systems only permit one commit at the top of a branch, so there can be no confusion.

At Codeship we deploy our master branch automatically to production. Many of our customers deploy the master branch to a staging environment and a production branch to their production environment. A simple git merge and git push, or a GitHub pull request, is their way of releasing their changes.
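A minimal sketch of what per-branch rules can look like, assuming a hypothetical deploy.sh script that ships a release to a named environment:

    # Sketch of per-branch deployment rules: the branch that was just built
    # decides which environment, if any, receives the release.
    import subprocess
    import sys

    DEPLOY_TARGETS = {
        "master": "staging",         # e.g. master goes to staging
        "production": "production",  # and the production branch to production
    }

    def deploy_for_branch(branch):
        environment = DEPLOY_TARGETS.get(branch)
        if environment is None:
            print(f"No deployment configured for branch '{branch}'.")
            return
        # ./deploy.sh is a stand-in for whatever actually ships the release.
        subprocess.run(["./deploy.sh", environment], check=True)

    if __name__ == "__main__":
        deploy_for_branch(sys.argv[1] if len(sys.argv) > 1 else "master")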

One problem with this approach is that branch names have to be meaningful. Having a development branch which is deployed to staging and a master branch that gets deployed to production can confuse new team members. Naming branches that get deployed "production" or "staging" is more intuitive. "Master" is a convention in git and should be kept, but dedicated branch names are easier to understand in a deployment pipeline.

Deployment Strategy
As soon as the feature branch is merged into our master, a new build is started on Codeship. We run the same test commands again as we did on the feature branch to make sure there are no problems in the merged version.

When all tests pass for the master branch, the deployment starts. Before pushing to production we want to make sure that all database migrations work and that the app starts successfully.

First we deploy to staging and run our current set of migrations. We copy our production database to staging once a day, so when we run migrations on staging, the database is very close to our production database. This allows us to make sure our migrations work before deploying to production.
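As a rough sketch, the staging steps could look like the following; the heroku-staging remote and the myapp-staging app name are placeholders, not our real setup:

    # Sketch of the staging deployment: push the release, then run migrations
    # against the staging database (a recent copy of production).
    import subprocess

    def deploy_to_staging():
        # Push the current master to the staging Heroku remote.
        subprocess.run(["git", "push", "heroku-staging", "master"], check=True)
        # Run pending migrations; staging holds a fresh copy of the production
        # database, so this is a realistic rehearsal for the production deploy.
        subprocess.run(
            ["heroku", "run", "rake", "db:migrate", "--app", "myapp-staging"],
            check=True,
        )

    if __name__ == "__main__":
        deploy_to_staging()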

The Codeship Workflow - Deploying to staging

The last step in our staging deployment is calling the URL of our staging site to make sure it started successfully. This has saved us twice over the last few years, when we would otherwise have pushed a change to our Unicorn configuration that broke the server. Wget and its retry capabilities make sure the website is up and running.
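The same check can be scripted directly; the sketch below polls a placeholder staging URL with retries, mirroring what we rely on wget's retry options for:

    # Health check sketch: poll the staging URL until it answers, so a broken
    # app server configuration fails the build instead of reaching production.
    import time
    import urllib.error
    import urllib.request

    def wait_until_up(url, attempts=10, delay=5.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10):
                    print(f"{url} is up (attempt {attempt}).")
                    return
            except (urllib.error.URLError, OSError):
                time.sleep(delay)
        raise SystemExit(f"{url} did not come up after {attempts} attempts.")

    if __name__ == "__main__":
        wait_until_up("https://staging.example.com/")  # placeholder URL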

The Codeship Workflow - Deploying to production

Then we run our migrations in a separate step after the deployment and clear our cache.

The Codeship Workflow - Running migrations

Then we repeat the whole process for our production system. We push to the Heroku production repository and check that the site still works.

An enhancement would be to have tests that run against the deployed version, but our extensive Cucumber/Capybara test suite has caught all problems so far, so we haven't needed them.

There is one slight difference between our staging and production deployment though:

Zero Downtime Deployment
As we want to deploy several times a day without any downtime, we use Heroku's preboot feature. We started using it at the beginning of this year. Whenever we push a new release, it starts this release on a second server and switches the routing to it after about three minutes.
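Preboot is a per-app setting rather than something you do on every deploy. Assuming a placeholder app name, enabling it looks roughly like this (older versions of the Heroku toolbelt exposed it under labs: instead of features:):

    # One-time setup sketch: enable Heroku's preboot feature for the app so that
    # new releases are started alongside the old ones before traffic switches.
    import subprocess

    subprocess.run(
        ["heroku", "features:enable", "preboot", "--app", "myapp-production"],
        check=True,  # app name is a placeholder
    )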

The downside is that zero downtime deployments require more care with database changes. As two versions of your codebase need to be able to work at the same time, you can't just remove or rename fields.

Renaming or deleting a column or table needs to be spread out over several deployments. This way we make sure that the application still works with every incremental change. We will go into more detail on database migrations for zero downtime deployments in a later blog post.
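As an illustration (not our actual migrations), renaming a column might be spread over deployments like this, so that the old and the new release can both run against the schema at every step:

    # Outline of a zero-downtime column rename, split across deployments.
    # Table and column names are made up for the example.
    RENAME_COLUMN_STEPS = [
        # Deployment 1: add the new column; the code writes to both columns.
        "ALTER TABLE users ADD COLUMN full_name VARCHAR(255);",
        # Deployment 2: backfill old rows, then read only from the new column.
        "UPDATE users SET full_name = name WHERE full_name IS NULL;",
        # Deployment 3: once no running code uses the old column, drop it.
        "ALTER TABLE users DROP COLUMN name;",
    ]

    for step in RENAME_COLUMN_STEPS:
        print(step)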

In the meantime, you can take a look at the blog posts in our "Further Info" section that explain Zero Downtime Deployments by Etsy, Braintree and BalancedPayments.

Conclusion
It is important to automate every step of the deployment, whether you deploy your code on every merge to master or trigger a release manually by merging master into another branch.

Now that we've gone from working on a feature to code reviews and finally pushing to production in our web application, we will take a closer look at our test infrastructure next time.

In the next blog posts I will delve into immutable infrastructure and how we rebuild our test server infrastructure several times a week.

Further information

More Stories By Manuel Weiss

I am the cofounder of Codeship, a hosted Continuous Integration and Deployment platform for web applications. On the Codeship blog we love to write about Software Testing, Continuous Integration and Deployment. Also check out our weekly screencast series 'Testing Tuesday'!
