
Continuous integration and deployment – part 1

In the first of a two-part series, Ryan gives an overview of the 'whats' and 'whys' of DevOps and CI / CD pipelines.



Over the last decade or so, DevOps has gone from a technical curiosity to an essential part of most professional programmers' practice.

Taking ownership of operations has tightened the feedback loop between developers and users / stakeholders, enabling rapid feedback and the ability to respond to change.

There are many cultural and technical challenges which companies face when adopting these practices. I highly recommend grabbing a copy of Accelerate: The Science of Lean Software and DevOps for a deeper dive on this; it makes a wonderful case and offers practical guidance if you are looking to convince your colleagues!

One of the key takeaways is that software should be continuously integrated and deployed. When a team thinks and works in this way, rather than targeting big-bang updates, the product can be more agile in the literal sense.


Depending on your point of view, continuous integration (CI) can mean anything from 'small branches merged often' to 'only one branch'.

Either way, if work is to be reliably merged and tested often then it makes sense to automate the process.

Your CI pipeline will need to run automatically when new code is available. This is usually done via a webhook, which allows the repository host to notify the build host.
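With a hosted service the webhook is usually created for you; your config simply declares when the pipeline should run. As an illustrative sketch in Azure Pipelines syntax (the branch name is a placeholder):

```yaml
# azure-pipelines.yml -- illustrative; branch name is a placeholder
trigger:
  branches:
    include:
      - main        # run the pipeline on every push to main
```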

The build host is a machine with all the required dependencies to build, run and test your application.

Whilst you could set up a machine to do this yourself, it is often easier, more reliable and usually free to use a cloud provider's machine. There are a huge number of companies providing these services, such as GitHub Actions, Azure Pipelines, Travis CI, CircleCI and Bitrise, amongst many others.

Whichever you choose, you will need to configure your machine, both in terms of its software and the build process itself.

How you do this usually takes one of two forms: a GUI-based drag-and-drop interface or a config file (usually YAML). A config file is often preferred as it is easier to version control and reproduce.

The process is usually broken down into 'jobs' or 'tasks' ('install x', 'enable service z', 'run build script y', 'run tests' etc).
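For example, those tasks might look something like this in Azure Pipelines syntax (a sketch; the dotnet commands are placeholders for whatever your stack uses):

```yaml
pool:
  vmImage: ubuntu-latest          # the cloud-hosted build host

steps:
  - script: dotnet restore        # 'install x' -- fetch dependencies
    displayName: Restore
  - script: dotnet build --no-restore
    displayName: Build            # 'run build script y'
  - script: dotnet test --no-build
    displayName: Test             # 'run tests'
```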

The last task will usually export the things you have built and tested, often referred to as artifacts.
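Continuing the sketch, a final task can publish the build output as a pipeline artifact (the path and artifact name are illustrative):

```yaml
  - task: PublishPipelineArtifact@1
    displayName: Publish artifact
    inputs:
      targetPath: $(Build.ArtifactStagingDirectory)   # wherever your build wrote its output
      artifactName: drop                              # illustrative artifact name
```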


Once you have an artifact from your build pipeline, you may choose to hand that to a continuous deployment (CD) pipeline.

This is often configured using the same tools and processes as the build, although may be a separate service entirely.

The deploy process has two main goals.

Firstly, it can update the deployment environment itself, using an infrastructure-as-code solution such as Farmer, Bicep or Pulumi.

Secondly, it can take your build artifact and push it out into that environment.
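As a sketch of the first goal, here is roughly what an infrastructure-as-code definition looks like in Farmer (which part two will use); the app name, location and output filename are placeholders:

```fsharp
open Farmer
open Farmer.Builders

// Declare the desired infrastructure: a single web app (name is a placeholder)
let app = webApp {
    name "my-safe-app"
}

let deployment = arm {
    location Location.WestEurope
    add_resource app
}

// Emit an ARM template that the deploy pipeline can hand to Azure
deployment |> Writer.quickWrite "deploy"
```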

It is common to have multiple environments, often for testing purposes, in addition to a production deployment.

Your pipeline can progressively promote changes from one environment to another, with a variety of policy gates (such as manual approvals) configurable between stages.
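A sketch of that promotion flow in Azure Pipelines' multi-stage syntax (stage, environment and step contents are placeholders; approval checks are attached to the environments themselves):

```yaml
stages:
  - stage: Test
    jobs:
      - deployment: DeployTest
        environment: test                 # approvals/checks can be configured on this environment
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to test"         # placeholder deploy step
  - stage: Production
    dependsOn: Test                       # only promote once Test succeeds
    jobs:
      - deployment: DeployProd
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to production"   # placeholder deploy step
```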

Next time

In part two of this blog, I will walk through the steps of constructing a build / release pipeline with Azure Pipelines, SAFE Stack and Farmer.