This is Part 2 of a two-part blog post about Release Automation. Click the link to read Part 1 – “We’re going to automate this process”.
“You’re going to what?”
Not an unusual reaction when you tell someone you’re going to increase the amount of work going through their remit by a factor of several hundred. And when you’re talking about putting code into production in a mission-critical system, maybe you shouldn’t be surprised at the reaction. That’s when you drop the second big bit of news for the day:
“Oh, and while we’re at it, we’re going to improve system availability and reduce Mean Time to Recover by a factor of at least 10.”
At this point, because you are a prudent and caring IT Manager who covers most contingencies, you call in the CPR team you have had waiting around the corner, and they help the Sys Admin back up onto his feet.
Once calm has descended back onto the floor, maybe it’s time to explain things.
The Agile people have got their act together when writing code, testing it, and getting it ready to deploy, but now we need to get that code into production. If, like most organizations, you are careful about what you deploy into your production environment, you won’t want to start deploying rubbish, but at the same time, you need to divorce yourself to some extent from what you are deploying, and look at how you are deploying it. In other words, you have to trust your development colleagues, and the process they follow, and get that code out there as fast as it can be – viably – developed.
In a fully evolved DevOps world you will have DevOps and SRE teams: the DevOps team will care about what is being deployed, while the SRE team will build feedback loops to alert on potential outages, with blue-green fallback systems to recover from them. In the meantime, we are back in the trenches figuring out how to get there.
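The blue-green fallback idea mentioned above can be sketched very simply: two identical environments exist side by side, and traffic is only switched to the new one if it passes a health check. A minimal illustrative sketch in Python — the environment records and the `check_health` and `choose_live` names are hypothetical, not from any specific product:

```python
# Hypothetical blue/green switchover sketch. In a real system,
# check_health would probe the environment's health endpoint; here
# we just read a recorded status flag.

def check_health(env: dict) -> bool:
    return env["status"] == "healthy"

def choose_live(blue: dict, green: dict, candidate: str) -> str:
    """Route traffic to the candidate environment only if it is healthy;
    otherwise fall back to the other colour."""
    envs = {"blue": blue, "green": green}
    fallback = "green" if candidate == "blue" else "blue"
    return candidate if check_health(envs[candidate]) else fallback

blue = {"version": "1.4.0", "status": "healthy"}     # current release
green = {"version": "1.5.0", "status": "unhealthy"}  # new release, failing checks

live = choose_live(blue, green, candidate="green")
print(live)  # stays on "blue" because green failed its health check
```

The point of the pattern is exactly this cheap, instant fallback: recovery is a routing decision rather than a redeployment, which is what drives Mean Time to Recover down.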
Here at Daysha, we have been working as a DevOps consulting organization since 2013. Back then, as we first started to work with RA technology, we delivered training for one of the leading Open Source products (Chef). What we noticed then was that, for the most part, clients who attended our courses were educating themselves rather than planning full-on exploitation.
Over the last three years most of our customers’ focus has been on the left-hand side of the delivery pipeline, and they now build software more atomically in bi-weekly sprints. The requirement now is to deploy this code as smaller but much more frequent releases. But smaller, more frequent releases won’t work without automation. The workload per release is simply too great.
We still deliver this training, and what we are now seeing is that demand for RA is moving from early adopter towards early maturity. In line with this, we have also formed a relationship with XebiaLabs.
We have blogged elsewhere about how automation can cause some trepidation in staff, but we have also spoken about how it’s an opportunity. Other people have written some good articles on managing this: have a look at the Daily Telegraph’s “3 Rs” for a good example of how to design systems differently so as to be able to recover faster based on the severity of an outage incident. The same company also talks about reskilling and offering training to individuals who need to up-skill.
The fact is that the world is changing, and you need to change along with it. To quote one famous hockey player, you need to be where ‘the puck is going to be.’
If you want to automate releases you need virtualization and multiple staging environments that mirror production.
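One way to keep several staging environments in lockstep with production is to derive each of them from a single production definition, changing only the name and the scale. A minimal illustrative sketch — the config keys (`instance_type`, `instance_count`, `db_tier`) and the `make_staging` helper are hypothetical, standing in for whatever your infrastructure tooling actually uses:

```python
import copy

# Hypothetical production environment definition; keys are illustrative only.
production = {
    "name": "prod",
    "instance_type": "m5.large",
    "instance_count": 12,
    "db_tier": "replicated",
}

def make_staging(prod: dict, name: str, scale: float = 0.25) -> dict:
    """Derive a staging environment that mirrors production's topology,
    scaled down so several copies can run in parallel cheaply."""
    env = copy.deepcopy(prod)
    env["name"] = name
    env["instance_count"] = max(1, int(prod["instance_count"] * scale))
    return env

# Multiple staging environments, all mirroring the production shape:
stages = [make_staging(production, f"staging-{i}") for i in range(1, 4)]
print([(e["name"], e["instance_count"]) for e in stages])
```

Because staging is computed from production rather than maintained by hand, the two cannot silently drift apart — which is precisely the property release automation depends on.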
This is, in effect, a definition of cloud, and it comes with an operating cost model to boot. The tedious and lengthy process of raising capital requests to build datacenters is gone. AWS, Azure and now Google compete for your tin by providing infinite capacity as a utility.
Until recently our customers didn’t trust cloud for security reasons. Ironically, recent criminal activities have re-focused minds. WannaCry did not penetrate cloud providers. Their security is simply better because they can focus teams of people on the issue. Cloud is now viable for even large enterprises. At a recent Fintech conference one Bank stated that regulated organisations can deploy 80% of their data in public cloud and be audit compliant.
As alluded to earlier, we have been working with Open Source tools for some time. The feedback from our customers has universally been ‘Only Red Hat can make money out of Open Source’, and interestingly Red Hat recently anointed one of the other Open Source RA vendors. Our customers have legacy systems, and invariably Open Source tooling works with the cool new stuff rather than with earlier generations of systems.
This has led us on a journey to find new tooling from XebiaLabs, around which we wrap services to assist our customers’ digital transformation. Our customers lead the transformation and we help them to do so.
Time to stand down that CPR team.
POSTED IN: DevOps Case study