If you release once per week, why not two times or five times? What’s holding you back from doubling your current pace?
To achieve the next level of engineering productivity, identify and eliminate your limiting factor.
How we achieved a 1652% increase in deployment frequency
When I started at TrainHeroic, we faced a common limiting factor: deployments were executed from the command line. This manual step introduced human error and unintentional secrecy. Only one engineer could deploy and, if needed, successfully roll back.
In this environment, our team only deployed about once per week. Once per week is pathetic. Our customers deserved better.
When you identify your limiting factor, just start fixing it. Don’t ask for permission.
We started by documenting the steps to release in the project README. That democratized the process and set the stage for automation. Engineers love automating manual steps that are well documented. Within a day we had automated deployment pipelines in place, which every engineer could execute.
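Once the steps are written down, the automation itself can be small. Here is a minimal sketch in Python; the `make` targets are hypothetical stand-ins for whatever commands your README documents:

```python
import subprocess

# Hypothetical release steps -- substitute your project's documented commands.
RELEASE_STEPS = [
    ["make", "test"],
    ["make", "build"],
    ["make", "deploy"],
]

def run_release(steps, runner=subprocess.run):
    """Run each documented release step in order; stop at the first failure.

    Returns the steps that completed successfully, so a failed release
    shows exactly how far it got.
    """
    completed = []
    for step in steps:
        result = runner(step)
        if result.returncode != 0:
            break
        completed.append(step)
    return completed

if __name__ == "__main__":
    done = run_release(RELEASE_STEPS)
    print(f"completed {len(done)} of {len(RELEASE_STEPS)} steps")
```

Once the steps live in a script instead of someone's head, any engineer (or a CI pipeline) can execute them, and the script becomes the seed of the automated pipeline.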
In the following year, we released 155 times, a 210% increase year-over-year without increasing headcount.
The next year we did it all over again – finding and eliminating our limiting factor. This time our limiting factor was a slow test suite. No doubt, this problem was more complex. We fixed flaky tests, refactored code, parallelized our tests, and reordered our pipelines. It was absolutely worth it.
451 releases later, we had increased deployment frequency again – this time by 191%.
The year after? 938 releases, a 108% increase. In total, we’ve experienced a 1652% increase in deployment frequency without adding headcount.
Identify and eliminate your limiting factors.
Common Limiting Factors for Deployment Frequency
Although every team and environment is unique, there are common limiting factors that hold teams back from deploying more frequently:
Only a few engineers can run deployments
- Democratize deployments
- Document common errors so more engineers can triage deployment problems
- List out the deployment steps so anyone can perform them, then translate that list into an automated system

Code reviews are slow
- Submit smaller, more focused changesets
- Empower more engineers to approve a PR
- Celebrate and thank engineers for timely code reviews
- Have the manager nudge reviewers during standup and follow up personally
- Agree as a team on coding standards so there is less back and forth over syntax
- Try pair programming, which effectively switches your team to real-time code reviews
No joy in the release process
- Celebrate each deployment, no matter how small
- At one company, our team had Release Thursdays, which always happened at 4:45pm. In other words, we had a company policy of infrequent deployments. Call this out as a limiting factor in retro and agree to try releasing off the traditional schedule.
Constantly changing feature specs
- Software construction is difficult when it feels like your feature spec is constantly changing; in this environment, the code never feels stable enough to deploy. If necessary, put that work on hold and move on to more stable, productive problems.
- Look for backchannel communication reaching engineers, especially from authority figures like the CEO or VP of Product
Long-lived feature branches
- Look into trunk-based development and short-lived feature branches. You need to reduce the pain of getting code into trunk/master.
Fear of regressions or downtime
- Your automated tests are probably unreliable or insufficiently comprehensive. Require automated tests for all new code, and start adding tests for existing code.
- Make deployments easier to monitor
- Just pull the trigger. There is something to be said for simply going for it and getting over your fear of deploying. Pairing with a more senior engineer can provide the mental support to overcome that fear.
- Run a manual smoke test after each deployment until your automated tests provide sufficient confidence
Design discrepancies
- Sit with your engineers and designers to reconcile differences between expected functionality and the design comps. Most design discrepancies are incidental, not actual blockers. Do not expect design comps to be perfect; err on the side of good communication over ironclad deliverables.
- Make sure you have a style guide
- Centralize the designs in an easily accessible and updatable location like ZeroHeight
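The manual smoke test suggested above can itself become a small script that anyone runs right after a deploy. A sketch, assuming hypothetical health-check URLs:

```python
from urllib.request import urlopen

# Hypothetical endpoints -- substitute your app's real health checks.
SMOKE_URLS = [
    "https://example.com/health",
    "https://example.com/api/status",
]

def smoke_test(urls, fetch=urlopen):
    """Hit each URL and return the ones that fail to respond with HTTP 200."""
    failures = []
    for url in urls:
        try:
            with fetch(url, timeout=5) as resp:
                if resp.status != 200:
                    failures.append(url)
        except Exception:
            # Network errors and non-2xx raises both count as failures.
            failures.append(url)
    return failures

if __name__ == "__main__":
    failed = smoke_test(SMOKE_URLS)
    print("smoke test failures:", failed or "none")
```

The `fetch` parameter is injectable so the script itself is testable; in CI you would fail the pipeline when the returned list is non-empty.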
Too few automated tests
- Train your team to write testable code. When you’re frustrated that engineers aren’t adding automated tests, ask yourself why; often they simply don’t know how to write testable code. Teach and discuss separation of concerns.
- Celebrate when automated tests are added and testable code is written. Hold these examples up as models to follow.
Tests are slow
- You may be favoring integrated tests over unit tests. Integrated tests will always run slower.
- Run your tests in parallel, which most CI/CD systems support
- Separate integrated test suites from unit test suites. Run unit tests more frequently and integrated tests less frequently. In test or staging environments, run unit tests, then deploy, then integrated tests, so your slowest tests don’t block deployment to those environments.
- Find and eliminate slow tests. Most testing frameworks can output test results to a file, including the time taken by each test.
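Finding the slow tests can be scripted. Here is a sketch that reads a JUnit-style XML report (a format most test frameworks and CI systems can emit) and lists the slowest cases:

```python
import xml.etree.ElementTree as ET

def slowest_tests(junit_xml, top=10):
    """Parse a JUnit-style XML report and return (test name, seconds)
    pairs, slowest first."""
    root = ET.fromstring(junit_xml)
    timings = [
        (case.get("classname", "") + "." + case.get("name", ""),
         float(case.get("time", 0)))
        for case in root.iter("testcase")
    ]
    return sorted(timings, key=lambda t: t[1], reverse=True)[:top]
```

Run this against your nightly report and put the top offenders in the team’s backlog; a handful of outliers usually accounts for most of the suite’s runtime.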
Tests are flaky
- Eliminate non-deterministic tests
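Two of the most common sources of non-determinism are wall-clock time and unseeded randomness. A sketch of how to make each deterministic (the trial-period example is hypothetical, not from the article):

```python
import random
from datetime import datetime, timezone

# Flaky: the result depends on when the test happens to run.
def trial_days_left_flaky(signup_date):
    return 14 - (datetime.now(timezone.utc) - signup_date).days

# Deterministic: "now" is injected, so tests control the clock.
def trial_days_left(signup_date, now):
    return 14 - (now - signup_date).days

def shuffled_deck(seed=None):
    """Seeded randomness makes the shuffle reproducible in tests."""
    deck = list(range(52))
    random.Random(seed).shuffle(deck)
    return deck
```

The same injection pattern applies to other hidden inputs – environment variables, network calls, shared fixtures, test ordering – anything the test does not fully control.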
Limiting factors for releasing relentlessly come from all angles – product design, testing, code, customer feedback, and everywhere in between. Use the anchors section of your retro to identify your current limiting factors.