In the first two articles of this DevOps series (you can read part 1 here, and part 2 here), I outlined the myths and common errors surrounding DevOps, and then the initial steps to building a DevOps pipeline. In this article, I want to briefly examine some best practices through a typical continuous integration pipeline.
Best practice 1: Shift the development paradigm
Shifting the development paradigm means moving from simply writing code to measuring how well that code is exercised by tests (code coverage analysis). There are four elements to this:
- Version control (branching). When your team includes several developers collaborating on multiple projects, many files are created and saved under different dates and names, often following a custom naming convention (or branch). To avoid undesired bugs reaching production, it is essential to track changes in code. A manual method is cumbersome and difficult to administer, particularly when hundreds of shared files are being edited simultaneously by team members and can be overwritten. Most importantly, file names alone cannot tell you what changes have been made. That is why version control is an essential tool.
- Agile proof of concepts. This organizational culture is built on cohesive teams, not siloed groups, spanning operations (stability), development (change), and testing (risk reduction). It emphasizes the organizational changes needed to achieve goals, applying the just-in-time (JIT) concept.
- Test-driven development (TDD) and behavior-driven development (BDD). These methodologies integrate the Agile methodology and ITIL-based continuous delivery to increase the frequency of releases with high reliability. TDD is a software development process in which code is written to satisfy a list of test cases created beforehand; working this way considerably shortens the software development lifecycle. BDD, meanwhile, is an agile software development process that encourages collaboration among all business and IT stakeholders, using conversation and concrete examples to formalize a shared understanding of how an application should behave.
- Built-in automation through code control. Building in automation means automating the software development lifecycle within one environment – covering coding, building, testing, packaging, release, configuration, and monitoring. It also helps teams focus on the product and the product backlog – and not on the project (project management) – helped by communication and real-time visibility.
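The TDD cycle mentioned above can be sketched in miniature: write failing tests first, then just enough code to make them pass. The `slugify` function and the test names below are hypothetical, chosen only to illustrate the cycle.

```python
# Test-first: these tests are written before the implementation exists.
# slugify() is a hypothetical function used only to illustrate TDD.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello DevOps World") == "hello-devops-world"

def test_slugify_collapses_extra_whitespace():
    assert slugify("  Continuous  Testing ") == "continuous-testing"

# Minimal implementation, written after the tests, doing just enough to pass.
def slugify(text: str) -> str:
    return "-".join(text.split()).lower()

# Run the tests (a test runner such as pytest would normally do this).
test_slugify_lowercases_and_joins_words()
test_slugify_collapses_extra_whitespace()
print("all tests passed")
```

In a real project the tests would live in their own module and run automatically on every commit, which is what ties TDD into the continuous integration pipeline.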
Best practice 2: Set up continuous testing
This second best practice involves several elements from the first – version control, Agile proof of concepts, and the TDD and BDD methodologies.
But continuous testing also encompasses traceability tests, a comprehensive analysis of your testing policy, and a mature evaluation of risk.
Continuous testing means bringing in performance tests based on monitoring IT operations, while also automating repetitive tests, such as smoke tests. Reporting of the results also needs to be automated.
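Automating repetitive smoke tests – and the reporting of their results – can be as simple as running a list of quick health checks and aggregating a pass/fail report. A minimal sketch follows; the check names are hypothetical, and in a real pipeline each check would probe a deployed service rather than return a constant.

```python
from typing import Callable

# Hypothetical smoke checks; in practice these would hit real endpoints.
def check_homepage_loads() -> bool:
    return True  # e.g. HTTP GET / returns 200

def check_login_page_loads() -> bool:
    return True  # e.g. HTTP GET /login returns 200

def run_smoke_suite(checks: dict[str, Callable[[], bool]]) -> dict[str, str]:
    """Run every smoke check and return an automated pass/fail report."""
    report = {}
    for name, check in checks.items():
        try:
            report[name] = "PASS" if check() else "FAIL"
        except Exception:
            report[name] = "FAIL"  # a crashing check counts as a failure
    return report

report = run_smoke_suite({
    "homepage": check_homepage_loads,
    "login": check_login_page_loads,
})
for name, result in report.items():
    print(f"{name}: {result}")
```

Because the suite returns structured data rather than printing ad hoc messages, the same report can be published to a dashboard or fail the pipeline automatically.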
Best practice 3: Monitoring and release by service performance
This is necessary to create a deploy-to-test environment. It involves three main elements:
- Environment management (cloud, on-premise)
- Release management (blue/green deployment, A/B testing, canary release)
- Automatic rollouts according to infrastructure constraints.
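A canary release from the list above can be sketched as deterministic traffic splitting: a fixed fraction of users is routed to the new version while the rest stay on the stable one. The percentage and version labels below are hypothetical.

```python
import hashlib

CANARY_PERCENT = 10  # hypothetical: send ~10% of users to the canary build

def route_version(user_id: str) -> str:
    """Deterministically route a user to 'canary' or 'stable'.

    Hashing the user ID keeps each user on the same version across
    requests, which matters when comparing metrics between the groups.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < CANARY_PERCENT else "stable"

# Roughly CANARY_PERCENT of a large user population lands on the canary.
versions = [route_version(f"user-{i}") for i in range(10_000)]
print("canary share:", versions.count("canary") / len(versions))
```

In production this split usually lives in a load balancer or service mesh rather than application code, but the principle – a small, sticky, measurable slice of traffic – is the same.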
These best practices are critical because they emphasize how teams need to be investing their time in generating value, rather than developing reports. This is exactly what the CALMS framework (Culture, Automation, Lean, Measurement, Sharing) tries to enable organizations to achieve – creating a lean and measurable environment.
These three best practices will be essential to ensuring your DevOps environment helps your business achieve its objectives. But remember, as I pointed out in my earlier article, every organizational DevOps journey is unique. There is no standard strategy that will work for every company. Having said that, these core principles will go a long way to ensuring long-term success.