In the first part of this series of articles on testing Agile at scale, I explored the importance of building a quality culture and defining the quality process. Let’s begin this article with a quick review of the key concepts from that first part, which apply when larger programs and structured organizations adopt Agile methodologies as they scale.
- Embracing the Agile methodologies promoted by development and IT teams requires engagement from the business and upper management.
- Avoiding the common mistake of treating testing as the final phase of work requires a change in mindset. Quality is the responsibility of the whole team, and the main goal is to shift testing left, preventing issues by building quality into product development.
- The quality strategy drives the team to release an integrated and tested solution that meets customer needs.
- Do not plan for sequential testing phases; instead, execute different testing types for both functional and non-functional requirements in parallel and incrementally at various layers, within a continuous integration (CI) and continuous delivery (CD) approach.
Testing within the POD
A common mistake in a POD (Agile team) is having testers who care only about creating and automating test cases that are executed at the end of the sprint, like a “testing phase.” Development delays then directly reduce the time available for testing, so testers work after hours or over the weekend to close the stories, or the stories are carried over to the next sprint to complete the testing. Neither is a good option in Agile; shifting testing left is key to success.
Test managers and other test professionals can take a leading role within the development team, facilitating discussions on how to implement the quality strategy while following good testing practices and methods at every stage of the development cycle.
Testers should participate actively in the whole development cycle, including activities such as story refinement, product risk assessment, estimation, planning, pair work, test case design and execution, test data definition, defect reporting and follow-up, test reports, metrics generation, root cause analysis (RCA), quality control sign-off, and the determination of the Definition of Done, among others. Here are some examples of how this participation could be carried out:
- Implementing Test-Driven Development (TDD), writing tests before writing code.
- Testers can work closely with the product owner and business analysts to review and complete the user stories.
- All test specialists can encourage the whole team, during user story refinement, to analyze the impact of changes and identify product risks, guaranteeing that exceptions and risks are discussed. This practice gives the development team a good understanding of the problems to be solved.
- Testers can pair with users, product owners, business analysts, and developers to review the test approach for a user story.
- Testers can pair with team members to include non-functional requirements such as performance, stress, and security in the stories.
- Test automation engineers are responsible not only for automating the story acceptance criteria and any other tests identified as candidates for automation, but also for maintaining the automation framework aligned with the CI pipeline.
- Test automation engineers can monitor automated regression executions.
- Testers who cannot code can still pair with test automation engineers to review the test candidates for automation and discuss the frequency of their execution. They can also help ensure that automated functional and non-functional regression testing is included in the build and release pipeline.
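To make the first practice in the list concrete, here is a minimal sketch of the TDD red-green cycle. The function name `apply_discount` and its behavior are invented for illustration, not taken from any particular product:

```python
import unittest

# Step 1 (red): write the tests first, before apply_discount exists.
# Running the suite at this point fails, which is expected.
class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

    def test_rejects_negative_discount(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, -5)

# Step 2 (green): write just enough code to make the tests pass.
def apply_discount(price: float, percent: float) -> float:
    if percent < 0:
        raise ValueError("discount percent must be non-negative")
    return price * (1 - percent / 100)

# Step 3 (refactor): improve the code with the tests as a safety net.
# Run with: python -m unittest <module_name>
```

The tests reference `apply_discount` before it is written; Python resolves the name only when a test method runs, so the red phase fails with a clear error rather than refusing to load.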
The Definition of Done (DoD) is another key tool for building quality into the development process because it is a way of ensuring that each increment of value has accomplished all of the agreed upon quality conditions to be considered complete.
My recommendation is to implement a unified Definition of Done, followed by all teams, that enables cross-team alignment. You can start with something simple at the story level and then, as the teams mature their processes and practices, review and adjust your DoD, adding more conditions at different levels until you have a scaled Definition of Done.
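One way to keep a unified DoD unambiguous is to express it as explicit, checkable conditions. This sketch uses invented story-level conditions purely for illustration:

```python
# Hypothetical story-level Definition of Done, expressed as named conditions.
STORY_DOD = [
    "acceptance criteria verified",
    "code reviewed",
    "unit tests passing",
    "automated regression updated",
    "no open critical defects",
]

def is_done(completed_conditions: set) -> bool:
    """A story counts as Done only when every DoD condition is satisfied."""
    return all(cond in completed_conditions for cond in STORY_DOD)

print(is_done({"code reviewed", "unit tests passing"}))  # False: conditions missing
```

A scaled DoD would add similar lists at the feature and release levels, each building on the level below.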
Testing at program level
At the program level we can think of three main objectives:
1. Define a comprehensive quality strategy. As previously mentioned, the implementation of the quality strategy has a direct impact on the whole organization.
2. Support the continuous delivery pipeline. Built-in quality practices require controlled infrastructure to build, integrate, deploy, and release solution assets more frequently and continuously across multiple stages of the pipeline.
I recommend having a dedicated team composed of DevOps specialists and both quality and test automation engineers. Additionally, the quality leadership can be part of this team. The main objective of this shared service team is to assist the whole program in building and supporting the development infrastructure, solution integration, end-to-end testing, as well as the system and solution demos before each release.
Some key activities of this team are:
- Define the quality strategy, provide a quality process and guidance on how the Agile teams can adopt good testing practices, and give the PODs all the support they need to execute it through each iteration.
- Define and maintain quality metrics and dashboards to ensure the data is available to assess release progress and provide a transparent, objective understanding of the product delivered. The management team uses this information to make decisions and take corrective actions when needed.
- Provide an automation framework that, in coordination with DevOps, is integrated with the continuous delivery pipeline.
- Define clear quality gates in the continuous delivery pipeline. One example is running build verification tests on every build of the main branch, preventing the introduction of failures into working functionality and providing early and continuous feedback.
3. Deliver an integrated and tested solution to customers. Product integration and integration testing is another challenge when scaling in agile.
Each POD contributes functionality and part of the solution to the shared code base. Two challenges emerge here: on one hand, it is very difficult to manage dependencies and synchronize activities between the PODs; on the other, planning and executing thorough integration testing is the major challenge in delivering solution value for larger systems.
To address this, I also suggest an independent team composed of quality engineering specialists in both manual and automated testing, as well as load and performance specialists. Their main focus is the quality of the features in the Program Increment scope.
This team is mainly responsible for verifying both functional and non-functional behavior of the product as a whole, considering the integration of all features within the application. The testing executed may include the program integration testing, regression testing with a regular cadence, as well as non-functional testing like load & performance, installation, compatibility, localization, and internationalization, among others.
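The quality-gate idea mentioned under the pipeline objective above can be sketched as a small, fast build verification suite that the pipeline runs on every commit to the main branch, failing the build when any smoke check fails. The test names and checks here are hypothetical stand-ins, not a real product suite:

```python
import sys
import unittest

# Hypothetical build verification tests (BVT): a small, fast smoke suite
# covering only critical paths, so it can run on every build.
class BuildVerificationTests(unittest.TestCase):
    def test_application_module_imports(self):
        # Stand-in for "the deployable package can at least be imported".
        import json  # replace with the application's top-level module
        self.assertTrue(hasattr(json, "loads"))

    def test_core_workflow_smoke(self):
        # Stand-in for "the main user workflow still works end to end".
        self.assertEqual(sum(range(5)), 10)

def run_quality_gate() -> int:
    """Run the BVT suite; return a non-zero exit code on failure so the
    CI pipeline stops before promoting the build."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(BuildVerificationTests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return 0 if result.wasSuccessful() else 1

# In the pipeline step: `python run_bvt.py` — a non-zero exit fails the build.
if __name__ == "__main__":
    sys.exit(run_quality_gate())
```

The key design point is the exit code: CI servers treat any non-zero exit as a failed step, so a red smoke test blocks the rest of the pipeline automatically.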
Testing at business level
At this level, product management translates the business vision and needs into strategic product objectives. This is the link between organizational goals and the development work being done, so this is where quality must be embedded into the organization.
You can consider:
- Having a quality mindset at the management level to ensure that business strategic objectives don’t focus only on adding “new business functionality.” Reducing technical debt, maintaining a robust test architecture, and supporting and developing good quality practices must also be prioritized to enable sustainable delivery.
- Ensuring that compliance for Epics/Features is defined and that the required functional and non-functional needs are implemented.
- Defining KPIs and quality metrics that enhance the transparency of development and operations, sustainably reducing the gap between what is needed and what is implemented.
- For larger or more complex solutions composed of different products, system testing can also be executed before a production release. This might include testing multiple products together in a system environment close to production, with specific software and hardware, and executing a variety of test types such as performance, robustness, and customer use cases.
Based on my experience, even for complex and larger programs or organizations, you can embed testing and drive the adoption of a quality culture at every level to enable multiple teams in a scaled model to build in quality and collaborate on a single release.
My main recommendations to accomplish this include:
- It is not sufficient to simply implement the Scaled Agile Framework with its current guidance on quality practices; quality requires attention at all levels, in every discipline, and an Agile quality culture.
- Keep the focus on having a clear and comprehensive quality strategy so people know what is expected of them at every level. Take advantage of including the main quality gates in the Definition of Done.
- People in testing and quality roles should have the knowledge to take on a coaching role, sharing with the team the advantages of having a quality mindset and encouraging them to implement good practices to build in quality.
- Shift testing left. Taking quality into account from the planning sessions onward ensures that quality-related work gets sufficient attention and is appropriately prioritized.
- Discuss the test approach for each story and feature with the whole team. Each point of view contributes not only to a clear understanding of what should be done, but also to a proper analysis of how a change will impact existing functionality, so that adequate regression testing can be planned.
- Communication among teams and managing dependencies are typical challenges of scaling; keep a close eye on them to facilitate cross-team interactions.
- Test automation and a continuous delivery pipeline are not optional, and they require technical knowledge at each level.
- Having an independent team where quality specialists and DevOps work together to provide the right tools and infrastructure is key to success with a CI/CD implementation.
- Even though each Agile team and Agile release train is autonomous and should be committed to delivering stories and features with high quality, each one contributes to a shared code base, so correct integration should be a major concern. I strongly recommend an independent, specialized team responsible for verifying both functional and non-functional behavior of the entire application, executing both manual and automated tests.
- Last but not least, it is key to define KPIs and quality metrics that not only provide information about product quality status and progress against objectives, but also enhance the transparency of development and operations.
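To make that last recommendation concrete, here is a small sketch of how two common quality KPIs could be computed from raw test and defect counts. The metric names and the sample numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ReleaseQualityData:
    tests_passed: int
    tests_executed: int
    defects_found_internally: int
    defects_found_in_production: int

def pass_rate(d: ReleaseQualityData) -> float:
    """Share of executed tests that passed."""
    return d.tests_passed / d.tests_executed

def defect_escape_rate(d: ReleaseQualityData) -> float:
    """Share of all known defects that escaped to production."""
    total = d.defects_found_internally + d.defects_found_in_production
    return d.defects_found_in_production / total

# Invented sample data for one Program Increment.
pi_data = ReleaseQualityData(
    tests_passed=470, tests_executed=500,
    defects_found_internally=45, defects_found_in_production=5,
)
print(f"Pass rate: {pass_rate(pi_data):.0%}")                    # Pass rate: 94%
print(f"Defect escape rate: {defect_escape_rate(pi_data):.0%}")  # Defect escape rate: 10%
```

Feeding figures like these into a shared dashboard is what gives management the transparent, objective view of progress that the program-level metrics activity described earlier calls for.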
Recommended further reading
Scaled Agile Framework
EuroSTAR eBook 2018 Series – “Testing and Quality in the Scaled Agile Framework for Lean Enterprises” by Derk-Jan de Grood & Mette Bruhn-Pedersen