Instead of asking, Why is QA so expensive?, we should be asking, What is the cost of doing without QA?
At Globant, we build digital products for a living. We partner with our clients to bring their ideas to life, from inception to release. This process looks different for every client—no two apps are built for the same purpose or with the same end users in mind—but there is one constant in every project: a discussion about cost. During this discussion, we provide the client with a project estimate, and give them details about every line item—every project manager, every designer, every dev.
These costs are all fairly straightforward, but one line item generates more push-back than the others: Quality Assurance (QA). More often than not, we hear clients asking questions along the lines of, Why is QA so expensive? The implication is that quality should be a given, so there’s no need to pay extra for this service. It’s understandable that clients question the price of the service, but we think there’s a more important question to consider: What is the cost of doing without QA?
What is Quality?
There’s a difference between how we and our clients think of quality. When clients discuss quality, they are often referring to a subset of a product’s quality aspects, such as its polish, user experience, and stability. These aspects are important, but if I am to be completely transparent, polish and user experience are typically owned by the client, design team, product team, or some combination thereof. It makes sense, then, that a client would be taken aback by the cost of QA services if the only thing they perceive to be getting for their money is a stable product. (To be clear, I agree; a stable product should be a given.) But stability is far from the only thing QA delivers.
This Android guideline does a great job of detailing some other aspects of QA that clients don’t immediately consider, such as functionality, compatibility, performance, security, and the requirements for being featured in an app store. QA continually exercises (tests) a product to ensure that none of these are compromised. I would like to go further, though, than simply converting each of these aspects into a justifiable line item.
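To make that continual exercising a little more concrete, here is a minimal sketch of what two such checks might look like, written in Python with pytest and requests against a hypothetical login endpoint; the URL, credentials, and one-second latency budget are invented for illustration rather than taken from any real project.

```python
# Minimal sketch of two QA checks against a hypothetical service.
# Everything here (URL, credentials, latency budget) is invented for illustration.
import time

import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_login_succeeds_with_valid_credentials():
    # Functionality: the feature does what the requirements say it should.
    response = requests.post(
        f"{BASE_URL}/login",
        json={"user": "qa-demo", "password": "not-a-real-secret"},
        timeout=5,
    )
    assert response.status_code == 200
    assert "token" in response.json()


def test_login_responds_within_budget():
    # Performance: a crude latency budget. A real project would measure
    # percentiles over many runs instead of timing a single request.
    start = time.monotonic()
    requests.post(
        f"{BASE_URL}/login",
        json={"user": "qa-demo", "password": "not-a-real-secret"},
        timeout=5,
    )
    elapsed = time.monotonic() - start
    assert elapsed < 1.0  # arbitrary one-second budget
```

Checks like these run continuously, so a regression in functionality or performance surfaces long before a user (or an app store reviewer) finds it.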
When we work with a client, there are two things we rarely get to discuss properly: risk and risk management. Both are inherent to QA, yet they are often absent from any discussion about the cost of a project.
Let’s investigate risk and risk management in a bit more detail by introducing two hypothetical projects:
- Project Startup (I’ll refer to this project as ‘SU’ from now on): Let’s say that a startup needs to nail their app experience for CES. They have an innovative product leveraging an emerging technology. It’s very important to them to get the experience right for this event, as their funding will run out if they can’t turn a profit soon.
- Project Shiny New Toy (which I’ll refer to as ‘SNT’): For this project, a large, established company wants to explore an emerging technology such as AI, and is not quite sure how this technology fits into its larger product offering. They have set aside some budget and are willing to invest in this project, even with the understanding that it could end up being a financial loss.
For argument’s sake, let’s say both projects are identical in every way. Same designs, teams, features, deadlines—everything except one key aspect: the cost of failure. Let’s examine the cost of failure for each project, as well as an appropriate QA solution for both.
For project SU, launching a successful app is a must, as the company could go under if things go poorly. To ensure a successful project, it is best to control as many variables as possible, such as time and feature scope. Time might be the easiest one to cover, since SU has a hard deadline of CES; if the app is released after the event, we can consider the project a failure. To make sure SU hits that deadline, I would expect most project managers to build conservative timelines into their estimates, since the extra slack lets the project react to unexpected changes. In some instances, scope and cost could almost be used interchangeably, but for clarity we will only discuss scope; you can assume that increased scope roughly translates to increased cost.
Project SU’s scope will be the key to a successful release. If the scope is vague, it will be difficult for the project team to optimize dependencies, and they will be less resilient to change. On the other hand, a clear scope allows for the early development of a robust project plan against which progress can be measured throughout the lifecycle of the project. That clarity is ultimately what allows the project team to adapt, as it is much easier to handle changes in flight when the destination is clear and measurable.
For SNT, however, the context is very different, even though the project is, on paper, identical to SU’s. Timelines, for example, are inherently less risky, as there is little to no penalty for not completing the project on time. Scope is probably the largest difference between the projects: each of SU’s features needs to work for all users, whereas the features built into SNT’s product need only be demoable.
More specifically, the QA scope for each project is very different. For SNT, the QA effort might be minimized so that the development team would have more time or budget. Less risk mitigation would be acceptable, since the goal of the SNT project is to learn a new technology, and not necessarily to release software.
It is admittedly a little unfair to compare these two hypothetical projects to one another. One has to release, whereas the other doesn’t have to do anything other than learn. But examining these two projects makes it clear that not all projects are the same, nor is their cost of failure identical. It would not be difficult to replace these examples with more realistic projects in which the nuance is harder to see but in which QA would have similar impacts.
What these two examples ultimately lead us to ask is, Can you afford not to have QA? It depends, but one way to get more context is to ask, What are you trying to accomplish with this project? The better our team can understand our client’s needs, the better we can support them in both their short- and long-term goals. Project SNT, for example, simply does not want to invest in performance or security if they can use that effort—and money—to learn more about the new technology. Project SU, on the other hand, simply cannot afford to mess up. These two projects do a decent job of exposing a spectrum of acceptable outcomes for a given project, even when, on paper, they are indistinguishable.
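One rough way to reason about that spectrum is to put placeholder numbers on it. The sketch below treats risk as probability of failure multiplied by cost of failure, and treats QA spend as something that lowers that probability; every figure is invented purely to show the shape of the trade-off, not drawn from either project.

```python
# Back-of-the-envelope model with invented numbers: expected loss is
# probability_of_failure * cost_of_failure, and QA spend is assumed to
# reduce the probability of failure.

def total_expected_cost(cost_of_failure: float, p_failure: float, qa_spend: float) -> float:
    """QA investment plus the expected cost of failure."""
    return qa_spend + p_failure * cost_of_failure


# Project SU: a failed launch could sink the company, so the cost of failure is huge.
su_no_qa = total_expected_cost(cost_of_failure=5_000_000, p_failure=0.30, qa_spend=0)
su_with_qa = total_expected_cost(cost_of_failure=5_000_000, p_failure=0.05, qa_spend=150_000)

# Project SNT: an exploratory budget that can be written off if things go poorly.
snt_no_qa = total_expected_cost(cost_of_failure=200_000, p_failure=0.30, qa_spend=0)
snt_with_qa = total_expected_cost(cost_of_failure=200_000, p_failure=0.05, qa_spend=150_000)

print(f"SU:  no QA ${su_no_qa:,.0f} vs. with QA ${su_with_qa:,.0f}")
print(f"SNT: no QA ${snt_no_qa:,.0f} vs. with QA ${snt_with_qa:,.0f}")
# With these made-up figures, the same QA spend pays for itself several
# times over on SU, while on SNT it costs more than the risk it removes.
```

The numbers themselves are beside the point; what matters is the asymmetry they expose, which is exactly why the same QA investment can be essential on one project and hard to justify on another.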
Investing in Quality
So why is QA so expensive? (No, seriously—why?) In short, much of the work a QA team does is misunderstood, or it doesn’t translate directly into a line item. An app’s performance, for example, is one thing QA teams work to ensure, but it’s difficult to capture in a simple line-item estimate. Even with infinite time and resources, can we truly say any project is as performant as it could be? No. So instead of scrutinizing each line item in an attempt to minimize cost, we can state the project’s requirements and build a plan around them that also meets budget expectations.
I submit that the real question being asked here is not why QA is so expensive, but rather, How much should you invest in the project? And the answer depends on one more question: How much risk are you willing to take?