Should We Get Rid of the Software QA Department?

Derek Yoo, CTO, Fuze
It was recently reported that Yahoo has gotten rid of their QA department. The number of comments this post attracted made me think of a CTO conference I recently attended, where the same question had come up. The conference audience was fairly split on the issue, with several CTOs pursuing a similar strategy to Yahoo, and other CTOs staying with a traditional centralized QA department. The question definitely struck a nerve, and people in both camps were animated in defending their practices. Those of you coming from older, more established companies are probably thinking, “Get rid of the QA department? Are you nuts? Who is going to be responsible for quality?” 

I have to admit—it does sound a little crazy to people like me who cut our teeth using a more traditional waterfall software development process. After all, as projects progress down the waterfall, we always needed QA as the last step to ensure reasonable quality levels for products. There was a fundamental check and balance built into the process where QA could push back on engineering if there were too many defects. This check was important, as the price for failing to deliver acceptable quality levels is high. Nothing undermines customer confidence and goodwill quicker than software with lots of bugs. So what is it really that Yahoo is saying when they talk about getting rid of QA?

The argument that Yahoo and others are making is less about getting rid of QA and more about moving where, and with whom, responsibility for software quality lives. In Yahoo's model, responsibility for quality is pushed back onto engineering. By removing the formal QA step from the process before code ships, you are in essence removing the "safety net" that engineers have long enjoyed. With the safety net removed, the argument goes, engineers will be forced to behave differently: they will engineer quality and test automation into the software itself, resulting in higher, not lower, quality. Proponents of this camp go further, arguing that the result is greater efficiency overall, with fewer defects and a quicker development cycle than the traditional model with a separate QA department sitting in the software delivery pipeline.

It is also worth pointing out that while Yahoo used this shift to downsize its technical staff, the change is not necessarily about downsizing. It is about moving people with a quality focus from a separate and distinct QA group into the engineering teams themselves. This should take away one of the common complaints I've heard from QA: that they are never involved early or deeply enough in the engineering cycle. And it underscores the point that staff focused on software quality should look more like engineers these days than like the functionally oriented manual testers of the past. You want your quality-focused staff investing in test automation rather than manual testing.

So what do the companies with more traditional QA departments have to say to all this? Most of them first point out that, of course, fewer defects are found in the software when you remove formal QA, but that doesn't mean the defects don't exist! The comment may sound like a troll, but it makes an important point: you have to be careful with the data when judging the efficacy of a quality strategy like Yahoo's. The risk is that the number of defects being detected goes down, on the surface validating the strategy, while the measurements fail to reflect reality. You really need to look at the number of defects that make it all the way to your end users versus those detected as part of the development process. If the number of defects reported by your end users is going up, the strategy is not working.
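One way to make that measurement concrete is a defect "escape rate": the share of a release's defects found by end users rather than caught internally. The sketch below is illustrative only; the `found_by` field and its values are assumptions, not any particular bug tracker's schema.

```python
# Minimal sketch of a defect escape-rate metric. A falling count of
# internally detected defects can mask a rising share of defects that
# "escape" to customers, which is the number that actually matters.

def escape_rate(defects):
    """defects: list of dicts with a 'found_by' key ('internal' or 'customer')."""
    total = len(defects)
    if total == 0:
        return 0.0
    escaped = sum(1 for d in defects if d["found_by"] == "customer")
    return escaped / total

# Release B logs fewer defects overall, yet twice the escape rate:
release_a = [{"found_by": "internal"}] * 40 + [{"found_by": "customer"}] * 10
release_b = [{"found_by": "internal"}] * 15 + [{"found_by": "customer"}] * 15

print(escape_rate(release_a))  # 0.2
print(escape_rate(release_b))  # 0.5
```

The point of tracking the ratio rather than the raw defect count is exactly the one made above: fewer detected defects can mean better software, or merely weaker detection.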

In many ways it comes back to the idea of checks and balances in your development process. Companies with a traditional QA department argue that it is dangerous to hold the same team responsible both for shipping features on time and for shipping them at the right level of quality. Given the classic project-management triple constraint of scope, schedule, and cost, quality is often at odds with those constraints and becomes the corner that is cut to get the product shipped on time. Moving from the team down to the individual level, there is also risk in having the person who created a piece of software be responsible for testing it. Advocates of the traditional QA model point out that the developer creating a given feature often has a specific mental model of how the software should be used, and that model frequently differs slightly from the one a non-engineer user brings to the table. A good QA engineer can find bugs that result from behavior outside the developer's mental usage model, so-called "edge cases." Covering a variety of edge cases well is a good measure of the quality of a software product.

A more practical reason why a traditional QA setup is needed is that there is a limit to what can be automated. Automated test coverage is great, and getting as much of it as possible into your software products is an undeniably worthy goal. But certain types of tests are genuinely difficult to automate. For example, you can automate the testing of backend API services and user interface flows relatively easily. But writing automated tests to check whether a user interface looks right, has the right colors, or has its main elements in the right places is fairly difficult. A human can judge all of those things far more quickly than an automated test can.
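The contrast is easy to see in code. A backend check automates well because the expected output is structured data you can assert on; "does the page look right" has no equivalent one-line assertion. The `get_user` handler below is a hypothetical stand-in for a real API endpoint, not any actual service.

```python
# Sketch of why backend API testing automates easily: the handler
# returns structured data, so correctness reduces to simple assertions.
# get_user is a hypothetical stand-in for a real service endpoint.

def get_user(user_id):
    """Hypothetical handler: returns a JSON-like dict for a user lookup."""
    if user_id <= 0:
        return {"status": 400, "error": "invalid id"}
    return {"status": 200, "body": {"id": user_id, "name": "Ada"}}

def test_valid_id_returns_200():
    resp = get_user(7)
    assert resp["status"] == 200
    assert resp["body"]["id"] == 7

def test_invalid_id_returns_400():
    assert get_user(0)["status"] == 400

# These run unattended on every build; there is no comparable assertion
# for "the layout and colors look right," which still needs human eyes.
test_valid_id_returns_200()
test_invalid_id_returns_400()
```

This is the asymmetry the traditional-QA camp points to: machine-checkable behavior versus judgments of visual correctness.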

As with most questions that generate sustained, spirited dialog, there is a kernel of truth on each side of the argument. Here are some conclusions I have drawn from the debate. Fundamentally, pushing quality back into engineering and setting a really high quality bar there is a good thing. I believe we will continue to see a shift toward decentralized QA models and the corresponding move of quality-focused resources directly into engineering teams. These QA resources will be much more developer-like than functional. But at the same time, there still needs to be a QA phase or gate within each feature sprint. There will still be a place for quality engineers, who are different from the engineers tasked with creating the features. These quality engineers will continue to work on automation and bring their unique skill at finding edge-case bugs to rapidly evolving software products, albeit from within the engineering team instead of from outside it.
