Updated: Aug 20
Businesses fail and investments flop for a multitude of reasons. One of the top causes is poor-quality products and services, a primary reason customers eventually walk away.
Testing plays a significant role in early-stage product design and innovation as well as in the end-to-end software development lifecycle. In major software companies such as Google and Microsoft, unified engineering, where the test discipline is baked into the engineering team from concept to maintenance, is the expected norm.
For example, in a technical due diligence exercise, quality, in various forms, is typically the number one reason why an investor may shy away from an investment. This can be attributed to the quality of the architecture, such as the ability to scale, security flaws, accumulated technical debt, or even the depth of customer scenarios (i.e., use cases) covered.
This article focuses on testing software products, but the concepts can be applied to any other industry.
At its core, software quality testing is about continuously assessing risks and ensuring that an organization is building the right thing, building it right, and supporting it well.
If you are a young company, it will be challenging to get off the ground if your products or services are of low quality. If you are a growing company, bad quality will limit your expansion. If you are a mature organization, your brand might buy you some time, but disruption and customer attrition will be imminent.
There are two common quality assurance misconceptions that leave companies trapped with poor-quality products or services that are hard to fix and cost them customers:
Considering quality as an activity taking place at the end of a development cycle.
Not using a systematic and strategic approach with sufficient automation.
When Toyota pioneered lean production, the practice that later inspired Scrum, it was motivated by moving quality upstream for efficiency gains. Toyota realized that when issues were found early on, they were much cheaper to fix. It also realized that quality assurance in the early envisioning stages is essential; critiquing the product can significantly enhance the quality of the outcome.
Traditionally, testing was hardly an official part of the development effort. In recent years, with the standardization of DevOps and Agile, quality testing has become mainstream and moved to the core of the software development lifecycle, adding business value upstream.
The worst type of testing is the kind that is an afterthought and not systematic: a ready-made or almost-ready product handed off to be tested randomly.
The best kind of testing is the strategic type that starts at the concept level to be critiqued and continuously moves along the development cycle. Testing is a science and requires a proper investment, design, and strategy.
The Testing Mindset
There are many ways to categorize testing with various sub-types. This is why ownership of the discipline is necessary within an organization. This allows the team to go to the depth levels needed and identify the best strategy suitable for the product or service being built. This is typically done by a test manager who devises a test strategy and a test plan to be executed by other roles in the test discipline.
At a high level, testing can be categorized into two general categories:
Validation - Seeks to understand what is being designed and built and ensures it’s the right thing.
Verification - Seeks to ensure that what is being built is what was planned and is built right.
In this representation, several sample testing types are categorized across the development lifecycle. Of course, each product and situation is different; testing has to be involved from the beginning, and a strategy has to be well considered to decide which testing types are essential versus merely nice to have.
Test Activities and Test Quadrant
A popular way to think about testing that ensures the highest quality is the test quadrant, which breaks down testing responsibilities into four key areas:
Test the business and customer-facing aspects
Test the product viability and health
Test the technology used in the product
Test the functionality of the product
Poor Quality Can Be Punitive To The R&D Spend
Like it or not, there is a real cost to quality, and it has to be paid sooner or later. Much research has been done to estimate the actual cost of finding bugs late in the cycle compared to upstream.
The representation below estimates the effort/cost involved in catching quality defects in early vs later stages.
But perhaps there is nothing better to drive the point home than real-world examples.
Everyone who boarded an airline flight in late 2016 and 2017 recalls the flight attendants’ warnings about the Samsung Galaxy Note 7. A flaw in the battery led to overheating and, in some cases, fires. This ultimately cost Samsung about $17 billion to address, not to mention the brand impact.
Another well-known case was LinkedIn in 2012, when attackers stole 6.5 million passwords and posted them for sale on a hacker forum. Later, in 2016, the breach was determined to have affected 165 million user accounts.
Both are examples of situations where upfront, systematic testing would likely have spared these businesses significant monetary and reputational costs. The quality of their products was compromised because of weak testing; they saved upfront but paid later.
Validation Testing Is An Innovation Engine Preventing Product Mortality
Testing goes much deeper than ensuring the end product functions well. The biggest risks to innovation surface during the validation stage; this is where the test strategy should shine.
There is critical importance to testing in the early stages of the product development lifecycle. This is the stage where critiquing the product strategy and analyzing the market and usage data to understand the validity of any assumptions made impacts everything else downstream.
Nokia and BlackBerry were amazing success stories, and both enjoyed tremendous market share in their heyday. Both companies eventually fell victim to their own success and missed the next wave of their potential glory days in a world led by smartphones. Although it would not be fair to take a simplistic view of a single root cause for these failures, in both cases, the critical lens of a testing mindset was lacking and was an unmistakable contributor.
BlackBerry was the Apple of smartphones by 2005. Several factors led to its failure, but one of those factors was the inability to adapt. This was largely due to limited data analysis used to acquire healthy consumer insights to evolve their product.
In October 1998, Nokia became the best-selling mobile phone brand in the world. Nokia’s story was similar to BlackBerry’s but even more unsettling. Nokia had the right data, but a lack of corporate alignment prevented it from acting on that data until it was too late.
A third popular example is Xerox, which practically invented the personal computer yet could not exploit its invention. While the common story suggests that its leaders had their heads in the sand, the reality was again rooted in an inability to test early-stage concepts and evaluate the data properly. Most people today would not attribute the invention of the PC to Xerox, and the company never reaped the benefits of it.
Testing, especially in larger organizations, has historically been considered a second-class activity that often takes place after the meal has been prepared. This misses the opportunity to evaluate whether it is the right meal for the diners or if the ingredients are right. It’s like inviting a friend for dinner, and you decide to cook a seafood dinner. But the repeated warnings from your spouse about how the friend has seafood allergies (i.e., data analysis and voice of the customer) fall on deaf ears (i.e., complacent and dysfunctional organization).
There are dozens of examples, including household names such as Blockbuster, AOL, MySpace, and GM, that failed for the same root causes. The internet is full of failures due to quality issues, ranging from the global financial crisis to the Challenger space shuttle explosion and the BP Deepwater Horizon explosion and oil spill.
All of those examples share a common weakness: an inability to test concepts early and leverage data, which left them cannibalized by the competition. They failed to ask the famous question, “Are we building or delivering the right thing?” This is a key component of the validation stage (Step #1 in the picture) in the testing process, one that often takes a back seat or is completely absent.
We are living through a few modern-day examples of how much critical thinking and upstream testing at the conceptual requirements level matter. The most prominent are taking place as we speak in traditional financial banks (KPMG) and big retailers (HBR), and we will see how these stories unfold.
Testing early is the fastest way to fail early and gain the most benefit!
Automation Testing Done Right Is A Quality Advantage And R&D Spend Boon
Companies often get their automation strategy wrong. They either over-invest where it matters less or under-invest where it matters most. In both scenarios, they suffer higher costs and lower quality.
“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”- Bill Gates.
A general rule of thumb is that automation is expensive to develop and maintain but essential. Human intelligence is too valuable to be spent on repetitive tasks. To strike the right balance, automate everything you can, but only when there is a healthy return on investment and only when the domain is well understood enough to ensure efficiency.
The key four questions every test manager should ask are:
What to Automate? Everything possible.
What Not to Automate? Use the Test Quadrant as a guide.
When to Automate? When the process is efficient and well-understood.
What’s a Good Automation Balance? Reference the Test Pyramid as a guide.
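The ROI question above comes down to simple break-even arithmetic: automation pays off once the cumulative savings over manual runs exceed the cost of building and maintaining the automated version. A minimal sketch in Python, where the function name and cost figures are illustrative assumptions, not benchmarks:

```python
def automation_breakeven_runs(build_cost: float,
                              maintenance_per_run: float,
                              manual_cost_per_run: float) -> float:
    """Number of test runs after which automating becomes cheaper than
    manual execution. Solves for n in:
        build_cost + maintenance_per_run * n < manual_cost_per_run * n
    """
    saving_per_run = manual_cost_per_run - maintenance_per_run
    if saving_per_run <= 0:
        # Manual execution is cheaper per run: automation never pays off.
        return float("inf")
    return build_cost / saving_per_run

# Illustrative numbers: $4,000 to build, $10 upkeep per run, $90 per manual run.
print(automation_breakeven_runs(4000, 10, 90))  # 50.0 runs to break even
```

A suite that runs on every commit crosses such a threshold within days, while a rarely exercised UI flow may never reach it, which is one reason the pyramid below weights the layers as it does.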
Aside from deciding what to automate, it is also important to strike the right balance of where to invest in automation. A popular guide is the automation pyramid. In essence, the point of the pyramid is that unit tests should make up the largest percentage of the automation effort. At the top of the pyramid are UI tests, which should constitute the smallest share of the automation investment because the UI changes often and may offer a lower ROI in some cases.
In between those two layers, there are a host of automation opportunities, which include everything from regression testing to integration testing.
Another, more mathematical view of the test pyramid is to assume you have 100 automated tests in total. In this case, roughly 90 would be unit tests, 6 integration/component tests, 3 system/regression tests, and 1 UI test.
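The 90/6/3/1 split above can be expressed as weights and scaled to any test budget. A short Python sketch; the weights simply restate the illustrative split, not a universal standard:

```python
# Assumed pyramid weights, taken from the illustrative 90/6/3/1 split.
PYRAMID_WEIGHTS = {
    "unit": 0.90,
    "integration": 0.06,
    "system_regression": 0.03,
    "ui": 0.01,
}

def allocate_tests(total: int) -> dict:
    """Split a total automated-test budget across the pyramid layers."""
    return {layer: round(total * weight)
            for layer, weight in PYRAMID_WEIGHTS.items()}

print(allocate_tests(100))
# {'unit': 90, 'integration': 6, 'system_regression': 3, 'ui': 1}
```

Scaling the same weights to a 500-test budget, for instance, suggests about 450 unit tests and only 5 UI tests, keeping the most volatile, most expensive layer thin.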
The decisions of what to automate and when to automate should not be taken lightly and should follow the best practices for an optimal outcome.
Rationale Behind The Test Organization Roles
Depending on the size of the team and the product, the composition of a test team will vary between organizations. However, the roles have to be fulfilled even by one person if needed. All of these roles have to be accounted for in some respect.
Test Manager: Responsible for the test strategy and test planning
Test Architect: Responsible for the test infrastructure and tools
Software Design Engineer in Test: Develops the necessary tools to test
Test Automation Engineer: Develops the necessary scripts used for automated testing
Software Test Engineer: Develops the test cases and tests the software
The ratio between test and development teams also varies depending on the product's maturity and the business's size. For example, in a new product scenario, the ratio may rise to 4:1, 5:1, or more if needed. An average Dev:QA ratio can range between 2:1 and 3:1 in most organizations, but these ratios are for guidance only.
Perhaps a more critical aspect is the art of hiring software test engineers: what to keep in mind and which skills to seek. Great software test engineers should also be developers and know how to read and write code. They are also fantastic problem solvers with the ability to connect the dots and assemble the big picture.
Specific Testing Types and Sub-Testing Types Defined
Testing is a complex science where a test team would need to figure out the best approach, the types of tests needed, and what are the adequate KPIs based on the risk levels for a particular product. This is part of the test planning exercise that must take place in the early stages to ensure enough time to prepare.
There are many test categories, such as black-box testing, white-box testing, manual testing, and automated testing. The most common approaches are summarized in the table below.
Within the categories, there are testing sub-types. The test leadership, along with the team, typically determines the Functional Testing and Non-Functional Testing priorities and the timing based on the levels of risk and resources available and decides where best to invest.
The tables provided below show the most common testing types and sub-types and their definitions.
The key functional tests expected of mature organizations, whose absence is guaranteed to cause issues, are Unit Testing, Regression Testing, Feature Testing, Smoke Testing, Integration Testing, Acceptance Testing, Exploratory Testing, and Code Analysis Testing. These are key components of any software development exercise and need to be continuously balanced.
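To make the distinction between two of these types concrete, here is a minimal Python sketch of a unit test versus a smoke test; the function under test and its names are hypothetical, invented for illustration:

```python
# Hypothetical function under test (illustrative only).
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: verify one function in isolation against known cases,
# including the error path.
def test_discount_applied():
    assert apply_discount(200.0, 25) == 150.0

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Smoke test: a quick sanity check that the basic path works at all,
# typically run first to decide whether deeper testing is worthwhile.
def test_smoke_happy_path():
    assert apply_discount(9.99, 10) > 0
```

Functions named this way are picked up automatically by common runners such as pytest; a passing smoke suite is often the gate that allows the longer regression and integration stages to run.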
The key non-functional tests expected of mature organizations, whose absence is guaranteed to cause issues, are Security Testing, Vulnerability Testing, Penetration Testing, Stress Testing, Performance Testing, and Disaster Recovery Testing. The intent is not to play down the importance of other types of tests but to emphasize a few expected to be core to most organizations developing software.
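Non-functional tests assert on qualities rather than outputs. As a sketch of the simplest form, a performance check that fails when a routine exceeds a latency budget; the routine and the 0.5-second budget are illustrative assumptions:

```python
import time

# Hypothetical routine under test (illustrative only).
def build_report(rows: int) -> list:
    return [i * i for i in range(rows)]

def meets_latency_budget(budget_seconds: float = 0.5) -> bool:
    """Performance check: does the routine finish within its latency budget?"""
    start = time.perf_counter()
    build_report(100_000)
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds
```

Real performance and stress testing uses dedicated load tools and statistically meaningful sampling, but the principle is the same: the pass/fail criterion is a measured quality attribute, not a computed value.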
Most organizations don’t implement every type of test but rather decide, based on the product needs as well as the risk levels, what is best to implement. It is also common to mature the test practices and reach within the development lifecycle by incrementally adding additional test types as the product grows and matures.
7 Best Practices For Test Organizations
While this list below is not what you might typically expect as a standard definition of test best practices, these are critical considerations for ensuring the test has an impact across the end-to-end product development lifecycle in software development.
Testing Tools Choices Are Important
The ecosystem of tools is a very large topic and outside the scope of this article, but it is nevertheless important to highlight that continuous research, awareness of tools, and choice-making can help save an organization a great deal of time and magnify the overall testing quality. It’s important to spend the upfront time researching, piloting, and investing in the right tools. A healthy list of test tools can be found at Guru99.
Test Strategy And Test Plans Are Essential Exercises
The act of conducting an initial “test strategy” as well as “writing the test plan” is a critical exercise for good outcomes. It forces the team to think collaboratively through the product itself, what is to be tested, and how. The same principle is applied to the process of creating test cases and test suites, especially in the early stages to decide what may become automated tests, regression tests, or remain manual tests.
These activities aim to have a well-considered approach and not jump into testing without a plan.
Pair Programming Should Be Applied To Testing
Pair programming is an agile software development technique where two people collaborate on a given task. For the untrained eye, it is counter-intuitive and may be viewed as redundant. But the quality benefits are well known and have been demonstrated by statistics showing how end quality is positively impacted. Typically, the observer considers the strategic direction and explores areas of improvement while the driver focuses on the tactical aspects of completing the task. The roles swap regularly.
The same technique is used by successful teams in quality testing as well. During white-box testing exercises, it provides an advantage in catching bugs early. It is also applied during early-stage software product validation; these activities include testing during the planning phase, data analysis, concept testing, and exploratory testing, which enhances the overall quality of the requirements.
Use A Clean Code And Clean Architecture Mindset
While architecture and code are often viewed as the responsibility of architects and developers, that’s a legacy mindset. The quality, structure, and cleanliness of the code and the architecture are a combined team effort that must involve the test teams. The mental model is “the earlier the architecture and code are reviewed and critiqued, the less effort and cost will be invested to fix those later and the higher the quality of the product will be.” It’s the same as how Toyota’s lean production learned that solving problems upfront saves time and money.
DevOps Is A Mandatory Practice
If you remember when development teams threw the code and product over a fence to the operations teams, you might laugh at how inadequate and inefficient the back-and-forth was.
DevOps was born out of this frustration and established a set of engineering practices that combines software development, IT operations, processes, and tools. It is used to shorten the development lifecycle and provide continuous delivery with higher software quality. It gets teams (i.e., people) communicating. It emphasizes a collaborative mindset as well as automation.
In a mature DevOps setup, the collective team is responsible for continuous Integration, Delivery, Testing, and Monitoring. Test and Automation are built-in.
Adopt Unified Engineering
Life is not black and white, and neither are people. What is ultimately important is to leverage the talent and strengths of engineers in any capacity possible and give them the freedom to create and innovate. Unified Engineering is a concept also practiced in Agile development, where the team owns the delivery of the product or service: everyone contributes, and everyone is responsible. Unified Engineering makes the classic organizational titles less formal, with a career ladder that rewards high-impact individual contributors. It means that anyone can contribute where they have the strength and passion to create and ship a better software product, with fewer constraints than in a classical role-based paradigm.
Use the 20% Rule
Establish and encourage engineers to make use of the 20% rule popularized by Google. Creativity unlocks innovation, and even the "lazy" engineers will find more efficient ways of doing things. Some of the best teams I have worked with created impressive innovations and ways to automate things. The 20% time unshackles creativity that is at times constrained by organizational processes and benefits both the organization and morale.
Power-Up QA Testing For Maximum Benefits
This article has outlined the value, process, roles, and more to ensure a high-quality product and avoid the costs that Samsung and LinkedIn incurred. The QA testing approach for a given organization is as unique as a fingerprint. While everyone shares patterns and curves, the intricacies of the testing process vary between organizations and distinct software products.
Investing early in testing strategies and automation is imperative to limiting downstream problem areas, saving costs, ensuring better quality products, solidifying superior customer service, staying innovative, and improving time-to-market and business longevity. But more importantly, treating testing as important as other software roles in the organization is the first step in that journey.
About the Author
Hazem has been in the software and M&A industry for over 26 years. As a managing partner at RingStone, he works with private equity firms globally in an advisory capacity. Before RingStone, Hazem built and managed a global consultancy, coached high-profile executives, and conducted technical due diligence on hundreds of deals and transformation strategies. He spent 18 years at Microsoft in software development, incubations, M&A, and cross-company transformation initiatives. Before Microsoft, Hazem built several businesses with successful exits, namely in e-commerce, software, hospitality, and manufacturing. A multidisciplinary background in computer engineering, biological sciences, and business with a career spanning a global stage in the US, UK, and broadly across Europe, Russia, and Africa. He is a sought-after public speaker and mentor in software, M&A, innovation, and transformations. Contact Hazem at email@example.com.