Quality testing in software engineering comes with a unique set of challenges. Unlike a physical product such as an engine, where you can test everything from tolerances to horsepower to torque, enterprise software is complex enough that it is impossible to test completely.
With this in mind, having a framework to rein in the complexity is not just compelling but necessary. Recently, Chris Valas, Logi’s Senior Director of Programming and User Experience, gave a webinar on how to build a successful quality assurance (QA) program with the right processes and the right people. Chris answered dozens of QA questions during the webinar; a few of them are summarized here.
Q: First, can you describe what quality assurance is?
A: You can’t talk about quality assurance without talking about ‘quality.’ Quality isn’t an independent artifact; it is a perception inside a customer’s head. If a customer believes the software is high quality, then it is high quality. If a customer perceives the software as low quality, then unfortunately, it is below par.
The customer’s perception is tied to more than just the software. In fact, it also has to do with the quality of the documentation that comes with the product, the way it’s being positioned in the market, and how it is sold. A customer’s perception of the quality, then, is based on these factors combined.
The activity of delivering that ‘quality,’ the positive perception of the product, is the goal of QA. QA itself can only influence the reliability, scalability, and ease of use of the software. The real measure is whether the customer’s perception matches what they actually get out of the product, and QA can’t test or gauge that directly.
Q: How does a QA team measure whether the customer’s perception matches what they get if it can’t be tested directly?
A: Obviously, you can’t put a yardstick up to your customers’ heads and measure their perception of the product. But you can measure by proxy. You can measure what your customers are saying. You can look at the number of calls and the severity of issues from the field.
To do this, your QA team should work very closely with your company’s support team to know how many calls they’re getting per day and per month. They should also understand the severities of the tickets. Are there a lot of low-level tickets? If so, you may have a problem, but it isn’t a disaster. However, if the support team is taking a lot of high-severity tickets then you know you’ve got a real problem on your hands.
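As a rough illustration of that kind of tracking, here is a minimal Python sketch that tallies support tickets by severity. The ticket data, the severity scale (1 = most critical), and the escalation rule are all assumptions for illustration, not figures from the webinar:

```python
from collections import Counter

# Hypothetical support tickets: (ticket_id, severity), where severity 1 is
# most critical. Real data would come from your ticketing system.
tickets = [
    (101, 3), (102, 1), (103, 2), (104, 3),
    (105, 3), (106, 1), (107, 4), (108, 2),
]

by_severity = Counter(sev for _, sev in tickets)

# Flag the situation Chris describes: many high-severity tickets signal a
# real problem; a pile of low-severity ones is a concern, not a disaster.
high = by_severity[1] + by_severity[2]
low = by_severity[3] + by_severity[4]
print(f"high-severity: {high}, low-severity: {low}")
if high > low:
    print("Escalate: high-severity tickets dominate.")
```

Tracking these counts per day and per month, as suggested above, turns the support queue into a trend line the QA team can act on.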
The key to measuring quality or testing for quality is to have a QA team that is very customer-oriented. The QA team should always be the last gate before something goes out to the customer. They must understand how the product is sold and how the customer expects to use the product.
Q: What should a QA team do with all the feedback they get from customer support tickets?
A: You should have well-defined severity levels for the product. Maintain a map showing where ticket clusters fall across the product. Those clusters tell you which features’ test cases to prioritize. They also tell you what customers are doing with the product, what they’re discovering they can do with it, and what they want to do with it.
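One lightweight way to maintain such a cluster map is to group tickets by product area and rank areas by ticket volume. In this sketch the area names and severities (1 = most critical) are hypothetical:

```python
from collections import defaultdict

# Hypothetical tickets tagged with the product area they touch and a
# severity (1 = most critical). Area names are illustrative.
tickets = [
    ("export", 2), ("export", 1), ("export", 2),
    ("dashboards", 3), ("login", 4), ("export", 3),
]

clusters = defaultdict(list)
for area, severity in tickets:
    clusters[area].append(severity)

# Rank areas by ticket volume; the densest clusters are where test cases
# should be prioritized in the next cycle.
ranked = sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)
for area, severities in ranked:
    print(f"{area}: {len(severities)} tickets, worst severity {min(severities)}")
```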
If your support team is taking a lot of tickets in an area of the product that was previously quiet, there really may be a problem. However, a good QA team is also going to find out how the product is currently being sold and how it is going to market.
Also, pay attention to ‘pilot error’ ticket clusters. A pilot error is when a customer believes there’s a defect, but you determine there isn’t actually a product problem—that the product is working as designed.
Pilot errors tell you one of three things: there’s a documentation gap, a training gap, or a UX usability problem. Most often it’s the last of these, and you need to get your UX people to figure out what’s going on. That takes time, and in the meantime your support team will need to help the customer understand how to use the feature as it stands.
It will need to be addressed, however. UX ‘defects’ are real to the customer. The most important thing is to never ship the product with the same defect twice.
Q: There are a lot of competing issues and other dependencies to consider in executing an effective QA program. How should a QA manager plan testing cycles?
A: Good QA execution has predictable scheduling. A predictable QA cycle does not test every feature for the same amount of time; big features take longer to test than small ones. A good QA manager plans by analyzing the incoming feature set and prioritizing features by size and/or importance. One feature may need three test cycles, while another may need only one or two.
Smart QA teams plan around the fix time and not just the test time. Be aware that most of the delay in getting through a QA cycle is really in the development team’s response. It usually takes more time to fix defects than it does to find them.
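To show how planning around fix time changes the math, here is a small sketch that estimates a feature’s calendar time as cycles multiplied by test time plus fix turnaround. All the feature names and day counts are illustrative assumptions:

```python
# Hypothetical planning sketch: estimate a feature's QA calendar time as
# test time plus expected fix-and-retest time, which usually dominates.
features = [
    # (name, cycles_needed, test_days_per_cycle, expected_fix_days_per_cycle)
    ("report designer", 3, 2, 4),
    ("csv export", 1, 1, 1),
    ("theming", 2, 1, 2),
]

def schedule_days(cycles, test_days, fix_days):
    # Each cycle is one test pass plus the dev team's fix turnaround.
    return cycles * (test_days + fix_days)

for name, cycles, test_days, fix_days in features:
    total = schedule_days(cycles, test_days, fix_days)
    print(f"{name}: {cycles} cycle(s), ~{total} working days")
```

Note how the fix-turnaround column, not the test column, drives most of the schedule, which is the point above.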
Additionally, a good QA team thinks about the complexity of the software and how critical the product is that they are testing. For instance, if there are defects in the software operating an X-ray machine, you could kill someone. A good QA team is cognizant of the software’s purpose. The more important or critical the software is, the more time you’re going to want to spend getting the defects out.
Q: You hear a lot these days in large software development organizations about automation. Is QA automation necessary and is it hard to implement?
A: Automation is the essence of repeatability, which is the hallmark of good QA. QA is moving towards automation because of cost/benefit reasons.
Currently, you have to have testers who can automate. Writing a test might take half a day, and automating it may take another half day. From that point on, your marginal cost to run that test is minimal, and you can run it forever. So if you’re hearing that an organization isn’t using automation in their QA, they may not have the skills to make it effective.
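Using the half-day figures above, plus assumed per-run costs (the marginal run costs are my own illustrative numbers, not from the webinar), a quick sketch finds where automation pays for itself:

```python
# Rough cost/benefit sketch for automating a test. WRITE_COST and
# AUTOMATE_COST come from the half-day figures above; the per-run costs
# are assumptions for illustration.
WRITE_COST = 0.5           # days to write the manual test
AUTOMATE_COST = 0.5        # extra days to automate it
MANUAL_RUN_COST = 0.25     # assumed days per manual execution
AUTOMATED_RUN_COST = 0.01  # assumed near-zero marginal cost per run

def total_cost(runs, automated):
    if automated:
        return WRITE_COST + AUTOMATE_COST + runs * AUTOMATED_RUN_COST
    return WRITE_COST + runs * MANUAL_RUN_COST

# Find the run count where automation becomes the cheaper option.
runs = 1
while total_cost(runs, automated=True) > total_cost(runs, automated=False):
    runs += 1
print(f"Automation breaks even after {runs} runs")
```

With these assumed numbers the break-even point arrives after only a handful of runs, which is why the cost/benefit argument favors automation so quickly.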
Some best practices for an automation strategy include:
- Tooling must suit your technology (the business you’re in) and your team’s capabilities.
- One approach is to test new features by hand, and then automate everything if you have the people to do it. This is a great strategy in that, as you’re writing your test cases, you’re debugging the tests as well.
- A second approach is to test new features by hand and then automate selectively based on what the field or the ticket patterns are telling you.
- A third approach is to only automate the features or pieces that are hardest to test.
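To make “automate selectively” concrete, here is a minimal sketch of an automated regression test using Python’s built-in unittest. The `format_currency` function and its expected behavior are hypothetical stand-ins for a real product feature:

```python
import unittest

def format_currency(amount):
    # Hypothetical product function under test: renders a number as USD.
    return f"${amount:,.2f}"

class TestFormatCurrency(unittest.TestCase):
    # Once written, these checks run at near-zero marginal cost, forever.
    def test_rounds_to_cents(self):
        self.assertEqual(format_currency(3.14159), "$3.14")

    def test_groups_thousands(self):
        self.assertEqual(format_currency(1234567.5), "$1,234,567.50")
```

Run with `python -m unittest <file>`; every subsequent run costs essentially nothing, which is the repeatability point above.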
Q: In the webinar, you outlined 10 guidelines for executing a successful QA program. If you were to choose one more guideline to add, what would it be?
A: QA must have teeth! If a QA team can’t tell an organization that a product isn’t ready, then you’re not doing quality assurance, you’re doing what I call ‘quality theater.’
An organization must make quality assurance as important as any other part of engineering. The QA team must have a seat at the table to voice their concerns and address issues. Otherwise, your company is just paying lip service to QA; the bugs won’t get fixed and your customers won’t be satisfied. Your QA team must be full members of your engineering leadership.