Surveying Test Strategies: A Guide to Smart Selection and Blending

by Rex Black

Introduction

A test strategy provides a set of organizing principles for testing, as well as a project-independent testing methodology. In some cases, the test strategy is documented. However, in this article I won't examine how to document test strategies; instead, I'll start at the beginning by discussing the types of test strategies.

As test managers, we sometimes forget to think critically about why we test the way we do, and about what our approach to testing does – and doesn't – offer us, that is, the project team and the wider organization. Reading this article will help you analyze and improve your current test strategies, as well as add some new strategies to your toolbox.

There are many types of test strategies; I'll survey the most common ones. All of them are in use by various test teams. Depending on how well a strategy aligns with the test mission and the project itself, any of these strategies can succeed – or fail.

Analytical Test Strategies

Analytical test strategies start with analysis as a foundation. With an object-guided strategy, you look at requirements, design, and implementation objects to determine testing focus.  These objects can include requirement specifications, design specifications, UML diagrams, use cases, source code, database schemas, and entity-relationship diagrams. As you might guess, this approach relies on extensive documentation, and breaks down when documentation is not available.

With a risk-based strategy, you use informal or formal quality risk analysis techniques to assess and prioritize quality risks. You can use various available sources of information, as with the object-guided strategy, but you should also draw on the insights of cross-functional project team members and other stakeholders.

Adjust the tests and the extent of testing according to the risk priority levels.  Unlike the object-guided variant, this approach can handle situations where there is little or no project documentation.

With a fully-informed strategy, you start with the object-guided or risk-based strategy, but take the analysis further. Study the system, the usage profiles, the target configurations, and as much other data as you can find. Design, develop, and execute tests based on the broad, deep knowledge gained in analysis. This approach is great if you have lots of time to research the system.

The items being analyzed are sometimes called the test basis.  The results of the analysis guide the entire test effort, often through some form of coverage analysis during test design, development, execution, and results reporting.  These strategies tend to be thorough, good at mitigating quality risks and finding bugs. However, they do require an up-front investment of time.

Model-Based Test Strategies

Model-based test strategies develop models for how the system should behave or work. With a scenario-based strategy, you test according to real-world scenarios. These should span the system's functionality. In the object-oriented world, a close relative is the use-case-based strategy, where you rely on object-oriented design documents known as use cases. These use cases are models of how users, customers, and other stakeholders use the system and how it should work under those conditions. You can translate these use cases into test cases.

With a domain-based strategy, you analyze the different domains of input data accepted by the system, data processing performed by the system, and output data delivered by the system. (Domains are classifications based on similarities identified in inputs, processing, or outputs.) Based on these domains, you then pick the best test cases in each domain, determined by likelihood of bugs, prevalent usage, deployed environments, or all three.
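To make the domain-based strategy concrete, here is a minimal sketch in Python of how you might derive test values from domains, using a hypothetical validate_age() function as the system under test; the partitions, boundaries, and expected results are illustrative assumptions, not part of any standard.

    # A minimal sketch of domain-based test selection. The input space is
    # partitioned into domains, and the boundaries plus one representative
    # of each domain are chosen as test values.

    def validate_age(age):
        """Hypothetical system under test: accepts ages 18 through 65."""
        return 18 <= age <= 65

    # Each domain is (name, lowest value, highest value, expected result).
    domains = [
        ("below minimum", -1, 17, False),
        ("accepted range", 18, 65, True),
        ("above maximum", 66, 150, False),
    ]

    def domain_test_values(low, high):
        """Pick the boundaries and one mid-domain representative."""
        return {low, (low + high) // 2, high}

    if __name__ == "__main__":
        for name, low, high, expected in domains:
            for value in sorted(domain_test_values(low, high)):
                actual = validate_age(value)
                status = "PASS" if actual == expected else "FAIL"
                print(f"{status}: {name}, age={value}, expected={expected}, got={actual}")

Each domain contributes its boundaries plus one interior value, a common way to keep the test set small while still probing the places where domain-related bugs tend to cluster.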

With a model-based strategy, you design, develop, and execute tests to cover models you have built. To the extent that the model captures the essential aspects of the system, these strategies are useful. Of course, these strategies rely on the ability of the tester to develop good models. These strategies break down when the models cannot – or the tester does not – capture all of the essential or potentially problematic aspects of the system.

Methodical Test Strategies

Methodical test strategies rely on some relatively informal but orderly and predictable approach to figure out where to test.

With a learning-based strategy, you use checklists that you develop over time to guide your testing.  You develop these checklists based on where you’ve found (or missed) bugs before, good ideas you’ve learned from others, or any other source.

With a function-based strategy, you identify and then test each and every function of the system, often one at a time. Similarly, with a state-based strategy, you identify and test every state and every possible state-transition that can occur.
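As a rough illustration of the state-based variant, the sketch below models a hypothetical order workflow as a transition table and then walks every modeled transition once; the states, events, and the Order class are invented for the example.

    # A minimal sketch of a state-based test: the valid transitions form the
    # model, and the test exercises each (state, event) pair at least once,
    # failing on any transition the implementation rejects.

    VALID_TRANSITIONS = {
        ("new", "submit"): "pending",
        ("pending", "approve"): "approved",
        ("pending", "reject"): "rejected",
        ("approved", "ship"): "shipped",
    }

    class Order:
        """Hypothetical system under test."""
        def __init__(self):
            self.state = "new"

        def apply(self, event):
            next_state = VALID_TRANSITIONS.get((self.state, event))
            if next_state is None:
                raise ValueError(f"illegal event {event!r} in state {self.state!r}")
            self.state = next_state

    def test_every_transition():
        """Exercise each modeled (state, event) pair exactly once."""
        for (start, event), expected in VALID_TRANSITIONS.items():
            order = Order()
            order.state = start            # put the object into the start state
            order.apply(event)
            assert order.state == expected, (start, event, order.state)

    if __name__ == "__main__":
        test_every_transition()
        print("all modeled transitions covered")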

With a quality-based strategy, you use a quality hierarchy like ISO 9126 to identify and test the important "-ilities" for your system. For example, some groups in Hewlett-Packard use functionality, localization, usability, reliability, performance, and scalability. IBM uses capability, usability, performance, reliability, installability, maintainability, documentation, and operability.

With a methodical test strategy, you follow these standard inventories of test objectives.

These strategies can be quick and effective for systems that remain relatively stable or systems which are similar to those tested before. Significant changes might render these strategies temporarily ineffective until you can adjust the test objectives to the new system or organizational realities.

Process-Oriented Test Strategies

Process-oriented test strategies take the methodical approach one step further by regulating the test process.

With a standardized test strategy, you follow official or recognized standards. For example, the IEEE 829 standard for test documentation, created by a volunteer standards committee of the non-profit Institute of Electrical and Electronics Engineers, is used by some organizations to ensure regularity and completeness of all test documents. My book, Critical Testing Processes, describes twelve comprehensive, customizable, and lightweight processes for testing. Such standardization can help make the test process transparent and comprehensible to programmers, managers, business analysts, and other non-testers. However, you must take care not to introduce excessive, wasteful, or obstructive levels of bureaucracy or paperwork.

One increasingly-popular test strategy, the agile test strategy, has arisen from the programming side of software engineering.  Here, the testing follows lightweight processes, mostly focused on technical risk (likely bugs).  A heavy emphasis is placed on automated unit testing, customer acceptance testing, and being able to respond to late changes without excessive costs.  These strategies are tailored for small teams on short projects with immediate access to the users. Large, long, geographically distributed, or high-risk projects are likely to find that the strategy does not scale.

The topic of automated unit testing brings us to a group of test strategies that rely heavily on automation. One such strategy is the automated random test strategy, where a large amount of random input data is sent to the system. Another is the automated functional test strategy, where you test system functionality using repeatable scripts. Either strategy might also involve an automated load, performance, or reliability testing element. These strategies rely on the ability to effectively automate most of the testing that needs to be done.
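As an illustration of the automated random test strategy, the sketch below throws ten thousand random strings at a hypothetical parse_record() function and uses "the system must not crash" as the only oracle; the function, the input alphabet, and the seed are all assumptions chosen for the example.

    # A minimal sketch of an automated random (fuzz-style) test.

    import random
    import string

    def parse_record(text):
        """Hypothetical system under test: parse 'key=value;key=value' records."""
        return dict(pair.split("=", 1) for pair in text.split(";") if "=" in pair)

    def random_input(max_length=40):
        alphabet = string.ascii_letters + string.digits + "=;,. "
        length = random.randint(0, max_length)
        return "".join(random.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        random.seed(1234)                  # fixed seed so failures are reproducible
        for _ in range(10_000):
            data = random_input()
            try:
                parse_record(data)
            except Exception as exc:       # any crash is a reportable bug
                print(f"input {data!r} raised {exc!r}")
                raise
        print("10,000 random inputs processed without a crash")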

Dynamic Test Strategies

Dynamic test strategies, like the agile test strategies, minimize up-front planning and test design, focusing on making the test execution period responsive to change and able to find as many bugs as possible.

With an intuitive test strategy, you test according to the collective experience, wisdom, and gut instincts of the test team. Discussions about what to test, anecdotal evidence from past projects, and oral tradition are the prime drivers. With an exploratory test strategy, you simultaneously learn about the system's behavior and design while you run tests and find bugs, continuously refining the test approach based on your test results and refocusing further testing. With a bug hunting strategy, you use bug profiles, taxonomies (classifications), and hunches (bug assumptions) to focus testing where you think the bugs are.

The hunting metaphor is a good one for all of these strategies, which are more alike than different. I hunt and fish, and I've learned one critical success factor common to both sports: hunt where the birds are and fish where the fish are. Likewise, these test strategies require that you be right about where you think the bugs are, often under conditions of schedule and personal pressure.

Dynamic test strategies place a high value on flexibility and on finding bugs. They do not usually produce good information about coverage, systematically mitigate risks, or offer the opportunity to detect bugs early in the development lifecycle. They are certainly much better than no testing at all and, when blended with analytical strategies, serve as an excellent check and balance that can catch gaps in the analytically designed tests.

Philosophical Test Strategies

Philosophical test strategies start with a philosophy or belief about testing. With an exhaustive test strategy, you assume that everything and anything can and will have bugs. You decide that the possibility of missing bugs is unacceptable, and that management will support a considerable effort to find all the bugs. You attempt to test extensively across the functionality, the quality risks, the requirements, and whatever else you can find to cover. The essence of this strategy is captured in an old tester’s joke, derived from the catchphrase on the back of US currency: “In God we trust…all others we test.”

With a shotgun test strategy, you also assume that everything and anything can and will be buggy. However, you accept that you cannot test everything. Since you lack any solid idea of where to find bugs, you test wherever and whatever comes to mind, attempting to distribute the test effort randomly within the given resource and schedule boundaries, like pellets from a shotgun.

With an externally-guided test strategy, you accept that you cannot test everything, nor can you know where the bugs are. However, you trust that other people might have a good idea of where the bugs are. You ask for their guidance. You test according to their direction, including asking them to help you decide whether the observed results are correct. Common guides include programmers, users, technical or customer support, help desk staff, business analysts, sales people, and marketing staff.

If the underlying philosophies and beliefs behind these strategies are correct, they can be appropriate. For example, testing weapons systems like nuclear missile guidance software clearly requires an exhaustive strategy. However, when applied in inappropriate situations—and I'd guess that most projects are inappropriate for at least two if not all three of these strategies—they lead to dangerously misaligned test efforts.

Regression and Regression Risk Strategies

Regression is the misbehavior of a previously-correct function, attribute, or feature. The word is also used to mean the discovery of a previously-undiscovered bug while running a previously-run test. Regression is generally associated with some change to the system, such as adding a feature or fixing a bug.

Regression falls into three basic types. The first is a local regression, where the change or bug fix creates a new bug. The second is an exposed regression, where the change or bug fix reveals an existing bug. The third is a remote regression, where a change or bug fix in one area breaks something in another area of the system. Of the three, the last is typically the hardest to detect, but any one of these regressions can slip past us if we're not careful.

Regression can affect new features. In Figure 1, you see a new development project that will produce n new features. The features are built, integrated, and tested one after another or in increments. Feature 3, it turns out, breaks something in feature 1.

[Figure 1: A new development project in which feature 3 breaks something in feature 1]

Regression can affect existing features. In the figure, you see a maintenance project that will add m new features in release 1.1. The features are added on top of the existing n features that were present in release 1.0. Feature n+2 breaks existing features 2 and 3.

Of the two effects, breaking existing features is typically worse. Users, customers, and other stakeholders come to rely on existing features in a system. When those features stop working, so do they. For new features, a user, customer, or other stakeholder might be disappointed not to receive the promised capability, but at least it was not a capability around which their daily work has become entwined. So, what test strategies exist for dealing with regression?

Regression Risk Strategy 1: Repeat All Tests

For regression risk mitigation, the brute-force strategy is to repeat all of your tests. Suppose you've developed a set of tests that are well aligned with quality; you'll have done that if you've performed a solid quality risk analysis and received sufficient time and resources to cover all the critical quality risks. If you repeat all of those tests after the very last change to the code, you should find all the important regression bugs.

Realistically, automation is the only practical means to repeat all tests for large, complex systems. Automation pays off when you can design and implement automated tests whose high up-front development costs are recouped through frequent repetition at a low (ideally, close to zero) cost of execution and maintenance. This test automation can occur at a graphical user interface (GUI), at an application programming interface (API), at a class, function, or method interface, at a service interface like those found in a networked system with a service-oriented architecture, or at a command line interface (CLI).
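As one example of automation below the GUI, here is a sketch of an API-level regression test written with Python's standard unittest module; the net_price() pricing rule is a hypothetical system under test, and the point is the pattern: cheap, deterministic checks that can be rerun on every build.

    # A minimal sketch of an API-level automated regression test. The test
    # calls the function directly, so it is inexpensive to rerun; the
    # up-front cost is deciding the expected results once.

    import unittest

    def net_price(list_price, quantity):
        """Hypothetical system under test: 10% discount on orders of 10 or more."""
        discount = 0.10 if quantity >= 10 else 0.0
        return round(list_price * quantity * (1 - discount), 2)

    class NetPriceRegressionTest(unittest.TestCase):
        def test_no_discount_below_threshold(self):
            self.assertEqual(net_price(5.00, 9), 45.00)

        def test_discount_at_threshold(self):
            self.assertEqual(net_price(5.00, 10), 45.00)

        def test_discount_above_threshold(self):
            self.assertEqual(net_price(2.50, 100), 225.00)

    if __name__ == "__main__":
        unittest.main()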

Regression Risk Strategy 2: Repeat Some Tests

For one reason or another, it is often not possible to repeat all tests. So, you must select some subset of your overall set of tests to repeat. There are three major techniques for doing so.

The first is the use of traceability. Briefly, traceability means that tests are related to behavioral descriptions of the system, such as requirement specification elements, design specification elements, or quality risks. You can look at which requirement, design element, or quality risk is affected by the fix or change, trace back to the associated tests, and select those tests for re-execution.

The second is the use of change analysis. In this case, you look at structural descriptions of the system to figure out how the effects of a change could ripple through the system. Unless you have an in-depth understanding of programming and the system's design, you'll probably need help from programmers and designers.

The third is the use of quality risk analysis. With traceability and change analysis, you are using technical risk—where bugs are likely to be found—to decide what to retest. However, you should also revisit your quality risk analysis to determine which areas carry high business risk. Even if bugs are unlikely there, areas with high business risk should be retested.
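A traceability-based selection can be as simple as a lookup from changed requirements to the tests that cover them. The sketch below shows one possible shape in Python, with invented requirement and test identifiers, plus a set of high-business-risk tests that are always rerun regardless of what changed.

    # A minimal sketch of traceability-based regression test selection.
    # All requirement and test IDs are hypothetical.

    TRACEABILITY = {
        "REQ-LOGIN":    ["T-101", "T-102"],
        "REQ-CHECKOUT": ["T-201", "T-202", "T-203"],
        "REQ-REPORTS":  ["T-301"],
    }

    HIGH_BUSINESS_RISK_TESTS = {"T-201"}   # always retested, even if "unaffected"

    def select_regression_tests(changed_requirements):
        selected = set(HIGH_BUSINESS_RISK_TESTS)
        for req in changed_requirements:
            selected.update(TRACEABILITY.get(req, []))
        return sorted(selected)

    if __name__ == "__main__":
        # A bug fix touched the login requirement only.
        print(select_regression_tests(["REQ-LOGIN"]))
        # Expected output: ['T-101', 'T-102', 'T-201']

The same selection function could be extended with change analysis, for example by mapping changed source files to requirements before the lookup.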

Since you’re not rerunning every test, you are likely to miss regressions if they occur in unanticipated areas. So, you can use cross-functional tests to get a lot of accidental regression testing.

The analogy is clearing a minefield. The old-fashioned approach of clearing minefields by walking through the field looking for each mine one at a time has been superseded by the use of large earthmoving equipment fitted with heavy rollers and flails. These clear huge swaths of the minefield at once and are unlikely to miss anything.

Likewise, a cross-functional test touches lots of areas of the system at one time. The key assumption behind cross-functional tests is that you won't run into too many bugs. If you do, you won't get through any single test, because you'll get blocked partway into your cross-functional voyage through the system.
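The sketch below shows what a cross-functional test might look like in miniature: one scenario that passes through the login, cart, checkout, and reporting areas of an entirely hypothetical system, so that a single run exercises several features and a bug in any early step blocks the rest.

    # A minimal sketch of a cross-functional test. The four functions stand
    # in for separate subsystems of a hypothetical e-commerce application.

    def login(user):
        return {"user": user}

    def add_to_cart(session, item, qty):
        session.setdefault("cart", []).append((item, qty))

    def check_out(session):
        return sum(qty for _, qty in session["cart"]) > 0

    def sales_report(session):
        return {"items_sold": sum(qty for _, qty in session.get("cart", []))}

    def test_order_to_report_scenario():
        session = login("pat")                             # touches authentication
        add_to_cart(session, "SKU-17", 2)                  # touches catalog/cart
        assert check_out(session)                          # touches order processing
        assert sales_report(session)["items_sold"] == 2    # touches reporting

    if __name__ == "__main__":
        test_order_to_report_scenario()
        print("cross-functional scenario passed")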

I have a client that builds a large, complex, high-risk geological modeling package for industrial use. They have a test set that is so large and complex that it takes them a whole year—the length of their system test period—to run each test once, with only a small percentage of tests re-executed to confirm bug fixes. They rely on cross-functional tests to help mitigate their regression risk.

They also use code coverage to help them assess the level of regression risk. In this case, the idea of code coverage is that the greater the percentage of the program you have tested, the less likely it is that you’ll miss bugs. Code coverage measurements allowed my client to make sure that 100% of the statements in the program had been tested at least once in the one-year test period, which mitigated the risk of breaking existing features. Code coverage measurements also told them that, thanks to cross-functional testing, in any given four-week period, they had executed between 50 and 70% of the statements in the program. This not only mitigated the risk of breaking existing features, but also the risk of breaking new features.
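For teams that want to try the same measurement, statement coverage can be collected with a tool such as the third-party coverage.py package for Python (other languages have equivalents, such as gcov for C and JaCoCo for Java). The sketch below assumes coverage.py is installed and that the tests live in a hypothetical test_suite module with a run_all() entry point.

    # A minimal sketch of collecting statement coverage while running a test
    # suite, assuming the third-party coverage.py package is installed
    # (pip install coverage) and a hypothetical test_suite.py exists.

    import coverage

    cov = coverage.Coverage()   # statement coverage by default
    cov.start()

    import test_suite           # hypothetical module containing the tests
    test_suite.run_all()        # hypothetical entry point that runs every test

    cov.stop()
    cov.save()
    cov.report()                # prints per-file statement coverage percentages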

Conclusion

At this point, we've surveyed the primary types of test strategies in use in most organizations for new systems. Most testers and test managers will recognize their own approach in the list, and will probably learn a few new types as well. In some cases, when we've reviewed these types with clients and course attendees, they have started to question why they use their current approach, given its strengths and weaknesses. That's good, because, as I mentioned at the start, thinking critically about why we do what we do is essential to achieving an effective, efficient testing process. As you think about how to improve your test strategy, remember that in the real world these strategies can be—and should be—adapted to their context. What I've described in this article are pure forms, much as you might describe shapes as triangles, circles, squares, or rectangles; objects in the real world, of course, combine features of many shapes. The point is not to achieve test strategy purity, but rather to arrive at a test strategy that is fit for purpose in your testing.

In addition, remember that strategies are not philosophies or religions—you don't have to choose just one and stay with it. Strategies may be combined. While my dominant strategy in most cases is a risk-based analytical strategy, I often blend in and use multiple strategies. Finally, be sure to spend some time carefully selecting and tailoring your test strategies, not just once, but for each project. You can adopt general strategies for your test team, but be ready to modify or completely rework them for a particular project, too. Best of luck to you as you take a more strategic approach to your testing!

Acknowledgements

This article is an excerpt from Rex Black's book, Pragmatic Software Testing, available from Wiley. The taxonomy of test strategies presented here grew from an online discussion between Rex Black, Kathy Iberle, Ross Collard, and Cem Kaner, and Rex thanks them for stimulating the thoughts that led to this list.

 
