Turning Software Testing into the Backbone of your Quality Assurance System

by Thierry Duchamp

How to Institutionalize Testing in Your Corporation

Synopsis:

This article describes an efficient way to create a global quality improvement programme based on an existing testing strategy. This approach drastically improves the quality of the software in a way that is apparent to customers and end-users. It relies on a global quality system (e.g. CMM-I) that guarantees the coherence of quick wins within a longer-term quality improvement programme.

What do you expect from testing?

Understand your context

Information Technology employees in the banking industry, and more generally in the software industry, often consider Quality Assurance (QA) the equivalent of “testing”. Furthermore, they often misinterpret testing as functional testing. Quality improvement models such as CMM-I give QA a different meaning: a department that decides, implements and controls best practices for delivering high-quality software. CMM-I defines testing as “Quality Control”. This is sensible if you prefer an industrial approach to a non-industrial model based on very rare (and therefore expensive) technical and business experts.

An organization that wrongly considers testing the equivalent of quality assurance is often an organization that addresses quality issues in a way that is not industrialized. This article is not about explaining the advantages of an industrialized model over a non-industrialized one. There are many profitable businesses that do not rely on mass-industry best practices. Choosing one or the other is a matter of scale (number of customers), competition (how good the quality of their product is), time to market (how fast new versions must be released) and, of course, other factors.
The most important qualities for an organization are agility and flexibility, which allow it to adapt as its business grows and time to market becomes crucial. I believe that testing is an important building block for moving towards a managed, agile and flexible approach.

Is testing really an efficient way to improve the quality of software?

The question might sound very provocative, especially if you invest a lot of money in testing. To be even more provocative, the answer is definitely “no”, although testing does of course have indirect positive effects. Then why do corporations invest so much in testing? What is the purpose of spending up to 30% of the global R&D budget on testing? From what I have seen in the software and banking industries, organizations start to put specific testing teams in place when the lack of quality impacts profits or production systems. Most of the time they put in place a team called “Quality Assurance”, and stakeholders expect an immediate and high return on their investment and a drastic increase in customer/end-user satisfaction. Senior managers expect to receive fewer escalations and complaints; this is how they can measure whether things are improving. In most cases, results are at first far below expectations. However, I believe that this is normal when implementing an industrialized quality improvement system.

Unfortunately, this initial step has many counter-productive and destructive effects, such as:

• A decrease in the quality of the code, because developers feel a false sense of security and assume that the “Quality Assurance” net will catch every defect.

• Too much being invested in testing instead of solving the root causes of poor quality, such as requirements management or the quality of fixes.

Furthermore, testing 100% of an application would make most businesses unprofitable and make it difficult to deliver on time. Since professionals cannot build this 100% bug-proof safety net at a reasonable price, they have to answer three questions:

• How useful are the tests they run?
• How much should they invest to improve the software?
• Which tests give the maximum return?

Let’s consider testing as measurement (Quality Control)

If software is an industry, then we can certainly apply industrial (i.e. mass production) best practices in this area. How do professionals handle testing in manufacturing? Even in highly automated factories, manual control is still used for checking the quality of products. The investment for doing so is often high because of the large number of items to test. So, even in low-cost areas, it is wise to control only a subset of the manufactured items. I have always considered this situation very similar to what happens in our industry. Complexity is not based on the number of items we produce. In IT, complexity is:

• the number of lines of code
• the management of dependencies between software components
• the number of combinations of data that the system can handle (illustrated in the sketch below).
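
To give a feel for the third driver, consider a hypothetical order-entry screen: even a handful of input fields produces more data combinations than any team can test exhaustively. A minimal sketch in Python; the field names and value counts are invented for illustration:

# Illustrative only: fields and value counts are invented.
# Even a small screen produces an untestable number of input combinations.
from math import prod

fields = {
    "currency": 30,        # 30 supported currencies
    "order_type": 4,       # market, limit, stop, stop-limit
    "quantity_bucket": 10,
    "settlement_date": 20,
    "account_type": 5,
}

combinations = prod(fields.values())
print(f"Input combinations to cover exhaustively: {combinations:,}")
# -> 120,000 combinations for just five fields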

So, how effective is testing in the face of this complexity?

Experience shows that it is often quite poor, and frequently below end-users’ and stakeholders’ expectations.

Even the most efficient testing strategies (highly automated and customer-centric) provide far too weak a safety net to catch all, or even a high proportion of, defects. We must consider testing differently: is testing just a measurement of quality? I know that this question might surprise you, but it is common practice in mass-production industries.

Testing and measurements

Coupling testing with efficient measurement

In the past, testing was often neither defined nor managed. Developers and “QA engineers” ran whatever tests they wished, as long as the organization delivered on time with a satisfactory level of quality.

As quality was often below expectations, the “Quality Assurance Manager” was urged to answer three questions:

• How much does the organization spend on testing?
• What is actually tested?
• What is the return on investment?

Measuring your investment

Most companies track their employees’ activities on a daily or weekly basis, which is an excellent way of knowing your exact investment in testing. For an accurate measurement, we must take into account that people other than testers also participate in the testing effort (support engineers, developers, product managers). You might be very surprised to find out that some teams perform testing activities for which they were not originally hired. Knowing these facts can help you to understand the root causes of poor quality. It is important that the measurement of investment is complete and accurate: it includes the costs of effort, hardware, software and maintenance.
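
As a minimal sketch of this bookkeeping in Python (the roles, hourly rates and hours are invented for illustration), the point is simply to fold every contributing role into one figure:

# Minimal sketch: fold all testing effort, not just the testers',
# into one investment figure. Roles, rates and hours are invented.
timesheet = [
    # (role, hours booked on testing activities this month)
    ("tester", 640),
    ("developer", 120),         # developers test too, even if not hired for it
    ("support_engineer", 80),
    ("product_manager", 40),
]
hourly_rate = {"tester": 55, "developer": 70,
               "support_engineer": 50, "product_manager": 80}

effort_cost = sum(hours * hourly_rate[role] for role, hours in timesheet)
infrastructure_cost = 12_000    # hardware, software and maintenance costs

print(f"Monthly testing investment: {effort_cost + infrastructure_cost:,} EUR")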

Measuring test coverage of applications

Since I have been working in quality assurance, customers and stakeholders have often inquired about the test coverage of the application. The key factor for measuring test coverage is to document the test cases. This requires the replacement of informal testing by a repeatable testing process. Low-end solutions for doing this rely on manual testing: test scenarios are stored in text documents and test results in spreadsheets. High-end solutions rely on software vendor offerings, which include information systems coupled with test robots. Such solutions are able to replay tests and check the results automatically in the application.

Of course, the high-end solutions are much more efficient, even though they can be very expensive (I am not talking about the price of the test software licenses here, but rather the total cost of setting up the tools and putting in place a dedicated team). Automated tests are repeatable, since one can launch the same set of tests for each release and check the results. The way in which test engineers define test scenarios is a good basis for calculating test coverage. Defining an application as a set of features is understood by everyone, so this is an excellent reference for developers, managers, architects and end-users. The basic unit of measurement is the feature (i.e. a specific function point):

Coverage = #features covered by test cases / #features in the application

It is obvious that one has to describe 100% of the features in the application to get a reliable measurement. I cannot say that this is the most accurate measurement, but I have found it sufficient to put in place and institutionalise within an 18-month testing project. The measurement of coverage can easily be improved at any time. Depending on your context, you can also use other basic function points for measurement. As a further example, you can measure Coverage as the number of lines of code covered by the tests, in proportion to the total number of lines of code.
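
A minimal sketch of the feature-based measurement in Python (the feature catalogue and the test-case mapping are invented for illustration); the lines-of-code variant is the same ratio computed over different inputs:

# Minimal sketch: feature-based test coverage.
# Feature catalogue and test-case mapping are invented for illustration.
all_features = {"login", "transfer", "statement", "fx_quote", "alerts"}

# Documented, repeatable test cases and the feature each one exercises
test_cases = {
    "TC-001": "login",
    "TC-002": "login",
    "TC-003": "transfer",
    "TC-004": "statement",
}

covered = set(test_cases.values())
coverage = len(covered & all_features) / len(all_features)
print(f"Feature coverage: {coverage:.0%}")  # -> 60%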

At this stage in the document we assume that we can fix any defect found by internal testing (i.e. all tests have been successfully passed when the software is released).

How to measure the efficiency of testing

Test coverage is an internal measurement that must be coupled with a measurement of the quality of the software as perceived by end-users. Support teams often have very useful information: service requests, escalations or interruptions of service on production systems. These kinds of figures are more reliable and easier to use than customer satisfaction surveys, which are more expensive to organize and often biased by external factors.

A typical measurement of quality is based on the sum of service requests raised by customers, weighted by severity:

QualityIndex(Month) = 100 – NonQualityIndex(Month), if NonQualityIndex(Month) < 100
                    = 0, otherwise

where NonQualityIndex(Month) = 20 * #SR(sev1) + 5 * #SR(sev2) + 2 * #SR(sev3) + #SR(sev4)
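
For readers who prefer code to formulas, here is a direct transcription in Python (the monthly service-request counts are invented for illustration):

# Direct transcription of the QualityIndex formula above.
# Monthly service-request counts per severity are invented for illustration.
SEVERITY_WEIGHTS = {1: 20, 2: 5, 3: 2, 4: 1}

def non_quality_index(sr_counts):
    """sr_counts maps severity (1..4) to the number of service requests."""
    return sum(SEVERITY_WEIGHTS[sev] * n for sev, n in sr_counts.items())

def quality_index(sr_counts):
    nqi = non_quality_index(sr_counts)
    return 100 - nqi if nqi < 100 else 0

month = {1: 1, 2: 6, 3: 10, 4: 15}   # one sev1, six sev2, ten sev3, ...
print(quality_index(month))           # 20*1 + 5*6 + 2*10 + 15 = 85 -> 15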

Using these measurements usually proves that testing is not very efficient, or not efficient at all: the QualityIndex stays low. At this stage, the people in charge of putting the test strategy in place will need strong support from stakeholders and from all levels of management.

Testing measures quality, and this measurement is very effective for putting in place the corrective actions needed to develop better software.

How to leverage testing results

We can improve the things we measure. A quality improvement programme helps you to do the following:

• Invest the relevant amount of money for reaching a certain level of quality
• Invest in the right areas, where you will get maximum quality improvement

I propose three steps to achieve this:

• Put testing in relation to the maturity of your organization thanks to a quality system
• Put in place corrective actions based on a gap analysis against industry best practices
• Measure improvement with the help of the metrics described in the previous chapter

Put testing in relation to the maturity of your organization

Your organization has certainly implemented a quality system such as ITIL, CMM-I, COBIT or an internally approved methodology. If this is not the case, it is always possible to quickly, if only partially, implement one of these methodologies to improve your testing strategy. You cannot avoid hiring an expert to produce a gap analysis document (which should take between 10 and 20 man-days).
The gap analysis shows, among other things, how your organization manages testing activities, which depends highly on:

• Your business model (software vendor, service, application service provider)
• Your business field (finance, telecommunications…)
• Other factors, such as the size and background of the organization

Below is the type of profile your consultant will supply (all examples are based on the CMM-I continuous model).

We must now match this profile with the measurements described in the previous section (test efficiency and quality index).

Let’s first focus on the areas (i.e. CMM-I process areas) that are closest to testing activities:

• Validation
• Verification

Here are the details for both process areas:

We can guarantee the effectiveness of the quality improvement programme by ensuring:

• that it positively impacts the internal measurement, test Coverage
• that test Coverage is also a measurement of test efficiency and is therefore correlated to the QualityIndex (a simple check is sketched below)
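
One simple way to verify the second point is to track both figures month by month and check that they move together. A minimal sketch in Python, assuming you already collect the two monthly series (the numbers here are invented):

# Sketch: verify that test Coverage and the QualityIndex move together.
# The monthly figures are invented for illustration.
from statistics import correlation  # Python 3.10+

coverage_pct = [40, 45, 52, 60, 66, 71]    # feature coverage per month (%)
quality_idx  = [12, 15, 22, 31, 38, 45]    # QualityIndex per month

r = correlation(coverage_pct, quality_idx)
print(f"Pearson correlation: {r:.2f}")     # close to 1.0 suggests the
                                           # improvement programme is working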

The CMM-I representation will help stakeholders to keep their focus on global improvement goals, while putting in place specific short-term actions that achieve immediate results. You run the risk of many setbacks and failures if you decide to deploy corrective actions independently of a quality programme:

• It would be a set of independent short-term actions, which are often very confusing in the long run
• Actions could be contradictory
• Actions could address only specific areas, which would not have a global impact on the organization

On the other hand, if the actions are part of a global quality improvement programme, efficient corrective actions will also:

• Improve the results of any future appraisal
• Improve the quality of the software as perceived by customers and end-users

If everything works as planned, you will be able to confirm that your testing strategy was successful. This will show that you have taken the proper corrective actions.

Examples of corrective actions and conclusion

I cannot give an exhaustive list of corrective actions that can be put into place, as this highly depends on your business and your organization. Corrective actions consist of deploying industry best practices that are non-existent or too poorly implemented:

• Peer review
• Standard coding guidelines and tools (static code analysis)
• Dynamic code analysis tools and methodologies
• Functional testing
• Performance testing
• Formal beta testing
• Root-cause analysis of customer defects

You will certainly discover that what everyone calls testing/QA/QC in your organization does not cover all the entries on this list. Implementing new corrective actions from this list will allow you to get the best return for the lowest investment. Coupling testing to a quality programme is definitely the most efficient way to accelerate the improvement of quality at a reasonable level of investment.

 
