Putting the ‘Analysis’ in a Test Analyst

By Mike Smith

As the software testing profession has grown at an amazing rate over the past 15 years, there has been a corresponding increase in all manner of things to support testing, including textbooks, tools, techniques, certification, training and conferences.

However, one observation I have made over the 30+ years that I have been involved in IT, and 20+ in testing, concerns the change in the type of people who move into testing, their background and their basic training. Most of the early 'pioneers' of the testing profession migrated from an IT background of Systems Analysis, possibly preceded by Programming, and had usually been formally trained in a number of areas to support their Analyst role. In my own case, this included training and project roles in Structured Programming, Systems Analysis, Data Analysis and Process Analysis. I think this background put me in a good position to build my testing career, and it now worries me that, while there are some very talented people working in testing, the general lack of analytical training means that they are not as effective as they should be. That is not to say that there haven't been advances in software testing training, including courses with an emphasis on analysis; there were very few courses around when I started testing, and no formal certification schemes.

But, I hear you say, what about test techniques? They are our analytical 'tools'!


This is, of course, true – but I see several issues here. All too often, even when I visit companies whose testers have had some training in test techniques, I find little or no evidence of their use. So why is this? I believe the reasons are many and varied. They include:

1. Most testers have very little training in test techniques. Often, their first exposure is on the ISTQB Foundation Certificate course, which is little more than an introduction and overview.

2. Most books and courses necessarily focus on relatively simple examples, often in isolation as individual techniques; there seems to be no integrated approach to the use of test techniques. The real world is far more demanding, with ever-growing complexity.

3. Testers who go on to Advanced/Practitioner levels of testing certification get the opportunity to study techniques in more detail. However, and it is a BIG however, it is no good testers getting their first real training in the detailed use of test techniques on an Advanced/Practitioner certification course. Practical project experience is required to get a grasp of the real-life use of test techniques, so separate techniques training and practical implementation should really come before attempting an advanced certification course. Certification training should then hone skills to syllabus requirements, leading to successfully passing the exam. The lack of this approach is, I believe, one of the key reasons behind the very moderate pass rates of post-Foundation certification courses.

4. Formal test techniques are not the only analytical weapons in a tester's armoury. Experience, informal deconstruction and just 'tabulating stuff' in ways that suit complex functionality, data, threads, risk and non-functional requirements (to name but a few) are also required to cater for the multidimensionality of testing (a short illustration follows this list).

5. My experience has shown that, just as in many other complex areas of learning, follow-up coaching and mentoring are necessary for techniques to be applied successfully in the workplace. Simply going on a course and trying to employ the techniques without any support from experienced practitioners usually fails. This leads neatly on to the last two major issues.

6. Many organisations are simply not geared up to implement test techniques effectively. I have seen many testers who achieve Advanced/Practitioner level certification go back to their workplaces only to find that the practices and processes needed to support the techniques are not in place.

7. Above all, it is the ability and experience required to make and justify choices about the use of test techniques, at both strategic and tactical levels, that is vital to their successful implementation. Unfortunately, there is very little in the public domain to help. It is expensive to spend time on formal test techniques, so the ability to make appropriate choices, backed by metrics including return on investment and risk, is essential to justify their use. Senior management buy-in is required to address point 6 above, which in turn will enable point 5 to be addressed. However, this is not all that is required. I once consulted with a major organisation which had 'bought in' to the idea of using test techniques, largely because of the safety-critical nature of its work. It embarked on some education and then applied the techniques 'blindly' in the workplace. Its testers were then shown some more informal techniques and in the end worked out for themselves the best mix of formal and informal techniques that would work for them. The point is, it requires a high degree of skill and experience to make these choices, and the absence of any industry benchmarks or standards in this area makes this ability very rare.
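To make points 2 and 4 above a little more concrete, here is a minimal sketch of how two formal techniques (equivalence partitioning and boundary value analysis) can feed an informal tabulation of test conditions. The 'order quantity' field, its valid range of 1 to 100 and the Python structure are all hypothetical, chosen purely for illustration and not drawn from any real specification.

# Minimal sketch: deriving test conditions for a hypothetical 'order quantity'
# field that accepts whole numbers from 1 to 100 (illustrative values only).
MIN_QTY, MAX_QTY = 1, 100

# Equivalence partitions: one representative value covers each partition.
partitions = {
    "below valid range (invalid)": MIN_QTY - 50,
    "within valid range (valid)": 50,
    "above valid range (invalid)": MAX_QTY + 50,
}

# Boundary values: the edges of the valid range and their immediate neighbours.
boundaries = [MIN_QTY - 1, MIN_QTY, MIN_QTY + 1, MAX_QTY - 1, MAX_QTY, MAX_QTY + 1]

# 'Tabulating stuff': collect every condition in one simple table so that
# coverage can be reviewed and justified before any test cases are written.
conditions = []
for name, value in partitions.items():
    conditions.append(("Equivalence partitioning", name, value))
for value in boundaries:
    expected = "accept" if MIN_QTY <= value <= MAX_QTY else "reject"
    conditions.append(("Boundary value analysis", f"quantity = {value} ({expected})", value))

for technique, condition, value in conditions:
    print(f"{technique:<26} | {condition:<35} | {value}")

The value here is not the code but the visible, reviewable list of conditions that the chosen techniques produce – something a test manager can weigh against cost and risk when justifying the investment.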

I have focussed on test techniques in this article since they should be a key part of the analytical skillset of testers. However, there are other contributory issues that need to be addressed when considering the analytical abilities of testers.

One of these issues is what I call 'separating the What from the How'. Much of testing has been built up around standards that focus on Test Cases and 'Test Design work products'. These work products tend to combine the 'What' and the 'How' without distinction, and often end up organised in one hierarchical repository. People tend to refer to 'the activity of Test Analysis and Design'. I would rather view this as the separate activities of 'Test Analysis' and 'Test Design', leading to separate work products that may have separate structures for their organisation. This better reflects the complex logical-to-physical relationships that usually exist between requirements and systems as implemented. I see Analysis as setting the measures and targets of success for testing, and Design as determining how those measures and targets will be achieved. Many techniques labelled 'Test Design Techniques' are really analytical techniques which set the 'What' to test, represented by Test Conditions. So, in my view, more evaluation is needed of when and how to use test techniques according to the point you have reached in the development life-cycle.
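As a minimal sketch of this separation – with names and structures that are purely illustrative, not taken from any standard or tool – the 'What' can be held as Test Conditions and the 'How' as Test Cases, with an explicit mapping between them rather than a single merged hierarchy:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCondition:
    # The 'What': a measurable target for testing, produced by Test Analysis.
    condition_id: str
    description: str

@dataclass
class TestCase:
    # The 'How': concrete steps and data, produced by Test Design.
    case_id: str
    steps: List[str]
    covers: List[str] = field(default_factory=list)  # ids of the conditions this case exercises

# Illustrative content only.
conditions = [
    TestCondition("COND-001", "Order quantity at the upper boundary (100) is accepted"),
]
cases = [
    TestCase("CASE-01", ["Open the order form", "Enter quantity 100", "Submit"], covers=["COND-001"]),
    TestCase("CASE-02", ["Import an order file containing a line with quantity 100"], covers=["COND-001"]),
]

# Because the two work products are separate, coverage can be reported against the 'What'.
for cond in conditions:
    covering = [case.case_id for case in cases if cond.condition_id in case.covers]
    print(f"{cond.condition_id}: covered by {', '.join(covering)}")

Because the mapping is many-to-many, coverage can be reported against the conditions (the 'What') regardless of how the cases (the 'How') are eventually organised or automated.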

Then there is the issue of the tools available to support test analysis. Some of the limitations in current standards and thinking relating to separating the 'What' from the 'How' are perpetuated in commercially available test repositories. I see many testers become 'a slave to the tool', expecting the tool to do the job and hence not thinking enough for themselves. Tools often constrain experienced testers, so they find workarounds. Unfortunately, unguided and inexperienced testers simply don't know any different, which contributes to poor test analysis and, in turn, weak testing.

So, in summary, I believe there is a lot of scope for enhancing the ability of test analysts. Here is a formula that might lead to success:

Better general analytical training,

PLUS

Specialist test technique training (outside of certification courses),

PLUS

Coaching and mentoring by specialists after training,

WITHIN

An organisational framework which supports the implementation and use of test techniques;

INCLUDING

Highly skilled and experienced testers with the knowledge and ability to make justified choices about which techniques to use and when.

I also believe more research needs to be done on the effectiveness of techniques. Academics should re-evaluate testing standards in conjunction with industry experts to better define standards and approaches that cater for the challenges faced by the majority of professional testers, and should especially look at the 'What'/'How' issue I have highlighted in this article.

Tools and certification can then be improved to better support the real needs of testers. I see the growth in both as very positive, but there is much more still to be done to provide a full career development path for testers – and this should include non-testing-related education, certification and coaching as well as testing-related. Testing is one IT discipline that cuts across all the others, and it is vital that testers don't become insular but reach out to them. This will not only help testers become more effective, but also help those outside testing who currently don't understand the real value that it can bring to a project. In that way, 'testing as a profession' can become a reality. So there is a long way to go. Those who think testing is simple, mostly understood and well defined need to change their mindset!

Biography

Mike is Managing Director of Testing Solutions Group (TSG) and has a broad background in systems development and testing stretching back 30 years. After starting his career in pharmaceutical research, where he continued his education in Chemistry, he moved into IT, working in the Telecoms, Pharmaceuticals and Banking sectors.

Mike became an independent consultant in 1984 and initially worked on the development of critical financial applications. He was the original author of the T-Plan Test Process Management tool, which was first implemented by the Bank of England in 1989 (see www.t-plan.co.uk). He also founded ImagoQA Ltd, which at the time was the largest independent test consultancy in the UK, with global operations in the USA and Australia.
Over the past 20 years, Mike has produced a series of papers on Test Process and Test Management and presented numerous seminars on these subjects, together with IT Governance and Information Traceability. Mike was on the presenter list at the very first STAR conference in Las Vegas in 1992 and at the most recent STARWest conference in Los Angeles in 2007. In October 2007 he presented a keynote at the annual Ericsson Test Conference in Sweden.

Mike is using his experience of various entrepreneurial start-ups and board-level appointments, as well as his testing and QA background, to develop ways in which the profile of testing can be raised and better understood by key business and IT stakeholders. Mike is Secretary of the UK Testing Board (which represents the UK on the International Software Testing Qualifications Board – ISTQB) and the UK representative on the ISTQB Advanced Level Syllabus Working Party. Mike's interests include cookery, wine, cycling, reading, quizzes and sport in general, especially cricket and football. He was appointed a member of the main committee at Essex County Cricket Club in 2007, following over 10 years of involvement in sponsorship, benefit years and sub-committee chairmanship.

