Rule-Based Design Reviews: Objective Design Reviews and Using Evolutionary Quantified Continuous Feedback to Judge Designs

by Tom Gilb

Abstract

Design reviews would benefit from the support of formal rules. By the use of relevant rules, it should be possible to ensure prior to a review that all the relevant information for a review is present in the design specifications, and that all the minimum review criteria are met. This will ensure management time is not wasted and aid better decision-making. This paper recommends that the Specification Quality Control (SQC) method be used to do this additional quality control. In addition, this paper outlines the impact of Evolutionary Project Management (Evo) on the design review process.

Introduction

We know that most projects are a partial or total failure, and few are totally successful (Morris 1994; Taylor 2001; Johnson 2001, extracts from Extreme Chaos 2001, a Standish Group report). I suggest that a key reason for this lack of project success is the failure of design reviews. In other words, whatever designs did get approved probably should not have been.

Based on reading the literature, and participating in a very large number of requirement and design inspections in many different industries internationally, I fear that most design reviews are carried out:

– on poorly written design specifications (in the sense that the design specifications lack sufficient detail for them to be safely evaluated), and

– using highly polluted requirement specifications (Requirements being the basis for evaluating any design).

I find it unacceptable that a design review is given no quantified knowledge of the quality of the design specification, or of the estimated ability of the design(s) to impact on the requirements (that is, the system performance and costs). The underlying problem is that specification of design is carried out on the basis of inadequate design standards.

It is the point of this paper to suggest the following remedies:

– a highly defined standard of requirements is met before entry into the design process itself is permitted

– design specification should initially pass quality control (SQC) against design specification rules to make sure the designs are clearly and fully described

– designs should be specified in sufficient detail to enable reasonably accurate estimation of their dominant performance and cost characteristics

– a set of designs should be seen to credibly and numerically contribute to meeting the requirements (the set of performance targets within the resource budgets)

– the design review process should work in the context of evolutionary cycles of design (for example, in 50 steps), and not operate on a large monolithic total design set.

Terminology

Specification Quality Control (SQC): SQC is also known by the name ‘Inspection’. Given that ‘Inspection’ has a different meaning to engineers within manufacturing, I prefer to use the term SQC. SQC is also sometimes termed ‘Peer Review’; I do not use this term because I want to make a clear distinction between ‘quality control’ and ‘review’.

SQC is a rigorous quality control discipline concerned with defect detection, defect measurement, defect removal, process improvement and entry/exit controls. It is based on evaluating specification conformance to specification rules.

Traditionally, SQC does not pretend to judge the specifications in terms of their relevance or profitability in the real world. It is primarily concerned with ensuring that the specifications are clear, complete and consistent, by checking a specification and any of its source and kin documents against ‘specification rules’. It judges whether the specification is suitable for use in subsequent engineering or management processes. However, by using a different type of rules, ‘specification review rules’, it is possible to extend the SQC process to checking the readiness of specifications for review. This could be applied for a business review or a technical review.

Review: A review is any process of human examination of ideas with a defined purpose and defined standards of inquiry.

Design Specification: A design specification is the written specification of a design idea. A set of design specifications attempts to solve a design problem. Identification and documentation of the individual design ideas and their potential contribution towards meeting the requirements help in the selection of the ‘best’ design ideas for implementation.

The design specifications should contain information about the expected attributes of the designs for meeting requirements. This ‘expected attributes’ information of a design specification might be in the form of an impact estimation table, or it can be as simple as an assertion of impacts on requirements, referenced by their tags (see the example in Figure 2).

[Figure 2: Example of a design specification asserting its impacts on tagged requirements]
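As a purely illustrative, hedged sketch (the design idea, requirement tags, estimates and evidence below are all invented, not taken from Figure 2), such impact information could be captured in a form like this:

# Hypothetical sketch (Python): a design specification carrying assertions
# of its estimated impacts on tagged requirements. All names and numbers
# here are invented for illustration.

design_spec = {
    "tag": "Caching",  # hypothetical design idea
    "description": "Cache frequently used queries in memory.",
    "impacts": {       # requirement tag -> estimated impact plus evidence
        "Response.Time": {"estimate": "-60%", "evidence": "prototype benchmark"},
        "Memory.Use": {"estimate": "+15%", "evidence": "engineering judgment"},
    },
}

for req_tag, impact in design_spec["impacts"].items():
    print(f"{design_spec['tag']} -> {req_tag}: {impact['estimate']} "
          f"(evidence: {impact['evidence']})")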

I will present this paper in terms of a set of design review principles.

DR1: If the specifications are unclear, you are not ready to review whether the design is the ‘right thing’ to do.

DR2: You cannot review design if you are unclear whether a design is mandatory (a design constraint, which is a ‘requirement’) or optional (one of many possible design solutions).

DR3: Design impacts must be calculated and submitted to a review; they cannot systematically be developed during the review.

DR4: It is part of the purpose of a design review to identify any defective information in the design impact estimates.

DR5: Certain objective review criteria must be met prior to actually carrying out a review; otherwise the review may be wasted.

DR6: The review process should not wait until a large amount of specification is completed; sampling reviews should be held early and frequently to spot systemic problems early.

DR7: A design review process should be carried out on a realistic amount of information and be conducted at an effective rate. Don’t overwhelm the reviewers, or they will become incapable of spotting problems.

DR8: In order to benefit from feedback, the design review process should be done evolutionarily, as a long series of design reviews: each review deciding on the next planned evolutionary step.

DR9: The real purpose of design reviews is not to approve a design as correct, but to uncover specific risks and ignorance.

Principles

Principle DR1: If the specifications are unclear, you are not ready to review whether the design is the ‘right thing’ to do. Have you ever actually counted the quantity of unclear and ambiguous expressions per page in a real specification? I get my clients to do this using specification quality control (SQC) almost every week. The result is consistent and always provides an element of shock.

I define a specification defect as any violation of a set of specification rules. As a simple introduction to SQC, I ask a team of two to five people to select a random page of their ‘typical’ specification and then to spend about 15 minutes checking it individually, applying these two rules:

[Figure: The two specification rules applied in this exercise, ‘clear’ and ‘unambiguous’]

For the first rule, it does not matter whether the people looking for defects themselves understand the specification; they must role-play the weakest link in the set of people who might have to correctly understand it (for example, ‘Imagine you had just started here right out of school’, or ‘Imagine you were an Indian subcontractor in Bangalore’).

I also ask the participants to judge whether the rule violations they find (the ‘specification defects’) are ‘major’ (or minor). Defects are ‘major’ if it is judged possible (as opposed to impossible) that the faulty specification could cause something wrong to happen during system engineering or in the operational system (for example, product faults, test case faults, procurement errors, estimation errors, software bugs, and delays).

Most people (for example the writers of the specification and their peers) consistently manage to find between 5 and 15 defects in a single sample page of a specification within the 15 minutes.

From experience, I know that for a small team of two to five checkers, if you double the number of majors found by the checker finding the most majors, then you tend to get the total number found by the entire team. I also know that the initial effectiveness in such a situation for finding majors is in the range of 25% to 35% (this is partly due to the time allowed, and partly due to the source information available). For the sake of simplicity, say 33% (a third).

So if the checker finding the most majors found 15, I’d estimate that there were 15 x 2 x 3 = 90 major defects on the single sample page. That means there are about 90 potential misunderstandings of a specification per page. This is ‘normal’ technical specification ‘pollution’, but it is not a good or necessary basis on which to decide if the design itself is a good one. Any one of those major defects alone is capable of corrupting our understanding of a whole design, and capable of making our judgments, in a design review, worthless.
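As a minimal sketch, the rule of thumb above can be written out explicitly (the doubling factor and the one-third effectiveness are the heuristics described in this paper; the function name and sample input are illustrative):

def estimate_majors_on_page(max_majors_found_by_one_checker,
                            team_doubling_factor=2.0,
                            checking_effectiveness=1 / 3):
    """Estimate the total major defects present on a sample page.

    Heuristics from the text: the whole team tends to find about twice
    what the best individual checker finds, and such a team typically
    finds only about a third of the majors actually present.
    """
    found_by_team = max_majors_found_by_one_checker * team_doubling_factor
    return found_by_team / checking_effectiveness

print(round(estimate_majors_on_page(15)))  # 15 x 2 x 3 = 90 majors estimated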

The bad news is that this level of majors/page will continue to persist unless management ‘changes the game’. The good news is that, if you want to, you can bring down the average level of majors/page by two orders of magnitude within weeks of doing the right things. You can bring down the majors/page density by stating the specification rules (such as ‘clear’ and ‘unambiguous’), and then training individual engineers to follow these simple rules. You do that by teaching the rules, and then making sure that rule violations are measured. Work is considered unacceptable when a defined level of pollution (such as 1 major/page) is exceeded. Under such conditions, an individual will learn at a rate of about 50% defect injection reduction per cycle of specify/check/correct. We proved this initially in 1988 at Douglas Aircraft for hundreds of engineers working on airplane drawings. We repeated it the year after at Boeing, and later we experienced this same improvement rate at Ericsson. It doesn’t matter where or with which technology you do this; it works. Others, such as Raytheon Defense Electronics, have reported the same experience working within military software projects (Haley et al. 1995).
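To see why ‘two orders of magnitude within weeks’ is plausible, here is a quick calculation, assuming (as an idealization) that the 50% injection reduction per cycle holds steady:

# How many specify/check/correct cycles does a steady 50% reduction per
# cycle take to bring ~90 majors/page down below 1 major/page?
majors_per_page = 90.0
cycles = 0
while majors_per_page >= 1.0:
    majors_per_page *= 0.5  # 50% defect injection reduction per cycle
    cycles += 1
print(cycles, round(majors_per_page, 2))  # 7 cycles -> about 0.7 majors/page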

What does all this mean for design reviews? In summary, we need to quality control all specifications and make sure they are not polluted, before we can seriously undertake a design review to look at whether we are doing the ‘right thing.’

I suggest that you need at least 2 entry conditions into a design review:

[Figure: The two suggested entry conditions into a design review]

I recommend that you consider setting the numeric level at ‘less than 1.0 estimated remaining major defect/page’. Reviewing anything more polluted than that is wasting your review team’s time and your project time. What do you do today instead?
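A hedged sketch of such a numeric entry gate (the 1.0 threshold is the one suggested above; the function name and example figures are invented):

def ready_for_design_review(estimated_majors_present, majors_found_and_fixed,
                            pages, threshold=1.0):
    """Entry condition: the estimated remaining major defects per page
    must be below the threshold before the review is allowed to proceed."""
    remaining_per_page = (estimated_majors_present - majors_found_and_fixed) / pages
    return remaining_per_page < threshold

# Example: 90 majors estimated on one page, 85 found and fixed -> 5/page remain.
print(ready_for_design_review(90, 85, pages=1))  # False: still too polluted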

Principle DR2: You cannot review design if you are unclear whether a design is mandatory (a design constraint, which is a ‘requirement’) or optional (one of many possible design solutions).

What most people title ‘Requirements’ often includes some design specification, which is not actually part of the requirements. Only design constraints should be specified as part of the requirement specification: any design that is optional has no place there, it ought to be only in the design specification. (In case you are wondering about this: requirements are the end states needed by stakeholders, irrespective of implementation detail. Design is a decision about how to implement those requirements in practice.)

Of course, mandatory designs (the required design constraints) must also have detailed design specifications. You need to ensure you have the design specification for any mandatory design (clearly marked as mandatory), and that it is considered alongside the optional design.

Principle DR3: Design impacts must be calculated and submitted to a review; they cannot systematically be developed during the review.

There is no time in a design review to begin a process of collecting facts and making estimates about the multiple impacts on performance and costs of each design. The proper time to do that is before the review, not during it.

If the estimates are not made, this fact should be caught in the SQC preceding the review. Note that such SQC merely observes that such estimates are not made, or not made properly (for example, they lack any evidence).

So, tell me, do your current design documents integrate, or reference, the impact estimations for the designs, ready to feed into the design review process?

Principle DR4: It is part of the purpose of a design review to identify any defective information in the design impact estimates.

The design reviewers are allowed to question the accuracy of the impact estimates, and the evidence and credibility claimed. The design reviewers are experts in their fields and should consider whether they agree with the data being presented to them.

SQC carried out prior to a design review is mainly concerned with finding out that the required impact estimation process for designs was apparently performed.

Principle DR5: Certain objective review criteria must be met prior to actually carrying out a review; otherwise the review may be wasted.

In addition to making sure the design engineer has in fact adhered to the design specification rules (See Figure 5), there needs to be a check against design review rules in preparation for a design review. Only if the design meets the design review rules should a review proceed.

Principle DR6: The review process should not wait until a large amount of specification is completed; sampling reviews should be held early and frequently to spot systemic problems early.

Once, on a German air traffic control project done in Sweden, I saw the signatures of 7 managers approving the detailed logic of the air traffic management: 40,000 pages of logic design (the next stage was the programming). Later that day, using the sampling process described earlier, they found 19 major defects in a random representative sample of 3 of the 40,000 pages of logic design. This was of course about one third of what was actually present in the pages (but for managers they were pretty good!). Their divisional director took 30 minutes to personally check the 19 majors – while the 8 of us waited in his office one evening – and agreed that they were serious. At about 20 majors per page present, that meant there were about (20 x 40,000) 800,000 majors approved for coding by the management review committee. I asked signee number 3 on the list why he had signed off what we now recognized was a very polluted document. His reply will not surprise the experienced reader: ‘Because the other two signed it before me.’ Now you know why I am skeptical about review committee approvals!
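The arithmetic behind that estimate, spelled out (using the same one-third checking effectiveness assumed earlier):

# Extrapolating from the air traffic control sample (numbers from the text).
majors_found_in_sample = 19
sample_pages = 3
checking_effectiveness = 1 / 3  # the managers found about a third of them
total_pages = 40_000

majors_per_page = (majors_found_in_sample / sample_pages) / checking_effectiveness
print(round(majors_per_page))  # 19, which the text rounds to about 20 per page
print(round(majors_per_page * total_pages))  # 760,000: the order of the ~800,000 quoted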

We had many an interesting discussion on the basis of this finding. The central one was that if they had bothered to do some simple samples early, for example after the first 100 pages had been written, they might have been able to prevent most of this work from being totally wasted. In other words, if reviews had been carried out earlier, and if they had demanded that numeric quality controls be in place, then it is unlikely that the defect injection would have been allowed to continue at such a level.

So there is a lesson here about early sampling of partially completed work. Don’t wait for the entire specification to be finished before you do some basic quality control on it – a point we shall return to towards the end of this paper.

Principle DR7: A design review process should be carried out on a realistic amount of information and be conducted at an effective rate. Don’t overwhelm the reviewers, or they will become incapable of spotting problems.

If an attempt is made to review too great a quantity of design at once, then the analysis is unlikely to be done sufficiently, and the truth is likely to be obscured. If 40 or more design specifications (or, as in the example above, 40,000 pages) are delivered at once for design review, then the review group will clearly not have time to study, discuss, or criticize anything in detail. The reality of risks and problems will be lost. It will not be possible for the review team to learn about their misjudgements. The project or product will probably fail, and we will not be clear about the cause of failure.

I suggest that we need to feed only a small volume of ideas into a design review committee:

– in order to give the committee a chance to do its work properly

– in order to make sure that any preparatory work has been done properly

May I suggest one page of design per committee hour as a rough guide?

Principle DR8: In order to benefit from feedback, the design review process should be done evolutionarily, as a long series of design reviews: each review deciding on the next planned evolutionary step.

Ideally, most projects should be carried out using Evolutionary Project Management (Evo). The critical distinction between Evo methods and their generic cousins, iterative (we do cycles) and incremental (we accumulate), is that evolutionary processes (which are also both iterative and incremental) gather facts about impacts, analyse those facts, and change behavior if necessary, in order to succeed in the higher-level objectives (we learn!).

Evo means testing the stakeholder effects of a design idea in the field. We are suggesting maybe 50 ideas, one a week, for a year of development effort. The design review committee becomes a learning process: that is, the review team will benefit from the Evo cycle experience feedback from the implementation of each previous design; and they will learn quickly and realistically, from real experience, how to evaluate designs.
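As a highly simplified, hypothetical sketch of such a loop (the step data, the optimism factor and the decision rule are all invented; in a real Evo project the measurement is of stakeholder value in the field):

# Hypothetical Evo review loop: deliver one small design step per cycle,
# measure its real impact, and let the feedback steer the next decision.
planned_steps = [
    {"design": "Caching", "estimated_impact": 0.20},
    {"design": "Query rewrite", "estimated_impact": 0.15},
    {"design": "Index tuning", "estimated_impact": 0.10},
]

def measure_real_impact(step):
    """Stub: stands in for real field measurement of stakeholder value."""
    return step["estimated_impact"] * 0.6  # pretend estimates ran 40% optimistic

for cycle, step in enumerate(planned_steps, start=1):
    actual = measure_real_impact(step)
    deviation = actual - step["estimated_impact"]
    print(f"Cycle {cycle}: {step['design']}: estimated "
          f"{step['estimated_impact']:.2f}, measured {actual:.2f}")
    if deviation < -0.05:
        print("  Review decision: estimate too optimistic; re-rank remaining steps.")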

Principle DR9: The real purpose of design reviews is not to approve a design as correct, but to uncover specific risks and ignorance.

Design reviews are about risk analysis, and we need to build a much better foundation in order to reliably carry out that function. This is where Evolutionary Project Management (Evo) methods are helpful. If you do not implement your designs evolutionarily, you might never learn that a particular one of them was your downfall. Even if you do learn which one it was, it is probably too late to do anything about it on this project.

If you are using Evo, a review could even deliberately decide to implement a high-risk step to find out the results. It is the benefits obtained that count, not the total avoidance of risk! The key benefit of more frequent reviews is that risk is made more manageable. This might mean that lower-level management can be empowered to make the review decisions. Alternatively, it could mean the design review itself might become obsolete, since reality is going to give designers more reliable advice than any committee.

Conclusions

Any design specifications input into a design review must be of known high clarity and completeness: they should have successfully exited from Specification Quality Control (SQC) using both design specification rules, and then later using design specification review rules.

Design reviews should be held throughout the lifetime of a system. They should be held at early stages on samples of the design work to ensure initial problems and misunderstandings are detected as soon as possible. Reviews should also be held at appropriately frequent intervals, to avoid giving reviewers too much to review at one time.

For an evolutionary project, reviews should be held to decide/agree each ‘next’ evolutionary step. By utilising feedback, reviewers will learn more about their mistakes and successes, and any actions required to correct/improve project progress can be taken.

A design review should not be an informal management meeting to collect unfocused opinions under pressure.

References

Gilb, Tom and Dorothy Graham, Software Inspection, Addison-Wesley, ISBN 0-201-63181-4, 471 pages, 1993. (13th printing in 2005).

Gilb, Tom, Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering using Planguage, Elsevier, 2005. SQC, Design and Evo each have their own chapters in this book.

Haley, T., B. Ireland, Ed. Wojtaszek, D. Nash, R. Dion, Raytheon Electronic Systems experience in Software Process Improvement, Raytheon, 1995. This paper is available online at http://www.sei.cmu.edu/publications/documents/95.reports/95.tr.017.html

Johnson, Jim, Karen D. Boucher, Kyle Connors, and James Robinson, “Collaborating on Project Success,” Software Magazine, February 2001.

Morris, Peter W. G., The Management of Projects, London, Thomas Telford, ISBN 0-7277-1693-X, 358 pages, 1994.

Taylor, Andrew, “IT projects sink or swim,” BCS Review, 2001.

 
