Well not all evils – but most project evils. Today I wear my Technical Project Manager hat – and I’m going to propose the source of most IT project ills. This is particularly relevant for greenfield IT projects. I’m also going to touch on why it is impossible to test everything.
I came across a question on a forum where someone had asked how many test cases were necessary. One response suggested that aiming for 100% coverage of the requirements would be good. Now 100% coverage (to me) should mean that you have exhausted every possible outcome a system can produce. Requirements coverage is very different and means something else entirely – it also tends to lead to a false sense of security.
There are coverage tools available (e.g. RequisitePro) so you can link test cases to requirements. If you link all the test cases to the requirements then, hey presto, you can report "100% coverage". It quickly became apparent to me that just because you had a test case for each requirement didn't mean you had 100% coverage. What it meant is that you had written a test case for that requirement – the actual coverage largely depends on the knowledge and skill of the person testing it. In reality you need several test cases to cover a single requirement, and no one has time to exhaustively write and test all the possible outcomes. Let's take a hypothetical situation:
Project X has 10 business requirements
- Each of these business requirements can be decomposed into 10 functional specs
- On average each of these functional specs gives rise to 5 different Test cases.
- This means 10 × 10 × 5 = 500 test cases to write.
- Each of these test cases takes 0.5 days to write = 250 man days
- Each test case takes 0.5 days to execute = 250 man days
- Requirements change, it's a fact – so let's say that we have to revise 30% of these. That's 150 test case revisions.
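The arithmetic above can be sketched in a few lines. The figures are the article's hypothetical numbers for Project X, not measured data from a real project:

```python
# Hypothetical test-effort arithmetic for Project X.
business_reqs = 10      # business requirements
specs_per_req = 10      # functional specs per business requirement
cases_per_spec = 5      # test cases per functional spec
days_to_write = 0.5     # man days to write one test case
days_to_run = 0.5       # man days to execute one test case
churn = 0.30            # fraction of test cases revised as requirements change

test_cases = business_reqs * specs_per_req * cases_per_spec
writing_days = test_cases * days_to_write
execution_days = test_cases * days_to_run
revisions = test_cases * churn

print(f"{test_cases} test cases")            # 500 test cases
print(f"{writing_days:.0f} man days to write")    # 250 man days
print(f"{execution_days:.0f} man days to execute")  # 250 man days
print(f"{revisions:.0f} test case revisions")       # 150 revisions
```

Note that the total grows multiplicatively: double the specs per requirement and every downstream number doubles with it.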
We can see it all becomes unmanageable very quickly – and in most projects I've been on, the test team gets overwhelmed when taking this approach.
So even if a project has 100% coverage of the requirements, it doesn't mean all outcomes have been tested. Requirements coverage is useless on its own – what it gives is an indication of the testing effort invested and a confidence level in the testing. It really isn't a strong quality indicator. Then I started thinking ….. how can we actually test all outcomes?
Why Everything Cannot be Tested – and more
Now let's take an abstract view of the problem. The reason it is impossible to test everything became apparent after I recalled a university lecture: I had opted for a mathematics module and was being subjected to a series of classes teaching Z notation. For those of you fortunate enough not to have sat through these lectures, Z is a way of defining and specifying requirements in mathematical notation – the resulting requirements specification ends up looking like hieroglyphics. I asked my professor (Brian Burrows) why this horrible series of lectures was happening. He said something that stuck with me: "If you cannot decompose and define a system mathematically, you can't fully understand it and it is impossible to prove." Wow, I understood. He was saying that unless we could define the requirements rigorously (and the only way to do this is through mathematics), it was impossible to derive the exact behaviour of the system and test all possible outcomes. This is why Z notation came about – it is a mathematical way of defining the requirements of a system. So the takeaway here is: if you can't define a requirement mathematically, you don't fully understand it and you can't test it. (Theory over.) Back in the real world this never happens – requirements are never specified using Z notation.
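To give a flavour of what a mathematical specification buys you, here is a small illustrative predicate (plain logic rather than actual Z syntax, and the discount rule itself is invented for the example). The English requirement "large orders get a discount" is ambiguous; the formal version is not:

```latex
\forall o \in \mathit{Orders}:\quad
\mathit{discount}(o) =
\begin{cases}
  0.1 \times \mathit{total}(o) & \text{if } \mathit{total}(o) \geq 100 \\
  0 & \text{otherwise}
\end{cases}
```

Every question a tester could ask – does an order of exactly 100 qualify? does the discount apply per order or per item? – is answered by the definition itself, which is precisely what an English sentence cannot guarantee.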
Now I think in reality requirements are fundamentally imprecise because English is imprecise. What people mean and what people specify are two different things. It is practically impossible to define something perfectly in English, even if you know what it is you are trying to define. Also, in the real world the majority of people do not speak mathematics; they speak English. Herein lies the problem. Many IT stakeholders think they know what they want – but actually have difficulty expressing and defining what they want. What they think and what they say can be two different things. IT systems that haven't been built yet are an abstract concept to them, which compounds the problem.
Let's take an example that everyone knows – a motor car. Let's say you wanted a Ford Focus and you specified the requirement:
| Requirement Attempt | Results |
| --- | --- |
| Wheels and motorized | Lorry, bus, JCB digger, coach, steam roller |
| 4 wheels, motorized and carries 4 people | Jeep, electric car, bus, minivan |
| 4 wheels, motorized, 4 people only, petrol | Car (of a 1940s variety) |
| 4 wheels, motorized, 4 people only, petrol, boot, enclosed, capable of 60 mph | ….. |
Even with the last requirement there is still a multitude of machines and cars that will fit the bill. The above example may seem like an exaggeration, but it illustrates that even when stakeholders know what they want, the requirements will be prone to different interpretations. When asked to specify requirements, users will produce an imprecise description, something imprecise will be built, and testing will be fundamentally based on this series of events. This is exacerbated by the many layers of interpretation between initial definition and coding. So I arrive at the following conclusion: if it is impossible to state the system requirements precisely, how can we possibly test all of the possible outcomes? The answer is we can't.
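The table above can be sketched as predicates over a toy vehicle catalogue. All the attribute values below are invented for illustration – the point is that even the strictest requirement attempt still matches more than one machine:

```python
# A toy vehicle catalogue (attribute values invented for illustration).
vehicles = {
    "Lorry":        {"wheels": 6, "motorized": True, "seats": 3,  "fuel": "diesel"},
    "Bus":          {"wheels": 4, "motorized": True, "seats": 40, "fuel": "diesel"},
    "Jeep":         {"wheels": 4, "motorized": True, "seats": 4,  "fuel": "petrol"},
    "Electric car": {"wheels": 4, "motorized": True, "seats": 4,  "fuel": "electric"},
    "Ford Focus":   {"wheels": 4, "motorized": True, "seats": 5,  "fuel": "petrol"},
    "1940s car":    {"wheels": 4, "motorized": True, "seats": 4,  "fuel": "petrol"},
}

# Each "requirement attempt" from the table as a predicate on a vehicle.
attempts = [
    ("wheels and motorized",
     lambda v: v["wheels"] >= 2 and v["motorized"]),
    ("4 wheels, motorized, carries 4 people",
     lambda v: v["wheels"] == 4 and v["motorized"] and v["seats"] >= 4),
    ("4 wheels, motorized, 4 people, petrol",
     lambda v: v["wheels"] == 4 and v["motorized"]
               and v["seats"] >= 4 and v["fuel"] == "petrol"),
]

for label, predicate in attempts:
    matches = [name for name, v in vehicles.items() if predicate(v)]
    print(f"{label!r} matches {len(matches)}: {matches}")
```

Each extra clause narrows the candidate set, but never down to a single machine – which is exactly the stakeholder's problem when writing requirements in English.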
The imprecise nature of requirements could also go some way to explaining the following:
- Constantly modifying requirements
- New requirements entering the system
- Incorrect or badly built systems – the system doesn’t do what we wanted it to
- Badly delivered systems that do not meet the customer expectations but meet specifications
- Failure of large IT projects
- Defects when the system goes live
- Failure of a large number of Waterfall development projects (months/years before delivery to end users)
- ….. the list goes on
I find that the majority of issues within projects can usually be traced back to the requirements – hence the title 'Requirements are the source of all project evil'. So what's the answer? We can't just throw away stakeholder requirements – but if we accept that requirements are likely to be stated imprecisely, we can gear the project methodology around this and enable better delivery.
This, for me, is where SCRUM is a step in the right direction. One of the major strengths of SCRUM is the way it fundamentally forces regular deliveries. Stakeholders' cloudy requirements become less cloudy and more solid as they see the product being delivered, and knowledge is transferred between stakeholders and the delivery team (and vice versa) through regular validation between builds. 'Dumb prototyping' of screens with stakeholders in the requirements stage is another way to help firm up requirements, as are requirement reviews. I will talk about more ways to make a project successful in another article. Imprecise requirements have an enormous impact on a project – man days and project durations are affected exponentially. The people affected most are those at the end of the IT project food chain: testers (testing timeframes squeezed) and users (attempting to make a system they don't like work). More time needs to be spent getting requirements correct and validating understanding.
- The language in which initial requirements are specified is imprecise.
- Regular engagement between end users and application delivery will result in fewer project evils
- If we accept requirements are defined imprecisely we can plan accordingly and deliver a better product