I saw a couple of disturbing statistics about software testing in the keynote at the recent New Zealand Application Lifecycle Management conference.
- The best techniques for defect detection identify only around 75% of all defects.
- The average detection rate is around 45% of all defects.
If there were a single technique that identified 97% of all defects, I'm sure we'd all be learning it tomorrow. But when there is no "silver bullet" technique, what do we do?
Surely the key takeaway is that we need to use a variety of testing techniques.
Refining a single technique will bring only incremental improvement; it's by adopting further techniques alongside what we're already doing that we'll gain significant ground.
Here are some ideas.
- Write unit tests, working at the level of individual methods to verify behaviour and assumptions (see the first sketch below).
- Write guard clauses to verify parameters and state at the start of all your public (and possibly private) methods. If you're working in .NET 4.0, use the System.Diagnostics.Contracts namespace to do this and you'll have the option to leave the checks out of the Release build (example below).
- Use the SOLID principles to structure your code, avoiding complex and obscure interdependencies (see the dependency inversion sketch below).
- Write automated acceptance tests, using tools like SpecFlow, to specify and verify whole features in straightforward language that your users and customers will understand (example below).
- Use test generation (such as that offered by Pex) as another way to exercise your system and expose defects (sketch below).
- Hold code reviews: have another experienced developer formally review any complex business logic, both at the design stage, where it's easy to change, and once you've finished the code itself.
- Use code metrics to identify suspect areas that need further study or simplification.
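To make some of these ideas concrete, here are a few sketches. All the class and method names below (DiscountCalculator, Account, and so on) are invented for illustration; adapt them to your own domain.

First, a minimal NUnit unit test, checking both the expected behaviour of a method and an assumption about invalid input:

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test.
public class DiscountCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        if (rate < 0 || rate > 1)
            throw new ArgumentOutOfRangeException("rate");
        return price * (1 - rate);
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void ApplyDiscount_WithTenPercentRate_ReducesPriceByTenth()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(90m, calculator.ApplyDiscount(100m, 0.1m));
    }

    [Test]
    public void ApplyDiscount_WithNegativeRate_Throws()
    {
        var calculator = new DiscountCalculator();
        Assert.Throws<ArgumentOutOfRangeException>(
            () => calculator.ApplyDiscount(100m, -0.1m));
    }
}
```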
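Next, guard clauses written with Code Contracts. This sketch assumes a hypothetical AccountService; Contract.Requires calls express preconditions, and with the Code Contracts rewriter installed you choose per build configuration whether they are enforced at runtime:

```csharp
using System.Diagnostics.Contracts;

public class AccountService
{
    public decimal Balance { get; private set; }

    public void Withdraw(decimal amount)
    {
        // Preconditions, verified at the start of the method. The Code
        // Contracts build settings control whether these checks are
        // compiled into a given configuration (e.g. Debug but not Release).
        Contract.Requires(amount > 0);
        Contract.Requires(Balance >= amount);

        Balance -= amount;
    }
}
```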
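Of the SOLID principles, dependency inversion is the easiest to show in a few lines: have high-level code depend on an abstraction rather than a concrete implementation. The INotifier and OrderProcessor names here are invented:

```csharp
// High-level code depends on this abstraction, not on a concrete
// notification mechanism, so implementations can vary independently.
public interface INotifier
{
    void Send(string message);
}

public class EmailNotifier : INotifier
{
    public void Send(string message)
    {
        // Send the message by email; omitted for brevity.
    }
}

public class OrderProcessor
{
    private readonly INotifier notifier;

    // The concrete notifier is supplied from outside (constructor injection).
    public OrderProcessor(INotifier notifier)
    {
        this.notifier = notifier;
    }

    public void Process()
    {
        // ... order-handling logic ...
        notifier.Send("Order processed.");
    }
}
```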
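For acceptance tests, SpecFlow lets you write a scenario in plain language (a Gherkin feature file) and bind each step to C#. The Account class is another invented example:

```gherkin
Feature: Account withdrawal
  Scenario: Withdrawing less than the balance
    Given an account with a balance of 100
    When I withdraw 40
    Then the balance should be 60
```

```csharp
using NUnit.Framework;
using TechTalk.SpecFlow;

// Minimal domain class for the example.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal balance) { Balance = balance; }
    public void Withdraw(decimal amount) { Balance -= amount; }
}

[Binding]
public class WithdrawalSteps
{
    private Account account;

    [Given(@"an account with a balance of (.*)")]
    public void GivenAnAccountWithABalanceOf(decimal balance)
    {
        account = new Account(balance);
    }

    [When(@"I withdraw (.*)")]
    public void WhenIWithdraw(decimal amount)
    {
        account.Withdraw(amount);
    }

    [Then(@"the balance should be (.*)")]
    public void ThenTheBalanceShouldBe(decimal expected)
    {
        Assert.AreEqual(expected, account.Balance);
    }
}
```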
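Finally, a parameterized unit test of the kind Pex explores. Rather than fixing the inputs, you state a property that should hold for all inputs and let Pex hunt for counterexamples. This sketch assumes the attribute and helper names from the Pex framework ([PexClass], [PexMethod], PexAssume) and reuses the hypothetical DiscountCalculator from the first sketch:

```csharp
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(DiscountCalculator))]
public partial class DiscountCalculatorPexTests
{
    // Pex generates concrete (price, rate) values that exercise as many
    // code paths as it can, reporting any inputs that break the assertion.
    [PexMethod]
    public void ApplyDiscountNeverIncreasesPrice(decimal price, decimal rate)
    {
        PexAssume.IsTrue(price >= 0);
        PexAssume.IsTrue(rate >= 0 && rate <= 1);

        var result = new DiscountCalculator().ApplyDiscount(price, rate);

        Assert.IsTrue(result <= price);
    }
}
```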
What additional techniques would you suggest? Please comment below.