Testing quality into software costs us dearly – is there a better way?
by Prof Barry Dwolatzky
A certain company in Johannesburg has, over the past few years, been outsourcing its software testing to a large Indian company. The value of this contract is R400 million per annum. The Johannesburg company employs several hundred software developers who write applications that support its business operations. It is these applications that are tested in India.
The major aim of software testing is to expose “defects”. These defects are errors made by analysts, architects, designers and programmers during the software development lifecycle. Various international studies suggest that a piece of software going into system test contains more than 25 defects per thousand lines of code (KLOC). Data collected by Xerox in the USA showed that the average time required to remove each defect found in system testing is 1405 minutes (about 23.4 hours). This figure includes the time needed to find the symptoms of the defect in testing … and then the re-work done by the original developer in locating and fixing the error.
Defects are therefore costing the Johannesburg company mentioned above far more than the R400 million outsourced testing contract. There is also the large amount of time its in-house developers waste on re-work and debugging. At 25 defects/KLOC the company is shipping many thousands of defects to India. Some of these (but certainly not all) are then reported back to Johannesburg, where tens of programmer-hours are required to fix each one.
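To get a feel for the scale involved, here is a rough back-of-the-envelope sketch in Python. The annual volume of new code (400 KLOC) is purely an assumed figure for illustration (the post does not give one); the defect density and fix time are the figures quoted above.

    # Rough back-of-the-envelope estimate of annual rework effort.
    # The annual code volume is an assumption for illustration only;
    # the defect density and fix time are the figures quoted in the post.
    annual_kloc = 400            # assumed: thousand lines of new code per year
    defects_per_kloc = 25        # quoted: defects entering system test per KLOC
    minutes_per_defect = 1405    # quoted: average find-and-fix time per defect

    defects_shipped = annual_kloc * defects_per_kloc
    rework_hours = defects_shipped * minutes_per_defect / 60

    print(f"Defects entering system test per year: {defects_shipped:,}")
    print(f"Estimated rework effort: {rework_hours:,.0f} programmer-hours")
    # With these assumptions: 10,000 defects and roughly 234,000 hours of
    # rework per year, on top of the cost of the testing contract itself.

Even if the assumed code volume is halved, the rework burden remains enormous.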
The waste in effort and money is almost mind-boggling! Surely there is a better way?
I believe that the answer lies in putting higher-quality code into system testing. It is obvious that if a way could be found to reduce the number of defects per KLOC from 25 to (say) 10, the saving would be tremendous. Not only that – we would also make software development projects more predictable.
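Using the same quoted figures, the saving per thousand lines of code is easy to sketch; the only numbers involved are those already given above, so this is an illustration rather than a precise costing.

    # Per-KLOC rework avoided by cutting defect density from 25 to 10,
    # using the 1405-minutes-per-defect figure quoted above.
    minutes_per_defect = 1405
    hours_saved_per_kloc = (25 - 10) * minutes_per_defect / 60
    print(f"Rework avoided per KLOC: {hours_saved_per_kloc:.0f} hours")
    # Roughly 350 programmer-hours of downstream rework avoided for every
    # thousand lines of code that goes into system test.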
The reason is that the time it takes to find and fix a defect is very unpredictable. Simple defects are found and cleared in minutes; others may take days or even weeks to resolve. The fewer defects found in testing, therefore, the more predictable software development projects become.
Another key issue is that because finding defects in system test is so time-consuming and unpredictable, the project usually runs out of time and budget before all the defects are found. The development team knows that if more tests are run, more defects will be found … but who will pay for this extra testing effort?
There is a proven way of dramatically reducing the number of defects in software before system testing starts. It lies at the heart of the “Team Software Process” (TSP), which is now being used with great success by companies in the USA, Mexico and elsewhere. We at the JCSE have just run a year-long TSP pilot at Nedbank. The results in terms of quality have been extremely encouraging.
On Tuesday I will be unveiling the JCSE’s “Thousand Job Strategy”, which aims to make a significant impact on the South African software development sector. TSP, as a way of improving the quality and predictability of software development projects, is a central element of the strategy.
If you are interested in hearing more and debating this strategy with me, please join us at the JCSE’s Annual Process Improvement Symposium on the morning of Tuesday 26th October 2010 (see www.jcse.org.za for more details). If you can’t join us then hopefully the debate will continue on this blog where I will post more details of the strategy after its launch on Tuesday.
Good luck with your endeavours, Barry. It is great to hear that something is being done to proactively upgrade the software industry in South Africa, rather than continuing to abdicate our quality to India.
We have to get into the global economy and cannot do it with Apartheid siege-economy hacking anymore. Sorry I can’t make the Symposium, but I will keep in touch with what is happening anyway.
It is heartwarming to hear that some proactive measures are being taken to harness the technological capability of the software industry in South Africa, and hopefully the rest of Africa. Demand in the global software market is huge, and it will only grow as Mark Weiser’s dream of ubiquitous computing slowly but steadily becomes a reality.
And as the saying goes, “an opportunity can only be seized if one plans for it.” Africa needs a concerted effort, such as the seed you are sowing at the JCSE, to prepare to take advantage of the opportunity.
It is unfortunate that I cannot attend the symposium, as I am busy working on my dissertation out here at KAIST, South Korea. Best wishes!
This problem has many aspects and facets, so contributions towards its solution must also come from different directions. Barry’s contribution is strongly people-centred (TSP): if the people are working and cooperating in a better way, if they are more diligent, if they are more committed, and so on, then their cooperative product should also be of higher quality (and this should be true generally, not only in the profession of software construction). From a technology-centred perspective, many contributions have been made that can be subsumed under the motto “correctness by construction”. Here the idea is to avoid defects by deriving executable code through correctness-preserving refinements from strongly validated (ideally even formally verified) specifications. Last but not least, from a third perspective, the “business” of software testing itself can be considerably improved by adhering to scientific principles: it should make a significant difference, for example, whether we merely run black-box tests on randomly generated input values or base our tests on mathematical logic and graph theory. In an ideal “software heaven”, all three threads (1: well-organized cooperation amongst diligent people; 2: correctness by construction and refinement; 3: testing on the basis of scientific principles) would probably be woven together into one coherent methodological fabric. In a “software hell”, by contrast, we would probably find none of them at all.
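As a minimal illustration of the third thread, the sketch below (in Python, using an invented order-processing workflow purely for illustration) contrasts the two testing styles just described: rather than feeding the system randomly generated inputs, the tester models it as a directed graph of states and transitions and derives event sequences that are guaranteed to exercise every transition at least once.

    # Minimal sketch of graph-based test selection. The workflow below is a
    # hypothetical example; the point is that deriving test sequences from the
    # transition graph gives a coverage guarantee that random inputs cannot.
    TRANSITIONS = {
        "new":       [("pay", "paid"), ("cancel", "cancelled")],
        "paid":      [("ship", "shipped"), ("refund", "cancelled")],
        "shipped":   [("deliver", "delivered")],
        "delivered": [],
        "cancelled": [],
    }

    def test_sequences(state="new", prefix=()):
        """Enumerate every event sequence from the start state to a terminal
        state; for this small acyclic graph they cover every transition."""
        if not TRANSITIONS[state]:
            yield prefix
            return
        for event, next_state in TRANSITIONS[state]:
            yield from test_sequences(next_state, prefix + (event,))

    for sequence in test_sequences():
        print(" -> ".join(sequence))
    # Prints three sequences (pay -> ship -> deliver, pay -> refund, cancel)
    # that between them exercise all five transitions of the graph.

For larger, cyclic state machines one would use a proper edge-coverage algorithm rather than simple path enumeration, but the principle is the same: the graph model tells the tester exactly what has and has not been exercised.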