My “Thousand Job Strategy” to be launched at JCSE’s Process Improvement Symposium
by Prof Barry Dwolatzky
So why do I write this blog? The answer is simple … I’m on a crusade. The sub-title of my blog makes it clear what this crusade is (broadly) about. It says I’m “passionate about the SA software industry”. My focus, however, is much sharper than that. Put quite simply … I’m on a crusade to ensure that the SA software sector grows in size and international reputation over the next 5 years. Furthermore, I need to be able to monitor and measure this growth accurately.
Is this a pointless crusade? Am I a Don Quixote figure tilting at windmills?
Obviously I believe strongly that my mission is achievable. I also don’t, for one moment, underestimate the difficulties I face.
So – let me lay it down in front of you! Here is my action plan:
1. My first step is to clearly define what I mean by the “SA software development industry”.
2. Having agreed what the “industry” is, I need to measure its current performance. After considerable thought I’ve decided that the performance of the industry will be determined by collecting a set of 5 numbers from as many software development projects as possible. These numbers are:
- Team size: Number of people in the team.
- Schedule Performance: What was the difference (in days) between the promised completion date and the actual completion date?
- Cost performance: What was the difference (in Rands) between the promised budget and the actual cost?
- Project size/complexity: How big and how complex was the application developed in the project?
- Quality: How many defects (or “bugs”) were discovered during system testing?
These per-project measures will then be averaged to give a measure of the state of software development in South Africa.
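To make the aggregation concrete, here is a minimal sketch of how the five per-project numbers might be collected and averaged. The field names and figures are purely illustrative, not part of Barry’s actual survey instrument, and project size is assumed (for the sake of the example) to be measured in function points:

```python
from statistics import mean

# Hypothetical per-project records; all names and numbers are invented.
projects = [
    {"team_size": 6,  "schedule_slip_days": 30, "cost_overrun_rands": 150_000,
     "size_function_points": 420, "defects_in_system_test": 38},
    {"team_size": 12, "schedule_slip_days": 0,  "cost_overrun_rands": -20_000,
     "size_function_points": 900, "defects_in_system_test": 55},
    {"team_size": 4,  "schedule_slip_days": 75, "cost_overrun_rands": 300_000,
     "size_function_points": 210, "defects_in_system_test": 61},
]

def industry_averages(projects):
    """Average each reported measure across all projects."""
    keys = projects[0].keys()
    return {k: mean(p[k] for p in projects) for k in keys}

averages = industry_averages(projects)

# Normalising defect counts by project size gives a quality figure
# that can be compared across projects of different sizes:
defect_density = mean(
    p["defects_in_system_test"] / p["size_function_points"] for p in projects
)
```

The normalisation step matters: a raw defect count favours small projects, whereas defects per function point (or per thousand lines of code) lets teams of very different sizes be compared on the same scale.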
3. I will then implement a strategy (see below) to improve the performance of the industry. My strategy also aims to increase the number of people employed in developing software in South Africa.
4. On an ongoing basis, the measures listed above will be collected and reported.
5. If my crusade is to be a success, I would want to see improvements in both performance and the number of jobs.
Before you say that this is “pie-in-the-sky”, or “mission impossible”, let me ask: what else should we do to sustain and grow our local software industry? We need to have ambitious plans, and (I believe) we need to monitor progress. I accept that it’s going to be difficult, but I’m ready to try!
I’ve developed a strategy (see point 3 above) that aims to achieve my mission. I call it the JCSE’s “Thousand Job Strategy”. It aims to create 1,000 new software development jobs in South Africa over the next 3 years. It also aims to achieve a significant and quantifiable improvement in the performance of local software development teams.
Are you interested in finding out more about the “Thousand Job Strategy”? It will be unveiled at the annual JCSE Process Improvement Symposium on 26th October 2010 from 8:45 to 12:45 at the Sunnyside Park Hotel, Johannesburg. I will be inviting comments, both supportive and critical.
The Symposium will also be addressed by the eminent international software engineer, Prof. Dr.-Ing. Manfred Nagl, Emeritus Professor of Software Engineering, RWTH Aachen University, Germany.
Visit www.jcse.org.za to find out more about the Symposium. Documents describing the “Thousand Job Strategy” will be posted on this blog after the Symposium.
Barry’s is surely an interesting survey project; once complete, a number of significant papers could be published from it at various conferences and in software engineering journals.
RE: >>Quality: How many defects (or “bugs”) were discovered during system testing?<<
Well, if I did no software testing at all, then I could truthfully report that “zero defects were discovered during system testing”; and indeed it is well known that testing is generally not carried out as thoroughly as it could and should be. Moreover, the number of defects detected in testing depends not only on how thoroughly the testing phase is carried out; it also depends on a system’s “testability”: a statistical figure that quantifies how amenable a system is to any testing approach at all, i.e. how sensitively the system’s internal states react to variations in its input values, etc.
Those theoretical details about testing and testability aside, I fully agree with Barry's idea about benchmarking and about continuous measurements of statistically significant project data.
This ambitious survey project, however, does not only need a sophisticated database system as its techno-methodological backbone; it also relies very strongly on the (rather unlikely) willingness of commercial software houses to feed their proprietary statistical data (if they do any internal measurements at all) honestly and accurately (without “cosmetics”) into Barry’s survey database.
Moreover: in terms of CMMI/TMMi it is important to note that those IT houses that already collect their own internal statistics (whether they are willing to share them with Barry or not) are already at a rather high level on the CMMI/TMMi scale, which means that the statistical data collected from those IT houses should not be “too bad” anyway. More concerning are the many “methodologically invisible” IT houses at a low CMMI/TMMi level, which, by the very definition of a low level, do not even collect their own internal statistics.
In other words: from a scientific point of view one must be carefully aware of a possible methodological bias at the very root of Barry’s survey project. We want to use statistics to measure the quality parameters of software production over a large domain, but at the lowest end of the quality scale (CMMI/TMMi level 1) the required statistical data are not even generated, and thus cannot be collected, surveyed or measured.
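The selection bias described in this comment is easy to illustrate numerically. Suppose (all figures invented for the example) that high-maturity shops, which do collect and share metrics, slip their schedules only slightly, while low-maturity shops slip badly but produce no data at all. Then the surveyed average can look far healthier than the true industry-wide figure:

```python
from statistics import mean

# Invented schedule slips (in days) for a notional population of projects.
high_maturity = [5, 10, 0, 15]     # shops that collect and report metrics
low_maturity = [60, 90, 45, 120]   # level-1 shops: no metrics, hence no data

true_average = mean(high_maturity + low_maturity)  # whole industry: 43.125
surveyed_average = mean(high_maturity)             # reporters only: 7.5

# The survey understates the industry-wide slip by the difference:
bias = true_average - surveyed_average
```

In this toy case the survey would report an average slip of 7.5 days while the true figure is over 43 — exactly the “methodological bias at the very root” that the comment warns about.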
Hello again!
Thematically related to Barry’s “crusade” is the
EASE’2011
15th International Conference on Evaluation & Assessment in Software Engineering,
Durham, April 11-12, 2011,
URL: http://www.scm.keele.ac.uk/ease/
Its synopsis says: “For over a decade the International Conference on Evaluation & Assessment in Software Engineering has provided a forum where empirical researchers can present their latest research, and where issues related to all forms of empirical and evaluation studies in software engineering can be discussed. The main theme of EASE’2011 will be: Making an Empirical Impact, concerned with addressing the question of how empirical results might influence the software engineering community in terms of its research and practice.”
Topics for papers can include any relevant aspect of empirical studies of software engineering, including:
• experiments (laboratory and field)
• replications of empirical studies
• case studies
• surveys
• observational studies
• field studies
• action research
• methodological issues and practices
• systematic reviews, mapping studies and meta-analysis.
The organizers also state that “all attendees will receive a free copy of the forthcoming O’Reilly e-book Making Software: What Really Works, and Why We Believe It, edited by Andy Oram & Greg Wilson”.
Programme Chair: Barbara Kitchenham
Local Organisation Chair: David Budgen
Grey College, Durham University.
Dear Barry,
I applaud your initiative and hope to see your goal of 1,000 new software development jobs in South Africa in 3 years achieved.
My recent studies (in management) have focussed on Systems Thinking – where an organisation exists in an increasingly complex and changing environment, calling for holistic solutions that concentrate on the whole of an organisation (or system) rather than isolated parts. Systems Thinking reveals how optimising the performance of one part (sub-optimisation) could have a negative effect on the whole.
I would like to suggest an examination of all the important variables in the “software engineering system” under investigation, and of how they interact with one another – a systems-thinking approach to the problem. I agree that software quality is a very important variable. On its own, however, it is not sufficient to solve our problem. Local companies need to support local software development – it’s no good creating jobs if local industry and government buy their software and services from overseas. SA needs to support, believe in and buy into local software the same way it supports local music, for example (which has developed and grown in leaps and bounds over the last decade) – local music quotas on radio stations, tax rebates for buying local software.
Systems Thinking tools such as Affinity Diagrams and Causal Loop Diagrams are very useful in revealing which variable has the largest impact on, and drives, the others. Acting on this variable then offers the highest chance of improving the problem system. Using Systems Thinking concepts, I hope to contribute to the development of a successful software development industry in South Africa.
Best regards,
Sonja Breet