#1: Becoming the Grand Geek (pt 1)

What do you know about Africa and South Africa, the exciting part of the world that I call home?

The continent of Africa isn’t only my home … it’s also currently home to 1.2 billion people, or about 17% of the world’s population. By 2050 our continent’s population will have doubled to nearly 2.5 billion people, or more than 25% of the world’s population. Sixty percent of Africans will live in large cities.

Over the same time period (that is, the next 30 years) we will see rapid changes in digital technology, which will impact every aspect of our economic, social, educational and political lives.

Our podcast focuses on the critically important issue of Africa’s future and how the continent deals with population growth and development. There are many possible scenarios for this … one of these is a vision in which we Africans take the lead in shaping digital transformation (what some call the Fourth Industrial Revolution) to meet the many challenges we face now and over the coming decades.

Each episode in our podcast will aim to be experience-based. I will draw on my own experience covering more than 40 years in the digital sector. I’ll also interview others who have expertise and hands-on experience as doers and leaders.

My focus is not on the nuts and bolts of digital technology … instead I will look at HOW digital technology is being used to transform businesses and organizations of all kinds.

Finally, I will strive to adopt a positive attitude. I’m exploring solutions, not merely highlighting the problems.

In this, the first episode of our podcast, I will tell you something about myself and how I gained the expertise and experience to lead the conversation about Africa’s digital future.

HOW I BECAME INTERESTED IN DIGITAL TECHNOLOGY

My love affair with ‘digital’ started more than 40 years ago. Let me tell you something about it…

I was born in the early 1950s, which makes me almost exactly the same age as that wonder of the modern world … the digital computer. While I was growing up in Orange Grove in the northern suburbs of Johannesburg, the very first commercial computers … machines very different from the great-great-great-grandchildren we know today … were being built and used in Britain and the USA, far away from my world on the southern tip of Africa. At that time nobody understood the enormous impact that computers would have. Urban legend has it that in 1943 Thomas Watson, the long-time head of IBM, predicted that the world would only ever need about five computers. Ever.

Seventy-five years later there are an estimated 7 billion computers in the world (if we call any device linked to the internet a ‘computer’). By the end of 2020 this will reach 10 billion, and by 2025, 50 billion.

My great and enduring love affair started in 1972 when I programmed my first computer. I was a second-year electrical engineering student at Wits University. I was 20 years old. Wits had just become the first university in Africa to acquire a digital computer, an early IBM mainframe, for research and teaching purposes. It was housed on the ground floor of the North West Engineering Building, home of the Electrical Engineering labs. I fell in love with it not because of its beauty but because of what it was capable of doing.

What did this computer look like?

Picture a large room. We called it the machine room. It’s very bright, very clean and very tidy. It has a number of almost identical cabinets neatly laid out. The room is mostly empty space between these carefully arranged cabinets. What you can’t see is that the cabinets stand on a raised floor (a kind of platform), and under the floor a vast spider web of cables links all of these cabinets together. A handful of people work in the machine room. Apart from these operators, who wear white coats over their street clothes (I kid you not!), the machine room is strictly No Entry. A huge aircon unit keeps the machine room at a constant humidity and chilly temperature … not for the comfort of the operators but for the well-being of the mysterious electronics in the cabinets. The shift leader (or chief operator) sits at a console in the centre of the room. It has a confusing array of switches, dials and flashing lights. The operators move silently around the machine room, from cabinet to cabinet, carrying huge disk packs and reels of magnetic tape.

When I first peered into the machine room through its large window I thought of the story in the Bible about the holy of holies, where an ark was built on God’s instructions to house the tablets that Moses brought down from Mount Sinai. Only the high priests could enter this sanctum; anyone else would be struck dead for trying. In my 20-year-old’s mind’s eye the computer sat in a modern-day holy of holies, tended by its very own high priests. Now you know why I love being called the Grand Geek. It makes me think of a modern-day shift leader … a high priest in the digital world.

Although impressive, that early computer hardware didn’t really interest me. My fascination grew out of making the hardware do something … in other words, writing programs. I’ve always been a software guy!

The programming language we learnt was FORTRAN IV. The first program I wrote solved a quadratic equation (if ax^2 + bx + c = 0 and I give you any values for a, b and c, what is the value of x?). The second program sorted a list of numbers from smallest to biggest – and then biggest to smallest. Other, increasingly complex programs followed. I would often lie awake at night thinking of new and clever ways to get the computer to do things. I felt powerful and creative. The scope was infinite.
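
To make that concrete, here is roughly what that first exercise amounts to, sketched in Python rather than the FORTRAN IV I actually wrote it in (the function and variable names are mine):

```python
import math

# Roughly what my first program did: solve a*x^2 + b*x + c = 0
# using the quadratic formula x = (-b ± sqrt(b^2 - 4ac)) / 2a.
def solve_quadratic(a, b, c):
    d = b * b - 4 * a * c       # the discriminant
    if d < 0:
        return None             # no real roots
    return ((-b + math.sqrt(d)) / (2 * a),
            (-b - math.sqrt(d)) / (2 * a))

print(solve_quadratic(1, -3, 2))   # (2.0, 1.0)
```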

To feed our program into the computer we had to prepare a deck of punch cards. Each cardboard card encoded a single line of the FORTRAN program, with a maximum of 72 characters per line. The characters were printed across the top of the card, and underneath each one was a column of punched holes representing the binary version of that character. Data to be used by the program was also punched onto cards. Other special cards contained codes instructing the computer what to do with your program.

We prepared our cards on punch card machines in a big open area adjoining the machine room. We called it “the barn”. Decks of cards, held together with a rubber band, would be handed in at a ‘dispatch window’ between the barn and the machine room. They would be added to a queue of “jobs” (sometimes a very long queue). The operators fed these cards into card readers. The results of the program would be printed out on continuous printer paper, separated from the outputs of other jobs by tearing off at the perforations. We would come back later (usually a day later) to collect our output and deck of cards. If there was an error in your program … and there were almost always errors in your program … the printed output would say something terrifyingly unhelpful like “fatal error”, or would print out tens of sheets of garbage text. In such cases you took your cards back and scratched your head trying to ‘debug’ the program … i.e. find the errors. Every error that needed fixing delayed finishing the program by a day or two. Moral of the story? Check your program carefully before submitting it and AVOID making errors! This is such an important lesson for current-day programmers: modern languages and tools don’t prevent you from introducing errors, and many bugs remain undiscovered in modern software. It is these bugs that sometimes create the security holes that cyber criminals find and exploit. Review your work, or get someone to inspect it, as early as you can so that you find the bugs as early as possible. More about this in a future podcast.

Anyhow, I grew to love software development in the 1970s. I loved the power of it, the art of it, and the fact that something I wrote could make a computer, which after all was a pretty dumb machine, do something useful.

GOING INTO EXILE, PLASMA … AND MY FIRST START-UP

My PhD at Wits was in the area of control engineering, but it required me to do lots of programming. I got to work in the new language C on a mini-computer (a PDP-11) running Unix. The PhD dealt with some interesting mathematical functions called “Walsh functions”. The maths was hard but the programming was the fun bit. I got really clever about programming complicated matrix stuff.
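
For the curious: Walsh functions are a family of square-wave-like functions that take only the values +1 and −1 and are mutually orthogonal. Here is a minimal sketch, in Python rather than the C I used on the PDP-11, generating them via the classic Hadamard matrix recursion (the function name and layout are mine):

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester's construction: repeatedly double H as [[H, H], [H, -H]].
    n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Each row of H is one Walsh function sampled at n points (in so-called
# Hadamard ordering); every row is orthogonal to every other row.
H = hadamard(8)
print(H)
print(H @ H.T)  # equals 8 times the identity matrix, confirming orthogonality
```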

In December 1979, with my PhD under my belt, I went into voluntary exile in the UK to avoid serving in the Apartheid army. I got a post-doc research job at UMIST (the University of Manchester Institute of Science and Technology) working with Dr Peter Wellstead in the Control Systems Centre. Peter and I decided that it was just too messy to write programs for matrix manipulation (the bread and butter of digital control systems, which were then all the rage). I decided to build, from scratch, a preprocessor to help with implementing matrix algebra. (Side bar: a preprocessor takes a program written in one language and converts it into a program in another language. In my case I invented a new language called PLASMA, and my preprocessor converted a PLASMA program into Pascal. PLASMA stood for Pascal Language Addition To Simplify Matrix Arithmetic.)

PLASMA gave the programmer new built-in types called “matrix” and “vector”. It made writing complex digital control programs really easy. Writing a preprocessor is much the same as writing a compiler … a very challenging task in the field of computer science. After six months of obsessive, hard (but fun) work PLASMA was tested and ready for use. My colleagues at UMIST loved it.
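
To give a feel for what built-in “matrix” and “vector” types buy the programmer, here is a rough modern analogue in Python; the class and the state-update example are my illustration, not PLASMA’s actual syntax:

```python
# A toy illustration (not PLASMA itself): once "matrix" is a first-class
# type with overloaded operators, control code reads like the algebra.
class Matrix:
    def __init__(self, rows):
        self.rows = [list(r) for r in rows]

    def __add__(self, other):
        return Matrix([[a + b for a, b in zip(ra, rb)]
                       for ra, rb in zip(self.rows, other.rows)])

    def __mul__(self, other):
        cols = list(zip(*other.rows))
        return Matrix([[sum(a * b for a, b in zip(row, col)) for col in cols]
                       for row in self.rows])

# A discrete-time state update, x(k+1) = A*x(k) + B*u(k) ... the bread and
# butter of digital control ... written directly as algebra:
A = Matrix([[1.0, 0.1], [0.0, 1.0]])
B = Matrix([[0.0], [0.1]])
x = Matrix([[0.0], [0.0]])
u = Matrix([[1.0]])
x = A * x + B * u
print(x.rows)   # [[0.0], [0.1]]
```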

I then made my first foray into the world of tech start-ups (although nobody called it that). Remember, it was the early 1980s. We still worked on mainframes and mini computers. Although the personal computer was just being invented, our lab at UMIST didn’t have one. We computer professionals saw these “home computers” as toys … not suited for the serious work we engaged in.

Peter, Denis Prager (one of my ex-Wits classmates who had also worked at UMIST but now worked for a company in London) and I registered a company and launched PLASMA as our first product. Like all start-ups we expected instant success because we had such a brilliant product! And … like many start-ups … we were very disappointed. We sold about 10 copies of PLASMA and then we each moved on to other projects.

WORKING WITH A DIGITAL TWIN – THE TREASURY MUDDLE

I moved to London to take up a postdoctoral post at Imperial College, part of the University of London. I joined the PROPE group under Professor John Westcott, one of the fathers of control theory. PROPE was a multidisciplinary research group applying optimal control theory to the field of economics; the name stands for Programme of Research into Optimal Policy Evaluation. Apart from Prof Westcott there were four of us in the team: Elias, a Greek economist; Berc, a control theorist; Robin, one of the very best software developers I’ve ever met; and me. I was the sort of putty between these brilliant individual tiles.

While I was in the group PROPE was investigating ways of using control methods, usually used to automate electro-mechanical systems, to develop rational economic policies. Consider the following scenario: the British Chancellor of the Exchequer (that is, the Minister of Finance) is making his budget speech in the House of Commons (parliament). He says that he is planning to reduce unemployment by x% and grow GDP by y%. He will do this by raising taxes and spending more on certain export incentives. How does he know this? In preparing his budget, a team of economists and other experts at Her Majesty’s Treasury simulated hundreds of scenarios using a large macroeconomic model running on a mainframe computer. In today’s age … the age of the 4th Industrial Revolution … we would call this a “digital twin”. In the early 1980s it was simply called The Treasury Model (or the “Treasury Muddle”, as some of my friends called it). It was massive, containing thousands of equations and variables, and it had access to huge amounts of data.

PROPE used the Treasury Model, and another huge multi-country econometric model developed by the OECD, to implement optimal feedback loops around these models. The Chancellor could simply specify his targets for the national economy, and our software would come back with a list of budgetary measures he would need to announce to achieve those targets. It sounds too good to be true … and it was too good to be true! One of the problems was that although the models we used were the best ones available, they were still filled with many simplifying assumptions and approximations. The other, more philosophical, problem was that unlike the model of an electro-mechanical system, these econometric models contain human behaviour within them. Human “agents” within the economy change their behaviour in response to government announcements. The models we used weren’t able to deal adequately with this, although we did venture into game theory to seek more accurate approaches.
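
To make the pattern concrete, here is a deliberately tiny sketch of the idea: wrap a feedback loop around a simulated model and let it find the policy setting that hits a target. The one-equation “economy” and all the numbers are invented for illustration; the real Treasury Model had thousands of equations:

```python
# A toy "economy": one invented equation linking a policy instrument
# (the tax rate) to an outcome (GDP growth, in %).
def model(tax_rate):
    return 3.5 - 8.0 * (tax_rate - 0.25)

target_growth = 2.5   # the Chancellor's announced target (%)
tax_rate = 0.25       # initial policy setting
gain = 0.02           # feedback gain: how hard to correct each round

# Simple proportional feedback around the model: simulate, compare the
# outcome with the target, nudge the instrument, repeat. Optimal control
# is far cleverer than this, but the closed-loop pattern is the same.
for _ in range(100):
    error = model(tax_rate) - target_growth
    tax_rate += gain * error

print(f"Tax rate needed to hit {target_growth}% growth: {tax_rate:.3f}")
```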

The work PROPE did was not merely academic. We worked with the policy team at the Treasury and with a parliamentary working group headed by the Labour MP Jeremy Bray. Our work often helped to inform economic debate both in parliament and outside it. This was the early 1980s, and Maggie Thatcher and Ronald Reagan were trying to convince the world that economic policy making was simple. All government had to do, they said, was control the money supply and set the free market free.

My work at PROPE included a huge software maintenance task. I converted the poorly documented and dauntingly massive Treasury Model from FORTRAN IV to FORTRAN 77 so that it could be moved off the mainframe and run on a more user-friendly workstation. If you really want to learn about software engineering, try doing this kind of maintenance work on a large legacy program! My conversion of the Treasury Model went off successfully. For years afterwards I would listen to the UK budget speech with the warm satisfaction that my version of the Treasury Model had been used to develop the detailed assumptions. For all I know it might still be in use today, helping civil servants develop scenarios for Brexit. Who knows?

ROBOTICS, AI, OO AND THE “SMART PRODUCT”

In 1985 I left academia and joined a corporate research lab. I went to work at the GEC-Marconi Research Centre in Chelmsford, Essex, where I joined the Industrial Automation Division. My lab did work in robotics … in particular, the flexible assembly of small batches of products. We brought together a robot and some 1980s artificial intelligence. The robot was a very cleverly designed assembly robot called the Tetrabot … the lab’s own invention.

At that time the European Community (the forerunner of today’s European Union) was funding collaborative research under a programme called ESPRIT. Our lab had secured funding as part of a partnership which included research teams from Spain, France, the Netherlands and West Germany (then a separate country). While working on this project I was introduced to a wonderful new paradigm for software development called object orientation, or OO.

I became our ESPRIT team’s OO expert. I developed what I then thought of as a very, very elegant design for our project’s flexible assembly software based on OO principles. An interesting footnote about this piece of work is that about a year ago … in 2018 … I was reading an article on Industrie 4.0, the German equivalent of the 4th Industrial Revolution. It was written by a group of German academics. They trace the origins of Industrie 4.0 to the idea of “smart products”. These products, they say, “know their production history, their current and target state, and actively steer themselves through the production process by instructing machines to perform the required manufacturing tasks.” In other words the smart product knows how to build itself. This is exactly the concept I invented in 1986 when I applied OO design principles to our ESPRIT project’s flexible assembly software. “Gosh,” I thought when I read that article, “did I invent Industrie 4.0 and 4IR?” Maybe I did? Pity I didn’t patent that idea.
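
To show the flavour of that design, here is a toy sketch of a “smart product” in OO terms: the product object carries its own plan and production history, and instructs the machines itself. The class and method names are my invention for illustration, not the actual ESPRIT code:

```python
class Machine:
    """A workstation on the shop floor that performs tasks on request."""
    def __init__(self, name):
        self.name = name

    def perform(self, task, product):
        print(f"{self.name}: {task} ({product.serial})")
        product.history.append((self.name, task))

class SmartProduct:
    """The product itself is an object: it knows its target state
    (the plan), its production history, and how to steer its own build."""
    def __init__(self, serial, plan):
        self.serial = serial
        self.plan = plan        # ordered (station, task) pairs
        self.history = []       # what has been done to it so far

    def build(self, stations):
        # Resume from wherever we are and instruct each machine in turn.
        for station, task in self.plan[len(self.history):]:
            stations[station].perform(task, self)

stations = {"press": Machine("press"), "welder": Machine("welder")}
widget = SmartProduct("W-001", [("press", "form casing"), ("welder", "seal seam")])
widget.build(stations)
```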

While working at the GEC-Marconi labs I got the chance to work on a state-of-the-art and extremely expensive special-purpose AI workstation called the Symbolics 3600, using the language Lisp. I also taught myself Smalltalk, one of the first genuine OO languages. We were right at the forefront of computing as it was in the late 1980s.

RETURN TO SOUTH AFRICA

The other part of the cutting edge I touched at this time was the first Apple Macintosh personal computer. We bought one for the lab and used it for preparing beautiful reports. At this time I also bought my first home computer … an IBM PC clone running an early version of MS-DOS and the WordPerfect word processing package. I must be honest that I still didn’t take personal computing all that seriously. The stuff I did at work on the big powerful machines was where serious computing happened.

At home I did a lot of personal programming in Turbo Pascal. That programming was part of a very important chapter in my life as a software guy, and as a South African living in exile in London while Apartheid entered its darkest days. My story about this chapter warrants its own podcast episode.

In late 1989 I returned to South Africa and went back to Wits University as a senior lecturer in the Department of Electrical Engineering. At first I taught digital electronics and microprocessor engineering. After a year or two I was put in charge of teaching a first course in programming to all second-year engineering students. More about this, and what I did to change the SA software industry, in the next two episodes of this podcast.

