
Artificial Intelligence (AI): A Primer



The quest to create machines that mimic humans has been going on for a long time: sometimes out of academic curiosity, sometimes out of boredom, sometimes to make money or to benefit humankind, and other times to replace expensive or missing labor.


Over the centuries this quest has been funded by passionate individuals, research institutes, governments, and now, increasingly, the private sector. Today, huge sums are being spent by private companies and by governments (much of it in defense). Academia is no longer the key home of AI, because the cost of building these new generations of AI models is enormous and measuring their impact (especially their societal impact) takes time. Does this matter? Yes, because results published by industry are often not peer-reviewed, and this low transparency makes the technology difficult to regulate or govern effectively. We also rely on independent thinkers in academia, whereas in the private sector, where profitability depends on perception and affects reputation and share prices, there is pressure to curtail critical thinking about these frontier technologies!


What is AI?


AI, very simply, is:
1. Data
2. Hardware
3. Software

Starting with the last point: software. Software is a series of instructions that tells the machine what to do with the data you have provided it. To speak to the machine, we need to write instructions and store data in a format that the machine can understand. This is binary code: a series of 0s and 1s (basically on-off switches). You can trace this development to Germany and Gottfried Leibniz, who in 1689 was inspired by an ancient Chinese text, the I Ching; it can also be traced to the work of the ancient Indian scholar Pingala. While the theory existed, it took an MIT master's student, Claude Shannon, in 1937, to realize that it could be implemented in circuits (his thesis was titled “A Symbolic Analysis of Relay and Switching Circuits”). The thesis grew out of his experience operating his professor's mechanical computing device, the Differential Analyser. Out of that cumbersome experience, he was inspired to find an alternative to the traditional gears and shafts. His idea was soon incorporated into telephone switching systems.
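To make the “on-off switches” concrete, here is a minimal Python sketch (purely illustrative, not tied to any particular AI system) showing how a short piece of text ends up as numbers and then as bits:

    # Everything a machine stores is ultimately a pattern of 0s and 1s.
    text = "AI"

    # Each character is stored as a number (its Unicode/ASCII code point)...
    codes = [ord(ch) for ch in text]          # [65, 73]

    # ...and each number is held in memory as a string of bits.
    bits = [format(c, "08b") for c in codes]  # ['01000001', '01001001']

    print(codes)  # [65, 73]
    print(bits)   # ['01000001', '01001001']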


So, while the language we use to program a computer changes (there are more than 675 computer languages), all of these languages still fundamentally need to “speak” in binary code, which is manipulated using Boolean algebra. The computer's CPU still responds to these on-and-off switches, the binary language. (Note: quantum computers are different; a qubit can represent 0 and 1 at the same time rather than acting as a simple on-off switch.) The shortcut the industry takes is to build software languages on top of other software languages, and this can become a future problem, as fewer people know the older programming languages underneath. You can see this in the failures of software systems in airlines, airports, banking, and government, as examples.
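As a toy illustration of Shannon's insight that Boolean algebra can be wired into circuits, the Python sketch below combines two Boolean “switches” (AND and XOR) into a half adder, one of the building blocks a CPU uses to add numbers. The function names and layout are illustrative choices, not how a real chip is designed:

    # Boolean "switches" composed into a half adder, the kind of building
    # block Shannon showed could be wired from relays.
    def AND(a, b): return a & b
    def XOR(a, b): return a ^ b

    def half_adder(a, b):
        # Add two 1-bit inputs; return (sum_bit, carry_bit).
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum={s} carry={c}")

Running it prints the familiar addition table for single bits: 1 + 1 gives sum 0 and carry 1, which is exactly the on-off behaviour a CPU builds everything else on top of.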



In 1958, the first integrated circuit was developed at Texas Instruments (USA). This was a single circuit on a germanium chip measuring 11.1 mm by 1.6 mm. By 1970 we could put a couple of thousand transistors on a chip; by 2023 we were talking of 100 billion; and by 2030, Intel says, 1 trillion, with each transistor now measured in nanometers! An iPhone has about 15.8 billion transistors on a 5-nanometer chip! Computing power (the volume, or number of simultaneous tasks) keeps increasing while the machines we need keep getting smaller. At the same time, the infrastructure hardware is also improving: we now have 5G and soon 6G broadband, which means sending more data, faster, and telecommunication satellites are connecting us with better coverage. So the speed of, and access to, computing is increasing. You also need to store vast volumes of data, and this is often done in data centers; the USA, Germany, the UK, China, and then Canada lead in the number of data centers! Of course, you also need energy infrastructure, because these hardware systems are power-hungry (pun intended). So there is an environmental cost to AI that is often not addressed.
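As a rough back-of-the-envelope check on these figures (using the round numbers above, so only an approximation), this short Python sketch works out the doubling time they imply:

    import math

    # The article's round numbers, not exact industry data.
    t0, n0 = 1970, 2_000              # ~a couple of thousand transistors per chip
    t1, n1 = 2023, 100_000_000_000    # ~100 billion transistors per chip

    growth = n1 / n0                          # total growth factor (~5e7)
    doublings = math.log2(growth)             # how many times the count doubled (~25.6)
    years_per_doubling = (t1 - t0) / doublings

    print(f"growth factor: {growth:.1e}")
    print(f"doublings: {doublings:.1f}")
    print(f"years per doubling: {years_per_doubling:.1f}")  # ~2.1 years

That works out to a doubling roughly every two years, which is the trend popularly known as Moore's law.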


Please read IEEE SA “Planet 2030: Strong Sustainability by Design.” We are looking for your feedback!


What about the data that fuels these machines? Most of the data we use for these machines is explicit: what we write and observe. This can be text, images, audio, and video. Video traffic is 70% of all network traffic! Are you posting videos? Then whose data is being captured? Most AI machines are trained in narrow domains (text, audio, etc.), which means they have not reached human-level intelligence: humans can exhibit multiple types of intelligence (moving, talking, writing, seeing, and thinking) simultaneously, and with low power consumption. Training data sets can be immense; according to OurWorldInData, PaLM 2 used 2.7 trillion data points! Most of the data we have (around 90%) was produced in the last two years (thanks to things like social media, cameras, and smart-city and smart-product IoT devices), yet only 2% of data is saved and less than 10% is unique! So data is useless unless you format it and make sense of it. While current generative AI models can use large dumps of data, they still need to be trained, and the question becomes: at whose cost? The human workers who are often paid below industry-average wages? The content moderators who need to manage the fallout? The miners who search for rare earth metals, not always in the best circumstances? The recyclers who strip e-waste for precious metals in unsafe conditions? The employees whose data is being used to make them redundant?
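To show why raw data must be formatted before a model can learn from it, here is a hedged Python sketch of a toy tokenizer. Real systems use subword tokenizers (such as BPE or SentencePiece); this illustrative version simply splits sentences into words and assigns each word an integer ID, which is the kind of numeric “data point” a model actually sees:

    # Models see numbers (token IDs), not text; "formatting" the data means
    # turning raw sentences into those numbers. Toy example only.
    corpus = [
        "AI is a tool",
        "a tool is a choice",
    ]

    # Build a tiny vocabulary: every distinct word gets an integer ID.
    vocab = {}
    for line in corpus:
        for word in line.lower().split():
            vocab.setdefault(word, len(vocab))

    # Encode each sentence as a list of token IDs.
    encoded = [[vocab[w] for w in line.lower().split()] for line in corpus]

    print(vocab)    # {'ai': 0, 'is': 1, 'a': 2, 'tool': 3, 'choice': 4}
    print(encoded)  # [[0, 1, 2, 3], [2, 3, 1, 2, 4]]

Scale that idea up to trillions of tokens, plus the cleaning, de-duplication, and labelling around it, and you get a sense of where the hidden human and environmental costs come in.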


Of course, if we choose to design AI to save jobs and give employees more free time at the same cost, to pay for safe excavation and recycling, to protect the planet, and to teach the public about digital safety and global citizenship behaviour, perhaps we will have AI that benefits all humankind. But AI is a tool, and the decision on how to use it is a choice we humans collectively need to make.




