Is AI Intelligent?
This article is not really about the debate on whether AI will reach general intelligence - it is about the poor awareness of what AI is, and even our ignorance of what human intelligence is. AI is a tool and, like any other tool, it needs responsible and supervised usage. Where AI is different is its impact (good and bad) and the level of control we have over it as it gets embedded into our daily life and the choices we face (like: is this site selling my data?).
AI is a combination of hardware, software, and data, and it is shaped by human ingenuity. If we go back to the term coined in 1955, the problem the small group of researchers attempted to address was whether machines could be trained to be intelligent: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
The conference did not end well, but it further fuelled the promise of AI, which has historically been driven by defence money. The problem is that you first need to know what the different types of human intelligence are before you can train machines to replicate them. Human intelligence cannot be precisely defined yet!
Machines are not good at doing different types of things at the same time - hence they can recognise pictures (with human training) and we say they can "see". They can spew out a sentence in response to a question by applying statistics to huge amounts of already-tagged data, and we say they "converse". So-called "smart" machines like an autonomous car can have 150 Electronic Control Units (ECUs) inside them (and a huge carbon footprint because of the data they collect). One autonomous car is equivalent to 2,600+ internet users - you do the math on sustainability (and this does not even account for e-waste).
Humans think differently: we have empathy, situational awareness, and we can do so much more. My brain does multiple things at one time and auto-regulates tasks - eating, breathing, digestion, circulation, healing my body, preventing infections - and every cell in my body stores more data than can fit in my computer (see this incredible report)! This incredible body (it is not a machine) does all this with a fraction of the power your computer uses. Machines are faster at some things than humans are, and we are not in competition. If a machine is to serve a human, then we need to complement each other before it can augment us.
The so-called advanced AI machine needs many other machines and programs behind the scenes to work. You can't see them, but they are there - from the data cables under the sea, to the energy plants and cooling plants that manage the "cloud" servers, to the huge amounts of data changing hands that blur the lines of privacy, to the hardware components that criss-cross the world.
AI systems convincingly mimic human intelligence, but they are NOT intelligent. They give us curated responses to data (thanks to the algorithm) and ideally should help us make decisions. Here are some questions I would like to ask:
The Delegation Question: What decisions should we remain responsible for, what should we delegate to a machine, and why?
The Meaningfulness Question: What decisions are meaningful for the human? I am not sure just approving a machine decision adds value to my life.
The Physical Interaction Question: If humans are social beings and we are facing an epidemic of loneliness and trust issues, how can we use AI to bring the community physically together to interact? What requires physical presence and should be kept rather than removed (neighbourhood stores, office centres)?
The Knowledge Question: If most knowledge is tacit (in people's heads), how do we encourage sharing of this information? The data we have is often only a small curation of people's wisdom and experience (and no, brain-computer interfaces will not solve that yet, and they raise many ethical questions).
The Accountability Priority Question: How can we hold businesses and governments accountable to people in this order: employees first, then customers, then shareholders and investors (we seem to have got the order wrong)? ESG metrics should ask: how many people did you fire, and why? Each person fired is a person with a family, responsibilities, and obligations - think of the impact when you are firing them not for poor performance but to beef up your Q1 report, because the AI technologies you invested in turned out to be more expensive than you thought!
The Learning Experience Question: By using AI machines, am I also blunting the human capacity to learn? Remember, we learn experientially - and sometimes meaningful experience takes time; life takes time.
The Problem Identification Question: Sometimes we use AI because the system is broken. Emergency doctors work long shifts, so let's use AI - but the real question is why they can't work shorter shifts, and what changes we need in our regulations and education curricula. Too often we intuitively solve the wrong problem.
If we want to make machines "more intelligent", then humans need to get more intelligent about the problems we are facing and the solutions required. This means creating an answer that works best for the person (the individual - and no, I do not mean the Board, the Senior Manager, or the Investors). It means looking at the community: if people prefer talking to people, why are there so many chatbots?
If we are worried about privacy, why are you recording customer conversations? How are you using this data, where is it stored, and who deletes it?
How many third-party vendors are you using who have access to my data, and what data do they have access to?
How do these AI systems change human brains (especially children's), and what does that mean for humanity?
Let's start a discussion - reach out if you want to know more.
More on www.melodena.com