top of page

Do We Need a Resilient Agile AI Governance System?



AI systems affect people, societies, industries, governments, and the planet. Despite having 800 AI policy initiatives it will never be enough. The economic potential of AI (US$2 trillion (Statistica) to US$15.7 trillion by 2030 (PwC*), and the association with national competitiveness are pushing the industry ahead, despite the red flags being raised by many stakeholders. The scale of adoption is too great to put the brakes on AI development. Also robust regulations (like the GDPR or the AI Act) take time to design and implement which makes them obsolete very quickly. Consider the EU AI Act and classifications at the time of Chat GPT. Yes, as a chatbot it was initially classified as an AI of the low risk category – this has been revised to high risk. Let’s go through seven reasons on why we need AI resilient and agile governance systems.



1. Lack of Common Sense Digital Skills

We collectively lack common sense skills on digital spaces. These skills are far more basic digital skills focusing on privacy, rights, safety and global citizenship. With the internet we have bow connected 60% of the world’s population. Hence, it is an exciting time when I connect to anyone and anyone can connect to me. Did you just receive a message that you won a million dollars? Did a rich (unknown) billionaire contact you? Maybe your bank detected a fraud and wants you to press the link and reset your password? Or even worse, you assume the video game has cartoons and so it’s OK for your child to play? Or maybe you just bought software for education (was it designed by an education, a sociologist and a psychology and behavioral scientist?)


Think of Twitter’s Blue tick strategy and how quickly it went wrong? Common sense in a digital world also requires diversity of thought – the ability to ask “what can go wrong?” and the ability to listen. No one person can have the answer when designing these AI systems. We need to get this stage right as we still need to onboard another 40% of the world and the soon to be born!


2. Hardware Backdoors

Hardware backdoors are code in the hardware or semiconductor chip that can allow a person or entity to bypass the normal security systems. This is feasible with printed circuit boards, and they are invisible to the eye, as a research paper proved. Most of the semi-conductor industry logistics is global. Semiconductor production has been an awesome example of technological revolution! The current transistors in a microprocessor are 1500 times smaller than the ones in 1945 and the transistor are 10,000 times smaller than a single human hair! You cannot visually inspect this!


Three countries dominate the manufacturing of semiconductors: China, South Korea and Taiwan (87%). With the CHIPS act in USA, new centers are being developed in USA, Thailand, Vietnam, India, and Mexico. Silicon chips are being embedded in everything from smart toasters, cars, mobile and missiles, basically over 169 industries, which raises interesting questions on security. A recent study on VR headsets found data leakage from the accelerometer or gyroscope (hardware). Many of the backdoors are design flaws or purposefully made for future updates (more here). The reason for these backdoors is testing and for updates, as hardware is too expensive to replace). How can you control for this?


3. Software Bugs

Bugs are here to stay. A software bug is a mistake in software code, a misalignment between codes where APIs (more below) need to be fixed, or issues with language compatibility. Many of the older computer systems like banking, governments, health and defense are built on older software languages. These bugs are often discovered when the AI system has been adopted by large number of people or when updates are not fast enough. You see this when you have to restart your computer or mobile. Because of the complexity of AI systems (an autonomous car has over 100 million lines of code) this situation will persist. A faster way to fix everything is to create a patch (not go through the millions of lines of code) and outsource it (oops there are challenges with this too).


A patch is not a software update, as it is more targeted and focused on fixing an issue. A key challenge is how quickly a vulnerability can be found? It takes 215 days to find a vulnerability. When you think of third party vendors, browsers, platform updates, this could add up to 20,000 vulnerabilities a year – which is a lot for any IT team. Think about how many times your mobile, the apps, the operating system in your computer updated itself? Did the terms and conditions change (go back to point number 1).


Big companies are as vulnerable as small ones. The Facebook outage in 2021 was when an update went wrong and denied access to any of Facebook products, even within Facebook. The only way Facebook was able to resolve this problem was to physically reset the servers (their smart key cards did not work). You needed a smart human to be physically present.


4. Dependency on APIs

APIs are short programs that are used as virtual middlemen to connect different software platforms both inside and outside the organization. McKinsey estimates that most firms have approximately 12 APIs though maybe a 100 are required. APIs calls (request for information) drive 85% of web traffic! This can be a problem! When Elon Musk took over Twitter, he complained there were 1200 of them slowing the services. A senior director analyst at Gartner Inc said, Many API breaches have one thing in common: the breached organization didn’t know about their unsecured API until it was too late. The first step in API security is to discover the APIs the organization delivers or consumes from third parties.” This was the case with a data hack of 3.6 million recordsthat exploited a third-party management provider. Amazon Web Services has 400 different services, kept discrete and that have their own APIs. When they update their APIs, all other users of the service need to also update. If not, there is a security vulnerability.


5. Data brokers

New industries are developing and some are borderline shady. Have you head of Experian, Equifax, Acxiom, Oracle America, CoreLogic and Epsilon? Many Fortune 500s use them. They create personal profiles of individuals by collating multiple sources of data. Some they buy and others…I am not sure. For example, you downloaded a new app? Where did that data go when the app went bankrupt? You trusted Instagram – then it was bought by Facebook. Did you have a choice? What about the Starbuck’s app that wants access to your location – what does that mean? This data brokerage industry is worth US$ 200 billion and has 4000 players. A bill was introduced in the USA Senate in 2015 and still has not been passed. GDPR has some laws that require data brokers to get consent but do you read all terms and conditions? Some data brokers let you update your profile but had you heard of them before this article?


6. Stealth Systems

The ability for technology companies to deploy self-learning software systems using your data seems a worry. ChatGPT raised a lot of questions – what about IP? Isn’t anything we create copyrighted? Were we asked permission? There is an unfairness to these systems that harvest our experiences and skills and hijack these for an AI, where the humans were not adequately compensated (or the wrong ones were). Governments are lagging in protecting people over machines. When Uber deployed its platform, it was for the gig worker – but in London it competed with the London Black Cab driver. An Uber driver with the ownership of a mobile and driver’s license was able to compete with the London Black Cab drivers who had to memorize 25,000 streets and 20,000 landmarks across a 25-mile radius of Charing Cross and pass a test after 2-4 years. Of course technology is great – but what happens when it fails? Who would you like to drive you – the Uber driver or the Black Cab Driver? By the way, a study recently showed this ability to memorize streets has given these Black cab drivers “brain flexibility.” Are we losing valuable skills because of these stealth systems? More on this in my article here.


7. Choice (or lack of it)

More and more we seem to have little choice. This is becoming a systemic thing. When a school, a university, an office place or a government asks us to use a system (iPad, software, or hardware), can we say no? If we say no, are we discriminated?


With the internet of all things, choices also are removed so you could walk into a store and your face, behavioral mannerisms (way you walk) and voice may belong to them. It takes just one photo, 60 seconds and few dollars to create a deep fake. Are we ready?


What can we do?

We need to build resilient agile AI governance systems. A resilient system bounces back from shocks and the reality as seen above is we cannot mitigate for all shocks. We need to be agile in handling what happens. Here are guidelines for the way forward:


1. Educate

You need to educate yourself and your family and friends. If you are a technology person, join the dialogue and help educate others. If you are a decision maker, make ethical choices and be honest about the intent of these AI systems and their vulnerabilities. AI can do tremendous good but to do good we need to know how it can go wrong.


2. Fund research that is inter-disciplinary

AI systems are not just about technology. It will change the way we live, socialize and work. We need more research on baseline and the changes to ensure that this new society and industries are good for humanity and the planet. What worries me is the shift to applied research and the decreased spending on basic research. Further, there may be important skills we want to retain for the future which are becoming rapidly obsolete (see my article here on human versus AI intelligence)


3. Hack yourself

This is a good practice. Get employees and early users to test your systems before deploying. Chat GPT deployed to the public with no limits and we saw the scale of adoption. It should never have been deployed to millions of people as that scale of adoption makes it had to limit its usage. I like what Apple has done – it will take a year to come out with a new system that is tested rigorously. Hacking is what the USA government is doing to its big tech systems. Estonia as a country prepares itself differently though scenario based cyberattacks. This diversity of thought and robust testing will let you be more agile when things go wrong!


4. Human-in-the Loop

Technology is a tool. If it is a tool for humans, then a human must be in the loop. This is for

(1) decision making: Where is human decision making required and where does the AI take the decision for a human (and why?)

(2) purpose of AI system: How does the AIS benefit the human (individual and collective, before the company)

(3) responsibility: If the AI system fails, what is the chain of command and who (not what) takes responsibility?


If you think of the Facebook example, a human back-up is needed at all times. What is the minimum number of humans? So if a plane can fly itself – should you have a tech savvy pilot and an old school pilot like Captain Sully, the pilot who landed on the Hudson river and saved the all lives, who understands his machine? But then, is the machine built for a human override (think of the MCAS Boeing example). These questions help you design better AI systems.


This human-in-the-loop focus depends on your values. Lots has been written on self-governance, but the challenge is to follow it through despite the factors that push and pull you to do otherwise. Be ruthless in transparency of your motives.


5. De-escalate the competition

AI systems are like a nuclear cold war but much more global with less oversight on consequences. It seems to be escalating beyond the point of no return. The AI agenda is dominated by a few countries – some because of manufacturing, some because of a dwindling population, others for security reasons, and still others for economic reasons. Since the AI impact is global, there is an urgency to involve more players and maybe de-escalate the competitiveness and the impact of large integrated platforms. The challenge for all decision makers (manager, investors, founders and policy makers) is that we work in ethical grey spaces (see here). So taking time to think of motives and accidental spillovers is a good practice. Once an AI System in out there, it can be hacked, reverse engineered, interfered with and the good and the unintended, will change the world for ever. We cannot go back and have a do-over.


Want to know more?

Read my new book: AI Enabled Business (coming soon)

My previous book (a condensed version) is here: AI Smart Kit also available from the publishers: InfoAge

Visit my blog: www.melodena.com

Picture on top: Dan Zel {Tree of Life}

Recent Posts

See All

New AI proposal: INDIA

#EU has the #AIAct (first in the world). #USA has recommended the hiring of a Chief AI Officer for every Federal Agency. But, right now, I love the new India CAS framework proposal for AI. Most import

bottom of page