Responsible AI and More: Cybersecurity, Crisis Management and AI Literacy
The State of Lebanon recently reported that its internet provider, Ogero, was under a 10-day cyber attack. A digital expert noted that Lebanon's cyber defense systems have not been updated since 2021 due to a lack of funding. When a government is impacted, the effects cascade into all other systems; for example, planes in Lebanon could not use regular flight-landing technology due to GPS jamming. Sometimes state actors try to access citizen, research, or company data, which means governments need to be aware of the risks posed by AI-enabled organizations that DO NOT have a plan or the funds to manage AI responsibly. This is not a developing-country syndrome. Consider the following examples: the attack affecting the UK's NHS in 2022; in the EU, the ransomware attack on Ireland's Health Service in 2021; the 2023 attack that disrupted government services in over 70 German municipalities; or the USA's SolarWinds incident in 2020, when hackers gained access to government sites through a compromised vendor update. Though governments and Big Tech lead in AI investment (directly or indirectly), we cannot neglect SMEs. Governments purchase services from AI vendors, many of which are SMEs; it is estimated that roughly 22% of government contracts in places like Dubai, the UK, and the USA go to SMEs.
SMEs typically invest 10-11% of their IT budget in cybersecurity. Yet the more you depend on AI (and take the human out of the loop), the more unprepared an organization is for a cyberattack or a technology failure. In Australia (as in other countries), 60% of small businesses do not survive a cyber attack. Part of the reason is funding, but it is also because AI literacy and knowledge are not treated as competencies within an organization (and no – ChatGPT does not have all the answers).
Often, cyber failures are not malicious external attacks; they are frequently caused by human error due to negligence, lack of knowledge and foresight, and poor organizational training on how to handle sensitive information. For example, UK airports were disrupted when a flight plan with duplicate codes for different beacons was registered on NATS, triggering an automatic safety measure and a chain reaction that affected around 700,000 passengers; the system needed a manual hardware reboot. Or take the example of Meta, when Facebook, Instagram, WhatsApp, and Oculus were down for 6 hours in 2021 because of a faulty BGP configuration – a problem that also required a manual reset (see story below).
Source: AI Enabled Business: A Smart Decision Kit (Melodena Stephens et al., 2024; page 203)
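To make this concrete, here is a minimal, hypothetical Python sketch of the kind of guardrail that keeps a human in the loop for configuration changes: the automation validates the change (including a duplicate-identifier check, echoing the flight-plan case) and refuses to apply it without explicit human sign-off and a rollback path. All names and checks are illustrative; this is not how NATS or Meta actually work.

```python
# Hypothetical sketch only: a change-approval gate that keeps a human in the loop
# before an automated configuration change is applied. Names and checks are illustrative.

from dataclasses import dataclass

@dataclass
class ConfigChange:
    change_id: str
    description: str
    payload: dict
    rollback: dict  # last known-good configuration to restore if the change fails

def validate(change: ConfigChange) -> list:
    """Return a list of problems; an empty list means the change passes basic checks."""
    problems = []
    if not change.rollback:
        problems.append("no rollback configuration attached")
    # Domain-style rule: reject duplicate identifiers (echoing the duplicate flight-plan case).
    ids = change.payload.get("route_ids", [])
    if len(ids) != len(set(ids)):
        problems.append("duplicate identifiers in payload")
    return problems

def apply_change(change: ConfigChange, human_approved: bool) -> str:
    problems = validate(change)
    if problems:
        return f"BLOCKED {change.change_id}: " + "; ".join(problems)
    if not human_approved:
        # Automation proposes, a named person approves -- the human stays accountable.
        return f"PENDING {change.change_id}: awaiting human sign-off"
    return f"APPLIED {change.change_id} (rollback retained)"

if __name__ == "__main__":
    change = ConfigChange(
        change_id="CHG-001",
        description="update routing table",
        payload={"route_ids": ["A", "B", "B"]},  # deliberate duplicate to trip the check
        rollback={"route_ids": ["A", "B"]},
    )
    print(apply_change(change, human_approved=False))
```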
What is this investment for?
·     Govern (ethically aligned AI systems – most boards fail in this aspect; see the Harvard study – and AI as a topic needs to enter ESG ratings)
·     Identify (current and potential threats, risks, proactive crisis management plans, and AI trade-offs – see the example of the metaverse here)
·     Protect (look at people first, then data, processes, operations, stakeholders, and reputation; and distinguish between zero-trust and zero-time security)
·     Detect (possible areas of vulnerability, attacks as soon as they happen, stakeholder impact – for example, are you using red-teaming?)
·     Respond (not just from an IT operations point of view, but to the organization, stakeholders, and other factors like the environment – AS SOON AS POSSIBLE)
·     Recover (learn and build better systems and governance structures) – 40% of companies don't have a recovery plan (see the self-assessment sketch after this list).
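One way to make these six areas actionable at board level is a simple self-assessment. The sketch below is hypothetical Python with illustrative questions (not a formal maturity model); it simply flags which areas have no answer, and the usage example mirrors the 40% of companies with no recovery plan.

```python
# Hypothetical sketch only: a simple board-level self-assessment of the six areas above.
# Questions and scoring are illustrative, not a formal maturity model.

CHECKLIST = {
    "Govern":   "Is there board-level ownership of AI ethics, reported alongside ESG?",
    "Identify": "Are AI-specific threats, trade-offs, and crisis scenarios documented?",
    "Protect":  "Are people, data, processes, and reputation covered by explicit controls?",
    "Detect":   "Do we red-team our AI systems and monitor for attacks as they happen?",
    "Respond":  "Is there a response plan covering stakeholders, not just IT operations?",
    "Recover":  "Do we have a tested recovery plan and a process to learn from incidents?",
}

def assess(answers):
    """Print how many areas are covered and list the gaps."""
    gaps = [area for area in CHECKLIST if not answers.get(area, False)]
    print(f"Covered: {len(CHECKLIST) - len(gaps)}/{len(CHECKLIST)} areas")
    for area in gaps:
        print(f"GAP - {area}: {CHECKLIST[area]}")

if __name__ == "__main__":
    # Example: a company with everything in place except a recovery plan,
    # like the 40% of companies cited above.
    assess({"Govern": True, "Identify": True, "Protect": True,
            "Detect": True, "Respond": True, "Recover": False})
```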
Response time is critical; the sooner you find the problem, the more secure your systems will be. The faster you tell your impacted stakeholders, the more measures can be taken to keep sensitive data safe. Uber had an issue: the driver's license numbers of 600,000 drivers and the personal data of 57 million users were compromised via the cloud service where the data was stored! Uber took a year to announce this (even paying the hackers a $100,000 ransom recorded as a bug bounty), and the breach ultimately cost the company $148 million in settlements.
The problem when you outsource is that you cannot carry out many of these steps effectively and transparently when it comes to security! IBM's Cost of a Data Breach report found that the average global cost of a data breach was $4.45 million in 2023. Further, hackers often exploit freely available software, according to the ENISA Threat Landscape 2023 report (by the way, are you using GitHub Copilot?). You might think the simple solution is to hire more cybersecurity professionals – it may not be enough.
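Because attackers often go after freely available components, even a crude dependency check is better than none. Below is a hypothetical Python sketch that compares pinned package versions against an advisory list; the package names and advisory data are invented, and in practice you would feed this from a real vulnerability feed or an off-the-shelf audit tool.

```python
# Hypothetical sketch only: compare pinned dependencies against an advisory list.
# Package names and advisory data are invented; in practice this would come from a
# real vulnerability feed or an off-the-shelf audit tool.

# Pinned dependencies as (name, version) pairs, e.g. parsed from a requirements file.
DEPENDENCIES = [("examplelib", "1.2.0"), ("other-lib", "0.9.1")]

# Illustrative advisory list: package name -> versions with known vulnerabilities.
ADVISORIES = {"examplelib": {"1.1.0", "1.2.0"}}

def audit(dependencies, advisories):
    """Return a list of findings for dependencies that match a known advisory."""
    findings = []
    for name, version in dependencies:
        if version in advisories.get(name, set()):
            findings.append(f"{name}=={version} matches a known advisory")
    return findings

if __name__ == "__main__":
    for finding in audit(DEPENDENCIES, ADVISORIES):
        print("VULNERABLE:", finding)
```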
Companies are also often unprepared for a crisis. In the example of NATS, the scenario that led to the AI malfunction was exceptional, as the organization stated: "This scenario had never been encountered before, with the system having previously processed more than 15 million flight plans over the five years it has been in service." So you need a backup plan involving human accountability – not a default to technology.
Take the example of the chatbots that seem to be invading our AI systems. Proactive crisis management could limit unpreparedness by acknowledging a simple fact: NO ONE WANTS TO SPEAK TO A CHATBOT during a crisis. But if you have removed the humans (as an economic measure), you will not have the capacity when you need it most, affecting your reputation. Chatbots themselves are not reliable, and research shows how chatbots can be used to prey on the vulnerable.
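If you do deploy chatbots, build the hand-off to humans in from the start. The sketch below is a hypothetical escalation rule: crisis-related keywords, or too many unresolved bot turns, route the conversation to a human agent – which only works if that human capacity still exists. The keywords and thresholds are illustrative.

```python
# Hypothetical sketch only: escalate crisis-related or stuck conversations to a human
# instead of letting the chatbot keep answering. Keywords and thresholds are illustrative.

CRISIS_KEYWORDS = {"emergency", "fraud", "data breach", "complaint", "urgent"}
MAX_BOT_TURNS = 3  # after this many unresolved bot replies, escalate regardless

def should_escalate(message, unresolved_bot_turns):
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return True
    return unresolved_bot_turns >= MAX_BOT_TURNS

def route(message, unresolved_bot_turns):
    # Escalation only helps if staffed human capacity actually exists.
    if should_escalate(message, unresolved_bot_turns):
        return "HUMAN_AGENT"
    return "CHATBOT"

if __name__ == "__main__":
    print(route("I think there has been a data breach on my account", 0))  # HUMAN_AGENT
    print(route("What are your opening hours?", 1))                        # CHATBOT
```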
If you are using AI, the leadership and board responsibility is HUGE. What are you doing about it?
1. Start with AI Literacy; this differs from Digital Skills and is a central tenet of the EU AI Act.
2. Make sure that you have money allocated for cybersecurity and crisis management when you plan AI investments.
3. Know the difference between risks and trade-offs across the value chain.
4. Ensure you have enough human capacity if AI fails (it is a machine; it will fail, and as the examples above highlight, it is difficult to predict when and how).
5. Start reporting AI ethics and governance standards in your ESG reporting.