There have been some worrying trends, and I am not sure where to get answers. We need @DylanCurran to help us better reflect on what is going on!
Excitement versus Children's Data
Snapchat released an AI bot that kids are treating like a virtual friend. Deleting it and ensuring privacy requires you to clear data from Conversations, Contact Data, Lenses, Photoshoot Snaps, Search History, Scan History, Sticker Searches, Autofill, Shopping History, My Cameo Selfie, Top Locations, Cache and more. This update caught most parents by surprise, as it was rolled out automatically! A parent also recalled a conversation with her son: "Mom, why are you scared of new technology?" he asked. As if the answer were so simple. It is not about being scared, but about knowing that in the future, employers or universities may profile a child, and the data being collected now is not necessarily protected.
We all want convenience. But at what cost? Microsoft uses OpenAI with Bing and Copilot, which is integrated into Microsoft 365 (I am not sure what the privacy options are here, but reading the fine print, like Alexa, it HAS ALL ACCESS TO DATA). By the way, this Copilot focus on financial data was interesting.
While I love the fact that they are vetting third-party suppliers who access the data (by the way, they use Google Cloud) - is it enough? The integration with other apps and functions means more potential for data leaks.
A few months ago, I was really excited about Microsoft's new Ethics Principles; then in March they laid off their entire ethics and society team. This is significant because, according to their Ethics document, there were grey areas that needed to be escalated to the ethics team and other experts (legal, technical, etc.), who would jointly advise on how to navigate complicated spaces. What happens now?
How can we make privacy by default easier and the abuse of our children's data harder? Too many times, these updates, as exciting as they are, take away our choice.
If IBM and SAP (both very profitable) are cutting 1.5%-2.5% of their current workforce and replacing those roles with AI, what are the spillover effects on governments, which need to manage the economy, unemployment, retraining, and pension funds? Microsoft publishes transparency reports - they need to add whose jobs will become obsolete, as it affects the economy and people's lives.
IBM's pillars for AI ethics are explainability, fairness, robustness, transparency and privacy. Do read their page! My worry is that tech firms are handpicking the issues they want to focus on - data, carbon offsetting, etc. For example, fairness in AI is often framed around bias, but if you replace a human's job with AI - is that fair?
SAP's AI ethics principles say in their first principle that they uphold human rights (read their statement), yet they skip the "right to work" part of the explanation.
The tech layoffs span a majority of firms, so I have just highlighted a few. By May 2023, layoffs had already surpassed 2022's total for global tech. A total of 637 tech companies (tech's biggest and brightest) laid off 186,020 employees, according to Morningstar.com. Meta was pushing efficiency, Amazon and EA were cutting costs, Twitter was chasing profitability, Google cited inflation - the list goes on.
What is the point of filing ESG reports about the environment and remote communities if you cannot be responsible toward your closest stakeholders (your workers, suppliers and customers - in that order)? While shareholders and the bottom line are important, at what point should companies stick to their values and their purpose (if they know what that is)?
AI is a tool. A good friend once said:
"A fool with a tool is still a fool."
If we do not know how to use AI wisely, we may be using the tool wrongly or, worse, setting a bad precedent. With Stealth AI, we don't know what future we are moving towards, which is worrisome.