
AI Risks versus Trade-offs versus Guardrails



These terms do not mean the same thing, though we need to look at all three. Risks are probabilities of events that impact your future outcomes. Unfortunately, a probability means we have uncertainty about the event, and we must have a tolerance for its outcome.


I personally prefer AI trade-offs, as the term is more transparent about what you are willing to give up to achieve something: for example, giving up ethics for reduced liability, or leaving behind outliers in favour of the majority.


Guardrails need an understanding of both risks and trade-offs. Ideally, you build guardrails to maximize benefit beyond financial metrics, looking at people and the planet as well.


Two newly popular terms are AI safety and AI security. Great words, but they do not mean the same thing as the terms listed above. Often when we talk about AI safety, we are hoping the interpretation is HUMAN safety, but what does that mean - harms? Are harms things like mental harms and social harms, or just physical harms? It is a very ambiguous word.


AI security is the design of protections for the AI itself, and then for its users and stakeholders if the AI is harmed. The question here is whether users come before the other stakeholders. So AI security's focus is AI first (though we often conflate it with national interests, or with shareholders as the key stakeholders).




 
 
 
