Every organization should have a baseline AI ethics code and processes in place to identify, assess and mitigate the potential harms of AI use, procurement, development and deployment
Before implementing AI in an organization, AI governance professionals must understand the potential reputational, cultural, economic, acceleration, legal and regulatory risks and harms
Machine learning and AI pose risks already well understood in existing sectors and practices, but those risks can be exacerbated by the scale, scope and speed at which ML and AI systems process data
Because ML and AI systems continue to learn and evolve, it can be difficult to anticipate what form risks may take, particularly novel risks
Edge cases, meaning any data outside the boundaries of the training dataset, can produce errors; common examples include data that is incorrect, duplicative or unnecessary
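To make the edge-case point concrete, below is a minimal sketch of the kind of pre-processing check an organization might run before feeding records to a model. The field names, bounds and reason labels are hypothetical, chosen only to illustrate flagging the three error types named above: incorrect, duplicative and out-of-boundary data.

```python
# Hypothetical pre-deployment data check: flags records that are incorrect,
# duplicative, or outside the boundaries observed in the training dataset.

TRAINING_AGE_RANGE = (18, 90)  # assumed bounds seen in the training set


def flag_records(records):
    """Return (clean, flagged) lists; flagged items carry a reason string."""
    seen_ids = set()
    clean, flagged = [], []
    for rec in records:
        age = rec.get("age")
        if not isinstance(age, (int, float)):
            flagged.append((rec, "incorrect: missing or non-numeric age"))
        elif rec.get("id") in seen_ids:
            flagged.append((rec, "duplicative: id already seen"))
        elif not (TRAINING_AGE_RANGE[0] <= age <= TRAINING_AGE_RANGE[1]):
            flagged.append((rec, "edge case: outside training data boundaries"))
        else:
            seen_ids.add(rec.get("id"))
            clean.append(rec)
    return clean, flagged


if __name__ == "__main__":
    sample = [
        {"id": 1, "age": 34},
        {"id": 1, "age": 34},     # duplicate record
        {"id": 2, "age": "N/A"},  # incorrect value
        {"id": 3, "age": 104},    # edge case beyond the training range
    ]
    _, flagged = flag_records(sample)
    for rec, reason in flagged:
        print(rec, "->", reason)
```

Checks like this do not eliminate edge-case risk, but they make one class of harm identifiable and auditable before a system is deployed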
Failure to identify and address these harms can have a catastrophic impact on an organization, whether reputational, cultural, economic, acceleration-related, legal or regulatory
AI also offers significant opportunities, such as delivering faster and more accurate results across a broader range of data, including highly accurate medical assessments