Common guideposts to promote interoperability in AI risk management
Working paper
OECD Artificial Intelligence Papers

Abstract

The OECD AI Principles call for AI actors to be accountable for the proper functioning of their AI systems in accordance with their role, context, and ability to act. Likewise, the OECD Guidelines for Multinational Enterprises aim to minimise the adverse impacts that may be associated with an enterprise’s operations, products and services. Developing ‘trustworthy’ and ‘responsible’ AI systems requires identifying and managing AI risks. As calls for accountability mechanisms and risk management frameworks continue to grow, interoperability between frameworks would enhance efficiency and reduce enforcement and compliance costs. This report analyses the commonalities of AI risk management frameworks. It demonstrates that, while some elements differ, all the risk management frameworks analysed follow a similar and at times functionally equivalent risk management process.