As AI adoption continues to grow, successful risk mitigation will require a solid evidence base. The OECD's AI Incidents Monitor (AIM) documents AI incidents and hazards to give policymakers, practitioners and other stakeholders better evidence on AI risks and the contexts in which they materialise. Over time, AIM will help identify patterns of risk and build a collective understanding of AI incidents and their multifaceted nature.
AI risks and incidents
Artificial intelligence promises tremendous benefits but also carries real risks. Some of these risks are already materialising as harms to people and societies: bias and discrimination, polarisation of opinion, privacy infringements, and security and safety issues. Trustworthy AI therefore requires governments worldwide to develop interoperable, risk-based approaches to AI governance, grounded in a rigorous understanding of AI incidents and hazards.
Key messages
As Artificial Intelligence continues to permeate societies and economies, AI incidents will inevitably increase, making it vital to monitor and address the associated risks. Because AI knows no borders, stakeholders need a rigorous, transnational understanding of AI incidents and a consistent, interoperable way to report them.
As jurisdictions worldwide prepare to implement mandatory and voluntary incident reporting schemes, the OECD is working on a common reporting framework to help align terminology across jurisdictions, optimise interoperability, and minimise AI incidents, risks and hazards.
The OECD's AI Principle on "accountability" states that those deploying AI must be responsible for their systems' proper functioning. The OECD offers a Catalogue of Tools and Metrics to help actors ensure AI is developed and used responsibly.
The High-level AI Risk Management Interoperability Framework outlines four common steps for effective risk management: defining the scope of the AI system, assessing risks, addressing them, and monitoring and communicating about them (sketched in code after these key messages).
OECD evidence shows that when AI is not used in a trustworthy way, there are significant risks to workers' rights and safety, including threats to data privacy, increased work intensity, bias and discrimination, and gaps in accountability. There are also risks of job automation and of widening inequalities in the workplace. To help assess and mitigate these risks, the OECD is mapping risks and policy actions and preparing a roadmap to address policy gaps.
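As a rough illustration of the framework's four steps, the sketch below models them as a repeating cycle in Python. The step names are paraphrased from the text above; the `RiskRecord` structure and the example system are illustrative assumptions, not part of any OECD specification.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Step(Enum):
    """The four common steps of the High-level AI Risk Management
    Interoperability Framework (names paraphrased from the OECD text)."""
    DEFINE_SCOPE = auto()
    ASSESS_RISKS = auto()
    ADDRESS_RISKS = auto()
    MONITOR_AND_COMMUNICATE = auto()


@dataclass
class RiskRecord:
    """Illustrative record of one identified risk (hypothetical fields)."""
    description: str
    severity: str          # e.g. "low", "medium", "high"
    mitigation: str = ""   # filled in during the ADDRESS_RISKS step


def risk_management_cycle(system_name: str, risks: list[RiskRecord]) -> None:
    """Walk a hypothetical AI system through the four steps once.
    In practice, the cycle repeats over the system's whole lifetime."""
    for step in Step:  # Enum iterates in definition order
        if step is Step.DEFINE_SCOPE:
            print(f"[{step.name}] Scoping risk management for: {system_name}")
        elif step is Step.ASSESS_RISKS:
            for r in risks:
                print(f"[{step.name}] {r.description} (severity: {r.severity})")
        elif step is Step.ADDRESS_RISKS:
            for r in risks:
                r.mitigation = r.mitigation or "mitigation to be defined"
                print(f"[{step.name}] {r.description} -> {r.mitigation}")
        else:  # MONITOR_AND_COMMUNICATE
            print(f"[{step.name}] Reporting {len(risks)} risk(s) to stakeholders")


risk_management_cycle(
    "hypothetical CV-screening model",
    [RiskRecord("biased ranking of applicants", "high",
                "bias audit and re-weighting")],
)
```

Modelling the steps as an ordered cycle rather than a one-off checklist reflects the framework's emphasis on continuous monitoring and communication.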
Context
As AI use grows, so do reported incidents
AI incidents reported in the media have increased steeply since November 2022. Data on past incidents provides the evidence needed to identify high-risk AI applications, and learning from past mistakes is essential to avoid repeating them. As incident reports grow in number, standardised reporting methods become ever more critical to risk management worldwide.
AI privacy and data collection in the workplace
Most workers who reported AI-related data collection expressed concerns. Among workers in the finance and manufacturing sectors respectively:
- 62% and 56% said they felt increased pressure to perform due to data collection;
- 62% and 51% expressed privacy concerns;
- 58% and 54% worried that too much of their data was being collected;
- 58% and 51% feared that data collection would lead to biased decisions against them.
OECD Framework for the Classification of AI Systems
A common method for classifying AI systems is a crucial building block for a common AI incident reporting framework.
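To make this concrete, the sketch below shows how the framework's five classification dimensions (People & Planet; Economic Context; Data & Input; AI Model; Task & Output) could anchor a shared, machine-readable incident record. The field names and the `to_report` helper are assumptions made for this sketch, not an official OECD reporting schema.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class ClassifiedIncident:
    """Illustrative incident record keyed to the five dimensions of the
    OECD Framework for the Classification of AI Systems. Field names are
    assumptions for this sketch, not an official reporting schema."""
    title: str
    people_and_planet: str   # who or what is affected (e.g. consumers, workers)
    economic_context: str    # sector and business function
    data_and_input: str      # data provenance and collection method
    ai_model: str            # model type and how it was built
    task_and_output: str     # what the system does and produces

    def to_report(self) -> str:
        """Serialise to JSON so that reports remain interoperable across
        jurisdictions using the same classification dimensions."""
        return json.dumps(asdict(self), indent=2)


incident = ClassifiedIncident(
    title="Hypothetical: screening tool rejects qualified applicants",
    people_and_planet="job applicants; risk of discrimination",
    economic_context="private sector; human resources",
    data_and_input="historical hiring data of uncertain provenance",
    ai_model="supervised classifier trained on past decisions",
    task_and_output="ranks applicants; outputs accept/reject recommendation",
)
print(incident.to_report())
```

A shared record structure of this kind is what allows incidents reported in one jurisdiction to be aggregated and compared with those reported in another.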
Related policy issues
- As artificial intelligence grows, so does the need for the large-scale computing resources behind it. Resources for training and deploying AI, often called "AI compute", require powerful hardware. But with growing demand for AI compute, questions are arising about national capacities to achieve AI strategies.
- The OECD AI Principles are the first intergovernmental standard on AI. They promote innovative, trustworthy AI that respects human rights and democratic values.
- As AI rapidly advances, it is crucial to understand how education will be affected. Important questions include: How can AI be compared to humans? How do AI systems perform tasks across capability domains such as language, reasoning, sensorimotor skills and social interaction? What are the implications for education and training?
- The digital divide signifies unequal access to digital technologies, particularly concerning internet connectivity and device availability, alongside disparities in infrastructure, skills and affordability. These gaps result in unequal opportunities for information access and digital participation.
- Generative AI (GenAI) is a category of AI that can create new content such as text, images, videos and music. It gained global attention in 2022 with text-to-image generators and Large Language Models (LLMs). While GenAI has the potential to revolutionise entire industries and society, it also poses critical challenges and considerations that policymakers must confront.