Shaping a human-centric approach to artificial intelligence
AI principles
The OECD AI Principles are the first intergovernmental standard on AI. They promote innovative, trustworthy AI that respects human rights and democratic values. Adopted in 2019 and updated in 2024, they are composed of five values-based principles and five recommendations that provide practical and flexible guidance for policymakers and AI actors.
About the OECD AI Principles
AI offers considerable benefits in areas like healthcare, productivity and scientific progress. It also brings risks, including disinformation, data insecurity and copyright infringement. What's more, AI knows no borders. In this context, the principles offer a foundation for international cooperation and interoperability, with guidance designed to stand the test of time in the fast-paced world of AI.
By adhering to these principles, policymakers can guide the development and deployment of AI to maximise its benefits and minimise its risks. This is crucial for harnessing the potential of AI technologies for economic growth, social welfare and environmental sustainability while protecting individuals and societal values.
As of May 2023, governments had reported over 1,000 policy initiatives from more than 70 jurisdictions to the OECD.AI national policy database, all guided by the OECD AI Principles.
Countries that adhere to the Principles
To date, the OECD AI Principles have been adopted by OECD member countries and a number of partners worldwide.
Values-based principles
The OECD AI Principles promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. First adopted in May 2019, they set standards for AI that are practical and flexible enough to stand the test of time.
1. Inclusive growth, sustainable development and well-being

Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, well-being, sustainable development and environmental sustainability.
2. Human rights and democratic values, including fairness and privacy

AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights. This also includes addressing misinformation and disinformation amplified by AI, while respecting freedom of expression and other rights and freedoms protected by applicable international law.
To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art.
3. Transparency and explainability

AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of the art:
- to foster a general understanding of AI systems, including their capabilities and limitations,
- to make stakeholders aware of their interactions with AI systems, including in the workplace,
- where feasible and useful, to provide plain and easy-to-understand information on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output, and,
- to provide information that enables those adversely affected by an AI system to challenge its output.
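As a concrete illustration only: the minimal Python sketch below shows one hypothetical way an AI actor might package such a disclosure as a machine-readable record. The class and field names are assumptions for illustration, not an OECD-defined schema.

```python
# Hypothetical disclosure record for the transparency principle above.
# All field names are illustrative assumptions, not an OECD schema.
from dataclasses import dataclass

@dataclass
class AISystemDisclosure:
    system_name: str           # plain-language identifier of the AI system
    purpose: str               # what the system is intended to do
    capabilities: list[str]    # what the system can do
    limitations: list[str]     # known failure modes and out-of-scope uses
    data_sources: list[str]    # provenance of the data/input behind outputs
    user_notice: str           # how people learn they are interacting with AI
    challenge_contact: str     # where affected persons can contest an output

# Example: a disclosure a deploying organisation might publish.
disclosure = AISystemDisclosure(
    system_name="loan-screening-model-v2",
    purpose="Rank loan applications for human review",
    capabilities=["risk scoring of structured application data"],
    limitations=["not validated for self-employed applicants"],
    data_sources=["internal loan-performance records, 2015-2023"],
    user_notice="Applicants are told an AI system assists the review",
    challenge_contact="appeals@example.org",
)
```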
4. Robustness, security and safety

AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety and/or security risks.
Mechanisms should be in place, as appropriate, to ensure that if AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely as needed.
Mechanisms should also, where technically feasible, be in place to bolster information integrity while ensuring respect for freedom of expression.
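By way of illustration, the hypothetical Python sketch below wraps a model with one such mechanism: a human operator can halt automated decisions at any time, and low-confidence outputs are escalated to manual review rather than acted on. The design and threshold are assumptions, not a prescribed safeguard.

```python
# Hypothetical oversight safeguard: a wrapper that lets a human operator
# override an AI system and escalates uncertain outputs. Illustrative only.
class OverridableSystem:
    def __init__(self, model, min_confidence: float = 0.8):
        self.model = model              # callable returning (decision, confidence)
        self.min_confidence = min_confidence
        self.halted = False             # set by a human operator or a monitor

    def halt(self) -> None:
        """Human override: stop automated decisions immediately."""
        self.halted = True

    def decide(self, case):
        if self.halted:
            return {"decision": None, "route": "manual review (system halted)"}
        decision, confidence = self.model(case)
        if confidence < self.min_confidence:
            # Escalate uncertain outputs to a human instead of acting on them.
            return {"decision": None, "route": "manual review (low confidence)"}
        return {"decision": decision, "route": "automated"}

# Example with a stub model returning (decision, confidence).
system = OverridableSystem(lambda case: ("approve", 0.95))
print(system.decide({"amount": 1000}))   # automated
system.halt()
print(system.decide({"amount": 1000}))   # manual review (system halted)
```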
5. Accountability

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.
To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outputs and responses to inquiry, appropriate to the context and consistent with the state of the art.
AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on an ongoing basis and adopt responsible business conduct to address risks related to AI systems, including, as appropriate, via co-operation between different AI actors, suppliers of AI knowledge and AI resources, AI system users, and other stakeholders. Risks include those related to harmful bias, human rights including safety, security, and privacy, as well as labour and intellectual property rights.
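As a minimal sketch of what such traceability could look like in practice, the hypothetical Python example below appends lifecycle events (datasets ingested, models deployed, decisions made) to a tamper-evident, append-only log. The record fields and file format are illustrative assumptions.

```python
# Hypothetical append-only trace log covering the AI system lifecycle,
# so outputs can be analysed and inquiries answered later. Illustrative only.
import hashlib
import json
from datetime import datetime, timezone

def trace(log_path: str, phase: str, event: str, details: dict) -> None:
    """Append one lifecycle event (design, data, training, deployment...)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "phase": phase,          # lifecycle phase the event belongs to
        "event": event,          # e.g. "dataset_ingested", "model_deployed"
        "details": details,
    }
    # A content hash makes later tampering with a record detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record that a training dataset entered the pipeline.
trace("audit.jsonl", "data", "dataset_ingested",
      {"name": "loan-performance-2015-2023", "rows": 120000})
```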
Recommendations for policy makers
1. Investing in AI research and development

Governments should consider long-term public investment, and encourage private investment, in research and development, including interdisciplinary efforts, to spur innovation in trustworthy AI that focuses on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.
Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards.
2. Fostering an inclusive AI-enabling ecosystem

Governments should foster the development of, and access to, an inclusive, dynamic, sustainable and interoperable digital ecosystem for trustworthy AI. Such an ecosystem includes, inter alia, data, AI technologies, computational and connectivity infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.
3. Shaping an enabling interoperable governance and policy environment for AI

Governments should promote an agile policy environment that supports transitioning from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested and scaled up, as appropriate. They should also adopt outcome-based approaches that provide flexibility in achieving governance objectives and co-operate within and across jurisdictions to promote interoperable governance and policy environments, as appropriate.
Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.
4. Building human capacity and preparing for labour market transformation

Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.
Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, including through social protection, and access to new opportunities in the labour market.
Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers, the quality of jobs and of public services, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.
5. International co-operation for trustworthy AI

Governments, including those of developing countries, should actively co-operate with stakeholders to advance these principles and to progress on responsible stewardship of trustworthy AI.
Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.
Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.
Governments should also encourage the development, and their own use, of internationally comparable indicators to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.
Related policy issues
- As artificial intelligence grows, so does the need for the large-scale compute resources behind it. Resources for training and deploying AI, often called “AI compute”, require powerful hardware. But with growing demand for AI compute, questions are arising about national capacities to deliver on AI strategies.
- Artificial intelligence promises tremendous benefits but also carries real risks. Some of these risks are already materialising into harms to people and societies: bias and discrimination, polarisation of opinions, privacy infringements, and security and safety issues. Trustworthy AI calls for governments worldwide to develop interoperable risk-based approaches to AI governance and a rigorous understanding of AI incidents and hazards.
- As AI rapidly advances, it is crucial to understand how education will be affected. Important questions include: How can AI be compared to humans? How do AI systems perform tasks across capability domains such as language, reasoning, sensorimotor skills or social interaction? What are the implications for education and training?
- The digital divide signifies unequal access to digital technologies, particularly concerning internet connectivity and device availability, alongside disparities in infrastructure, skills and affordability. These gaps result in unequal opportunities for information access and digital participation.
- Generative AI (GenAI) is a category of AI that can create new content such as text, images, videos and music. It gained global attention in 2022 with text-to-image generators and Large Language Models (LLMs). While GenAI has the potential to revolutionise entire industries and society, it also poses critical challenges and considerations that policymakers must confront.