The MNE Guidelines were first adopted in 1976 and most recently updated at the June 2023 OECD Ministerial Council Meeting. The chapter on Science, Technology and Innovation has seen the biggest changes to reflect the increasing need for coherent government-backed frameworks on responsible development and use of technologies.
The MNE Guidelines are aligned with and complementary to the UN Guiding Principles on Business and Human Rights (UNGPs), which set out the state duty to protect human rights and provide access to remedy, as well as the corporate responsibility to respect human rights. These frameworks expect businesses to contribute to sustainable development and to conduct due diligence to prevent and address adverse impacts. They continue to be the international source code for understanding the role of business in society.
The MNE Guidelines are addressed to actors across the full technology value chain, including the development, financing, sale, licensing, trade and use of technology; the gathering and use of data; and scientific research and innovation. Critically, the MNE Guidelines emphasise that actors in technology value chains should conduct due diligence not only on adverse impacts they themselves may cause, but also on adverse impacts that may be caused by other actors linked to their operations, products or services. These updates mark a significant shift in the source code for RBC as it applies to science, technology and innovation.
What does this update mean for companies? A ‘whole-of-value chain’ approach to business responsibility
In practice, though to varying degrees, many of the largest and most impactful tech companies are already implementing their own policies on human rights due diligence. Companies are conducting and publishing human rights impact assessments of new tech; setting out principles for responsible design and innovation; engaging with stakeholders; participating in grievance mechanisms; and adapting sales processes to include human rights risk reviews.
Company action, however, is often limited to a narrow set of issues based on specific frameworks, targeted towards a few actors in the value chain (e.g. data protection, exporting surveillance equipment, content moderation, cybersecurity, gene editing, etc.). The MNE Guidelines provide an overarching framework that sets out the roles and responsibilities of all the actors in the value chain to address a broader set of issues affecting society.
This means that responsible business conduct isn’t just limited to highly visible tech companies. It also extends to actors directly linked to those technologies, such as those supporting the development of new technology and the users of its products and services. For many technology value chains, this could include content creators, data curators, digital infrastructure providers, researchers, hardware manufacturers, and investors. It also includes actors outside the technology sector that rely on digital products and services, such as those in healthcare, retail, agriculture, extractives, and manufacturing.
The MNE Guidelines specify that all of these value chain actors are expected to build and use leverage (including collective leverage) to influence the entity causing the adverse impact, in order to prevent, mitigate or remedy the impact. This makes the MNE Guidelines a unique and especially powerful tool in interconnected digital economies. In practice, for example, this could involve:
· Collective action and a common due diligence approach for information and communications technology (ICT) companies operating in high-risk jurisdictions; and
· Investment firms supporting the adoption of RBC practices by AI companies in their portfolios.
The MNE Guidelines also encompass a broad set of impacts: issues that go beyond – and may not neatly fit into – traditional human rights impact assessments, such as impacts on democracy, social cohesion, climate change, the environment, consumer safety and labour.
Emerging standards and policy coherence
There has also been a global trend towards using voluntary or mandatory risk-based due diligence approaches to help govern business conduct in tech. Given the cross-border nature of many tech products and services, interoperability, consistency and coherence between emerging frameworks and standards are critical.
Governance approaches will naturally differ according to policy priorities and contexts. However, a proliferation of measures that are not aligned can lead to fragmentation, overlap and, in some cases, conflicting laws. This can mean unnecessary compliance costs for business, generate confusion in the market and, ultimately, undermine the desired policy outcomes.
The recent OECD Declaration on Promoting and Enabling RBC in the Global Economy, signed by 50 governments and the EU in February 2023, stresses the importance of coherence in RBC policies. On tech specifically, the G7 Action Plan for promoting global interoperability between tools for trustworthy AI, the Leaders’ Statement and the Tech Ministers’ Statement all emphasised the need for convergence around a common international standard on responsible AI.
The OECD is well placed to guide the development and use of tech in a way that benefits people and planet through a coordinated response, one that is based on multi-stakeholder cooperation and an existing government-backed framework.
How the OECD is already supporting RBC in tech: a look at 'RBC for Trustworthy AI'
AI in particular has captured significant attention from policy makers and regulators – and for good reason. With its incredible potential come real risks to human rights, labour, and democracy, including bias, discrimination, manipulation of consumers, the accelerated polarisation of opinions, privacy infringement, and widespread surveillance.
In the EU, the Digital Services Act places risk-based due diligence obligations on very large online platforms and search engines for harms stemming from their services and algorithmic systems. With the proposed Artificial Intelligence Act (AIA) and Corporate Sustainability Due Diligence Directive (CS3D) both currently in trilogues, AI companies and their business relationships could soon face increased mandatory due diligence requirements.
As part of a new project on RBC for Trustworthy AI, the OECD Working Party on Responsible Business Conduct (WPRBC) and Working Party on AI Governance (AIGO), together with a multistakeholder Network of Experts on AI, will be developing a series of papers on how RBC can be practically applied by companies in the AI value chain.
The objective of this work is not to create an additional standard, but rather to situate existing, government-backed frameworks within the broader ‘whole-of-value chain’ approach taken by the MNE Guidelines. This will apply to the broadest possible range of stakeholders and act as a focal point for alignment among other frameworks, a ‘framework of frameworks’.
Ultimately, by developing a common understanding of the impacts of AI and the expectations on all actors across the sector, the OECD can help meet governments’ demands for policy coherence and provide clarity to companies seeking to maximise the positive impact of new tech by first doing no harm.