How AI will ultimately impact workers and the workplace, and whether the benefits will outweigh the risks, will also depend on the policy action we take. The advance of AI in the workplace should not, in itself, be halted: there are many benefits to be reaped. Yet we should also avoid falling into the trap of “technological determinism”, in which technology shapes social and cultural change rather than the other way around. To paraphrase labour economist David Autor, instead of asking what AI can do, we must ask what we want it to do for us.
Urgent action is required to make sure AI is used responsibly and in a trustworthy way in the workplace.
On the one hand, there is a need to enable workers and employers to reap the benefits of AI while adapting to it, notably through training and social dialogue.
Countries have taken some action to prepare their workforce for AI-induced job changes, but initiatives remain limited to date. Some countries have invested in expanding formal education programmes (e.g. Ireland), or launched initiatives to raise the level of AI skills in the population through vocational training and lifelong learning (e.g. Germany, Finland and Spain). The OECD’s research also shows that outcomes are better where workers have been trained to interact with AI, and where the adoption of these technologies is discussed with them.
On the other hand, there is an urgent need for policy action to address the risks that AI can pose when used in the workplace – in terms of privacy, safety, fairness and labour rights – and to ensure accountability, transparency and explainability for employment-related decisions supported by AI.
Governments, international organisations and regulators must provide a framework for how to work with AI. This includes setting standards, enforcing appropriate regulations or guidelines, and promoting proper oversight of these new technologies. The OECD has played a pioneering role in this area by developing the OECD AI Principles for responsible stewardship of trustworthy AI. Adopted in May 2019 by OECD member countries, the Principles also formed the basis for the G20 AI Principles and have since been adhered to by Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore and Ukraine.
Many countries already have regulations relevant to enforcing some of the key principles of trustworthy AI use in the workplace. Existing legislation, including on data protection, contains provisions relevant to AI. A major development in recent years, however, has been the proposal of AI-specific regulatory frameworks that address high-risk AI systems or impacts, albeit with key differences in approach across countries.
Anti-discrimination legislation, occupational safety and health regulation, worker privacy regulation and freedom of association all need to be respected when AI systems are used in the workplace. For instance, all OECD member countries have laws in place that aim to protect data and privacy. Examples include requirements to reach prior agreement with workers’ representatives on the monitoring of workers through digital technologies (e.g. Germany, France and Italy), and regulations requiring employers to notify employees of electronic monitoring policies. In some countries, such as Italy, existing anti-discrimination legislation has been successfully applied in court cases related to AI use in the workplace. But regulations that were not designed specifically for AI will, in all likelihood, need to be adapted.
The use of AI to support decisions that affect workers’ opportunities and rights should also come with accessible and understandable information and clearly defined responsibilities. The ambition to achieve accountability, transparency and explainability is prompting AI-specific policy action with direct implications for uses in the workplace.
A notable example is the proposed EU AI Act, which takes a risk-based approach to ensure that AI systems are overseen by people and are safe, transparent, traceable and non-discriminatory. In the United States, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights in October 2022, laying out a roadmap for the responsible use of AI. In June 2022, Canada introduced in Parliament the Artificial Intelligence and Data Act (AIDA), which requires “plain language explanations” of how AI systems reach their outcomes. Many countries, organisations and businesses are also developing frameworks, guidelines, technical standards and codes of conduct for trustworthy AI.
When it comes to using AI to make decisions that affect workers’ opportunities and rights, policy makers are already considering several avenues: adapting workplace legislation to the use of AI; encouraging the use of robust auditing and certification tools; adopting a human-in-the-loop approach; and developing mechanisms to explain, in understandable terms, the logic behind AI-powered decisions.
A concern shared by many experts is that the policy response is not keeping pace with the very rapid developments in generative AI, and that it still lacks specificity and enforceability.
Indeed, there have been many calls to act on generative AI. The European Union announced plans to introduce a voluntary code of conduct on AI, to be adopted rapidly. In May 2023, the US-EU Trade and Technology Council decided in a joint statement to place special emphasis on generative AI in the work on the Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, and the UK Prime Minister announced a summit on AI safety to be held in late 2023. AI-related regulations also raise new challenges of international interoperability, which calls for international action to promote the alignment of key definitions and, where appropriate, of their technical implementation.
Many of these calls are addressed by the “Hiroshima AI Process” launched by G7 Leaders in May 2023 with the objective of aligning countries (including the EU) on an agreed approach to generative AI. The OECD has been called upon to support this process, which is now underway.
Such action will need to be quickly complemented by concrete, actionable and enforceable implementation plans to ensure AI is trustworthy. International co-operation on these issues will be critical to securing a common approach: fragmented efforts would unnecessarily harm innovation and could create a regulatory gap leading to a race to the bottom.
Stefano Scarpetta,
Director for Employment, Labour and Social Affairs,
OECD