Deployment of AI applications across the full spectrum of finance and business sectors has progressed rapidly in recent years, to the point that these applications have become, or are on their way to becoming, mainstream. AI, i.e. machine-based systems able to make predictions, recommendations or decisions for a given set of objectives based on machine or human input, is being applied in digital platforms and in sectors ranging from health care to agriculture. It is also transforming financial services. In 2020 alone, financial markets saw global spending on AI exceed USD 50 billion and total AI venture capital investment exceed USD 4 billion worldwide, accompanied by a boom in the number of AI research publications and in the supply of AI job skills.
AI applications offer remarkable opportunities for businesses, investors, consumers and regulators. AI can facilitate transactions, enhance market efficiency, reinforce financial stability, promote greater financial inclusion and improve customer experience. Banks, traders, insurance firms and asset managers increasingly use AI to generate efficiencies by reducing friction costs and improving productivity. Increased automation and advances in “deep learning” can help financial service providers assess risk more quickly and accurately. Better forecasting of demand fluctuations through data analytics can help avoid shortages and overproduction. Consumers also enjoy greater access to financial services and support thanks to AI-powered tools such as online customer service “chatbots”, credit scoring, “robo-advice” and claims management.
As AI applications become increasingly integrated into business and finance, trustworthy AI becomes increasingly important for ensuring trustworthy financial markets. The growing complexity of AI-powered applications in the financial sector, and of the functions they support, poses risks to fairness, transparency and the stability of financial markets that current regulatory frameworks may not adequately address. Appropriate and transparent design and use of AI-powered applications are essential to managing these risks, including risks to consumer protection and trust, as well as the potential for AI to introduce systemic risk into the sector.
Explainability, transparency, accountability and robust data management practices are key to trustworthy AI in the financial sector. Explaining how AI algorithms reach decisions and other outcomes is an essential ingredient of fostering trust in, and accountability for, AI applications. Yet the outcomes of AI algorithms are often unexplainable, which presents a conundrum: the complexity of AI models that holds the key to great advances in performance is also a crucial obstacle to building trust and accountability. Transparency is another key determinant of trustworthy AI. Market participants should be able to know when AI is being used, and how it is developed and operated, in order to promote accountability and help minimise the risk of unintended bias and discrimination in AI outcomes. Data quality and governance are also critical, as the inappropriate use of data in AI-powered applications, or the use of inadequate data, can undermine trust in AI outcomes. Failing to foster these qualities in AI systems could introduce biases that generate discriminatory and unfair results, encourage market convergence and herding behaviour, or concentrate markets among dominant players, all of which can undermine market integrity and stability.
This edition of the OECD Business and Finance Outlook focuses on these four determinants of trustworthy AI in the financial sector. It examines them in the key areas of finance, competition, responsible business conduct and foreign direct investment, as well as their bearing on regulators’ initiatives to deploy AI-powered tools in support of supervisory, investigative and enforcement functions.
Explainability, transparency, accountability and robust data management practices are key components of the OECD AI Principles adopted in May 2019. Chapter 1 introduces these Principles and explains how they can frame policy discussions on AI in the financial sector, alongside two complementary frameworks: the AI system lifecycle and the OECD framework for the classification of AI systems.
Explainability poses a defining challenge for policy makers in the financial sector seeking to ensure that service providers use AI in ways consistent with promoting financial stability, financial consumer protection, market integrity and competition. Chapter 2 focuses on these issues. Difficulty in understanding how and why AI models produce their outputs can affect financial consumers in various ways, including by making it harder for them to adjust their strategies in times of market stress. The chapter sets out recommendations for financial policy makers to support responsible AI innovation in the sector, while ensuring that investors and financial consumers are duly protected and that the markets for such products and services remain fair, orderly and transparent.
Robust data management practices can help mitigate the potential negative impacts of AI-powered applications on certain human rights. Chapter 3 highlights the importance of robust and secure AI systems for ensuring respect for human rights across a broad range of applications in the financial sector, focusing on the rights to privacy, non-discrimination, a fair trial and freedom of expression. It sets out practical guidance to help mitigate these risks and illustrates how the OECD Due Diligence Guidance for Responsible Business Conduct can assist financial service providers in this regard.
Greater accountability and less opacity in the design and operation of AI algorithms can help limit anticompetitive behaviour. Chapter 4 explores the implications of AI for competition policy. It examines the potential anticompetitive risks that AI applications could create or heighten, including collusive practices as well as strategies by firms to abuse their market dominance to exclude competitors or harm consumers. Anticompetitive mergers may also raise concerns, for instance when they combine AI capacity and datasets. The chapter further discusses the detection, evidentiary and enforcement challenges related to AI that policy makers and competition authorities are starting to address.
AI-powered applications developed for the public sector also need to be explainable, transparent and robust. Chapter 5 analyses how regulators and other authorities are turning to AI applications to help them supervise markets, detect and address rule breaches, and reduce the burden on regulated entities. Supervisory technology (SupTech) tools and solutions face many of the same challenges as private sector AI innovations, not least the need for quality data inputs, for algorithm designs and outcomes that public officials can understand, for investment in skills, and for public-facing transparency regarding use and outcomes. Each of these factors must inform governments’ SupTech strategies.
Governments also seek to balance transparency and openness against security imperatives when designing policies to guard against the possible impacts of foreign acquisitions of certain AI applications. Chapter 6 analyses recent developments in policies to manage risks to essential security interests that may arise from the transfer of AI technologies to potentially malicious actors or hostile governments through foreign direct investment. The chapter also explores related security concerns arising from the financing of research abroad, a parallel legal avenue for acquiring know-how that is unavailable domestically without acquiring established companies.