Artificial intelligence (AI) is transforming many aspects of our lives, including the way we provide and use financial services. AI-powered applications are now a familiar feature of the fast-evolving landscape of technological innovations in financial services (FinTech). Yet we have reached a critical juncture for the deployment of AI-powered FinTech. Policy makers and market participants must redouble their engagement on the rules needed to ensure trustworthy AI for trustworthy financial markets.
New technologies often pose risks and challenges alongside their potential benefits, and AI applications in the finance sector are no exception. For all of their remarkable promise, AI applications can amplify existing risks in financial markets or give rise to new ones. These concerns increasingly preoccupy policy makers as more financial firms turn to AI-powered FinTech and expand the scope of its uses. Growing complexity in AI models, and difficulty – or in some cases, impossibility – in explaining how these models produce certain outcomes, present an important challenge for trust and accountability in AI applications. Complexity, and the need to train and manage AI models continually, can create skills dependencies for financial firms. Data management is another key challenge, as the quality of AI outcomes depends in large part on the quality of data inputs, which in turn need to be managed in line with privacy, confidentiality, cyber security, consumer protection and fairness considerations. Dependencies on third-party providers and outsourcing of AI models or datasets raise further issues related to governance and accountability.
There is growing awareness that existing financial regulations, based in many countries on a technology-neutral approach, may fall short of addressing systemic risks presented by wide-scale adoption of AI-based FinTech by financial firms. Some of these challenges are not unique to AI technologies. Others are intimately linked to singular characteristics of AI, especially the growing complexity, dynamic adaptability and autonomy of AI-based models and techniques. While many countries have adopted dedicated AI strategies at the national level, few have introduced concrete rules targeting the use of AI-powered algorithms and models, let alone rules that apply specifically to AI applications in the finance sector.
Today, many countries find themselves at an important crossroads in these policy fields. Financial regulators are considering whether and how to adapt existing rules or create new rules to keep pace with technological advances in AI applications. At this critical juncture, it is incumbent upon us all to recall certain pillars of good policy making. Stakeholder engagement in an inclusive policy process is key. Public-private dialogues can help to identify mutually acceptable solutions that nurture innovation and experimentation in AI-based FinTech while also addressing shared risks and challenges to long-term market stability, competition, consumer protection and trust. Governments must explore ways to incentivise firms to develop trustworthy AI, responsibly and transparently, thereby aligning broader public interests with business interests. A candid assessment of the suitability of existing rules and skill bases in the public sector will also be indispensable.
At the international level, the OECD AI Principles, adopted in May 2019, became the first international standard agreed by governments for the responsible stewardship of trustworthy AI. The OECD, together with international partners working to support financial markets and financial sustainability, must reinforce efforts to facilitate multilateral engagement on implementing the OECD AI Principles in the context of financial markets and other business sectors. The Principles recall that:
AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being;
AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society;
there should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them;
AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed; and
organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Further impetus is needed, however, to apply these values-based principles to the specific challenges facing regulators, participants and consumers of AI-powered FinTech. The OECD stands ready to serve as a forum and knowledge hub for data and analysis, exchange of experiences, best-practice sharing, and advice on policies and standard-setting on these issues.
With these reflections in mind, this year’s OECD Business and Finance Outlook on Artificial Intelligence offers the OECD’s latest contribution to a global dialogue on the uses, risks and rules needed for new technologies like AI in financial markets. It puts forward considerations for policy makers and market participants charting a course towards ensuring trustworthy AI for trustworthy financial markets. It is part of the OECD’s ongoing commitment to promote international cooperation and collaboration to ensure that these technologies develop in a way that supports fair, orderly and transparent financial markets and, by extension, better lives for all.
This year’s Outlook forms part of broader OECD work to help policy makers better understand the digital transformation that is taking place and develop appropriate policies to help shape a positive digital future. This includes updating and revising many of our standards for business and markets to ensure that they remain fit for purpose and adequately address this digital transformation. These efforts ensure that OECD instruments reflect the needs and priorities of today and tomorrow, and support policy makers as they grapple with the myriad implications of digital transformation.
Dr. Mathilde Mesnard
Acting Director, OECD Directorate for Financial and Enterprise Affairs