3. Artificial intelligence (AI) and finance in ASEAN economies

Abstract

Advances in artificial intelligence (AI) models, including the advent of models offering content-generative capabilities and user-friendly interfaces, have increased public interest in AI innovation in Asia and globally. Although the deployment of fully automated generative AI tools in finance remains slow-paced, wider deployment of AI in finance could amplify risks already present in financial markets and give rise to new challenges. This chapter provides a sentiment analysis of interest in AI innovations in finance in major Asian economies using machine learning (ML) techniques; presents recent developments in AI in finance and the potential use cases and associated benefits of such tools for ASEAN member states; analyses potential risks from a wider use of such tools in ASEAN financial markets; and examines policy developments and discusses associated policy implications.
3.1. Introduction
Advances in artificial intelligence (AI), including the advent of models with content-generating abilities, have sparked public interest and increased direct usage of AI tools by non-technical users. For example, generative AI models can produce ‘original’ content that closely resembles human-generated output. In addition to their advanced computational capabilities, such tools offer a user-friendly, accessible conversational interface, which has been one of the main drivers of their rapid public adoption, particularly given that some are available free of charge. These developments mark a breakthrough in the ability of non-technical users to engage with complex technologies in a way that aligns with human thinking.
This chapter discusses the potential benefits and risks of the use of AI in finance and presents trends in AI in finance in ASEAN economies, in terms of sentiment and deployment trends, use cases, and implications for financial market participants and policy makers. It includes original analysis, based on a machine learning (ML) model using natural language processing techniques, that provides evidence of substantial and increasing interest in AI in finance in major Asian economies, such as Japan and Korea. Finally, it examines the national AI strategies developed in seven ASEAN member states and provides policy considerations and recommendations for ASEAN economies.
3.2. AI in finance: Asian trends
3.2.1. Using AI models to examine interest in AI in finance in major Asian economies
Asian economies have emerged as key hubs for the development of AI, particularly given the role of some of them in semiconductor markets (e.g. China, Chinese Taipei), and the region has been at the forefront of AI adoption. This central role in AI activity contributes to economic growth and the digital transformation of the region's economies.
OECD analysis based on an ML model that uses Natural Language Processing (NLP) techniques1 provides evidence of substantial and increasing interest in AI in finance in major Asian economies such as Japan and Korea (Figure 3.1). AI-related topics were covered in almost 2.5% of the articles examined over the period January 2021 – October 2023 in the respective samples of each country's financial press.2 A graphic representation of word frequencies in these samples, in the form of WordClouds, demonstrates the prominence of ‘Generative AI’ or ‘GenAI’, ‘investment’, and ‘technology’ in the discussion and reporting around AI in finance (Figure 3.2). In the case of Japan, ‘risk’ also features prominently in the financial press, which could be related to the G7 policy discussions, the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (G7, 2023[1]; 2023[2]; 2023[3]). The Korean financial press concentrates on the investment dimension of AI (Figure 3.2), which can also be explained by the increasing investment in GenAI applications (Table 3.1) and other FinTech companies in Korea (Table 3.2).
3.2.2. Evolution of sentiment on AI in Finance in Japan and Korea
Sentiment analysis performed on the basis of the abovementioned sample of financial press in Japan and Korea provides some evidence of the sentiment towards AI in finance in the countries examined, and indicates the temporal evolution of such sentiment and its direction (Figure 3.3 for Japan and Figure 3.4 for Korea).
The articles focusing on AI in the sample examined were labelled with three sentiments (positive, neutral and negative), and the probabilities of each sentiment were generated with the pretrained NLP model BERT. Sentiment and polarity indices were then computed at daily frequency by aggregating these probabilities (see the Annex for details). Sentiment in Japan appears mostly neutral over the period January 2021 – October 2023. Over time, the sample exhibits some peaks of negative sentiment, which could be attributed to intensifying discussions around the challenges and risks of AI and generative AI, inter alia during the G7 meetings in Japan throughout 2023. The same peaks are reflected in the negative troughs of the polarity index for this sample, although the sentiment observed is highly volatile. Such volatility could be explained by periods of increased reporting on AI regulation or policy discussions more broadly in the financial press, as well as by discussions of investment opportunities in AI in Japan3.
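For illustration, a minimal sketch of such a pipeline is shown below; the specific pretrained model, the mapping of its output classes onto the three sentiments, and the data layout are illustrative assumptions rather than the exact specification used in the analysis:

```python
# Minimal sketch: score news articles with a pretrained BERT sentiment model,
# then aggregate class probabilities into daily sentiment and polarity indices.
import pandas as pd
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "nlptown/bert-base-multilingual-uncased-sentiment"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def sentiment_probs(text: str) -> torch.Tensor:
    """Return the model's class probabilities for one article."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1).squeeze()

# Hypothetical data layout: one row per article, with date and text columns.
articles = pd.DataFrame({
    "date": ["2023-05-19", "2023-05-19"],
    "text": ["Banks report strong gains from AI adoption.",
             "Regulators warn of generative AI risks in trading."],
})

probs = articles["text"].apply(sentiment_probs)
# This model outputs 5 star-rating classes; map them onto neg/neu/pos masses.
articles["p_neg"] = probs.apply(lambda p: float(p[:2].sum()))
articles["p_neu"] = probs.apply(lambda p: float(p[2]))
articles["p_pos"] = probs.apply(lambda p: float(p[3:].sum()))

# Daily indices: average probabilities per day; polarity = positive - negative.
daily = articles.groupby("date")[["p_neg", "p_neu", "p_pos"]].mean()
daily["polarity"] = daily["p_pos"] - daily["p_neg"]  # >0 indicates net positive tone
print(daily)
```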
On the other hand, the analysis of the Korean financial press shows a similar prevalence of neutral sentiment in absolute terms. However, the discussion appears to focus more on the opportunities of AI than on its challenges, and the overall sentiment expressed is consistently positive over the period examined, as evidenced by the positive values of the polarity index throughout. The low volatility of the sentiment indices also points to a consistently positive tone in AI-related articles in the Korean financial press, which contain more positive language than their Japanese counterparts.
Overall, the results of the sentiment analysis for Japan and Korea may indicate a different tone in the discussion around AI in finance, which may be driven by the multilateral policy discussions that took place in Japan over 2023 during its G7 Presidency. A more negative sentiment may be related to the risk implications of the use of AI and the policy discussions on mitigating such risks. Korean news articles, on the other hand, tend to focus on the neutral to positive implications of AI, emphasising investment opportunities and AI applications in finance and beyond.
3.2.3. Investment into AI and financial sector acquisitions of AI companies in Asia
Analysis of mergers and acquisitions (M&A) of AI-related companies by financial service providers shows activity in Asia, including in ASEAN member states, over the year 2023 (Figure 3.5). Given the enormous amounts of compute power and data required to develop and train AI models, banks and financial institutions tend to acquire companies developing AI-based models, particularly those with a first-mover advantage or with the resources available to undertake the design, training and maintenance of models. Although the number of M&A deals in both Asian and ASEAN markets fluctuated during the year, activity picked up right after the release of ChatGPT in November 2022 and following the ChatGPT model updates announced in February and July 2023. Activity in ASEAN is concentrated in three member states: Malaysia, Singapore, and Thailand.
Looking at the latest trends, the volume of generative AI M&A deals involving financial entities in Japan, Korea, and China gradually increased between December 2022 and March 2023. In ASEAN, a peak is observed in February 2023 when a Thai firm acquired a holding company of semiconductor manufacturing services in Singapore, representing the largest transaction in this sector in the region (The Business Times, 2023[4]). Activity in Asia is concentrated in three countries, Japan, Korea, and China, which together account for almost 87% of total deal volume. Singapore has also recorded important activity, which amounted to 11.6% of the transaction volume.
Large Asian countries also recorded semiconductor-related deals, chips being a critical component of AI model development (Section 3.3.1). Since chip production is led mainly by three Asian countries, Japan, Korea, and China (World Population Review, 2023[5]), a significant number of semiconductor M&A transactions were recorded in these countries over the past year. In particular, 97% of semiconductor deals in Asia involved Japanese, Korean or Chinese entities (Figure 3.6).
3.3. Recent developments in AI: the advent of Generative Artificial Intelligence
Generative Artificial Intelligence (GenAI) is a subset of AI comprising models that generate new content in response to user-based inputs or prompts by using neural networks and deep learning (OECD, 2023[6]) (Figure 3.7). Examples of output include text (produced by LLMs like ChatGPT), visual outputs (Synthesia), audio (Speechify), and code (GitHub Copilot). These outputs are informed by models built on neural networks such as Generative Adversarial Networks (GANs),4 which process and transform input data based on pre-processed data collected from massive, unstructured datasets.5
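To illustrate the adversarial mechanism behind GANs, a minimal sketch follows; the toy data, network sizes and training settings are illustrative only and bear no relation to production-scale generative models:

```python
# Minimal GAN sketch: a generator learns to produce samples that a
# discriminator cannot distinguish from 'real' data (here, toy 2-D points).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # toy 'real' data
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1 and generated samples 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)))  # generated samples resembling the 'real' distribution
```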
GenAI has garnered widespread popularity as a result of its wide array of potential use cases and its ease of use, particularly when it comes to LLMs, which have attracted particular attention as a subset of wider AI advances and tools. These include models such as ChatGPT (OpenAI), Bard (Google), Bing Chat (Microsoft), Claude (Anthropic), and Ernie Bot (Baidu). The conversational character of such models, which brings them closer to human cognition than any previous AI model, coupled with their computational power, has to a large extent driven their popularity with the general public since the release of ChatGPT in November 2022.
3.3.1. Drivers of fast AI adoption in non-finance applications
AI has developed quickly over the last decade; indicatively, more advances in deep learning were made in the last ten years alone than in the preceding forty (Kotu and Deshpande, 2019[8]). Such advances are due to three key drivers: significant progress in computational power (GPUs and TPUs), the rapid growth of available online data, and improved cost efficiency of the underlying data processing capacity (Ahmed et al., 2017[9]; OECD, 2021[10]). The exponential growth of datasets is a result of both the increasing reliance on the internet and online data and progress in synthetic data generation, which makes it simpler to produce the volume of data needed to train AI models. Another driver is increased private funding for GenAI, with USD 2.6 billion raised across 110 deals in 2022 (OECD, 2023[7]).
Private and public investment flows into projects involving the development of AI tools have been growing over the past years in Asia, including across ASEAN member states. In 2023, government funding for generative AI projects in Asia-Pacific supported almost two-thirds of the regional organisations looking into the potential use cases of generative AI (IDC, 2023[11]). On the private investment side, the total value of private equity funding for GenAI projects in Asia stood at USD 428.2 million as of 30 November 2023, a significant increase relative to previous years (Figure 3.8). A similar increase is observed in the corresponding number of transactions underlying this funding. In terms of the number of venture capital (VC) investments, AI accounted for a very small percentage (3%) of total VC investments in Asia in 2012, reaching 23% of all VC investments by H1 2023 (Monetary Authority of Singapore, 2023[12]). Important investments are also flowing into the semiconductor industry of the Asian region, with Japan, Korea, China and Chinese Taipei accounting for around 41% of global market share in 2021 and 36% of global R&D expenditure (as a percent of sales) (SIA, 2023[13]). Those investments, related to semiconductor sub-sectors such as chips, are to a large extent driven by demand and urgency in sourcing the logic chips used to build large AI models (Section 3.2.3).
AI and LLMs have gained increased popularity and demand from the public given they are designed to be user-friendly, accessible, and intuitive, while also having significant technical capabilities. Unlike other types of AI, such as deep neural networks or other ML models, GenAI models produce outputs that are easy to grasp by the average user without any specific technical knowledge and that resonate with human cognition, thereby driving fast adoption (Figure 3.9). The cost-free availability of online LLMs such as ChatGPT and their conversational abilities contribute to such demand.
3.3.2. Slow-paced deployment of advanced AI models in finance
Despite the popularity of AI, the implementation of advanced AI solutions, such as generative AI models involving full end-to-end automation, remains in the testing and development phase in financial markets (OECD, 2023[7]). So far, such tools are mostly deployed in process automation designed to enhance productivity in both the back office (operations) and the middle office (compliance and risk management) of financial service providers. These tasks include content generation, summarisation of documents used by financial advisors, and human resource processes. As AI advances, its use in front-office applications, together with increased back-office use, is expected to accelerate.
Part of the reason why advanced AI tools such as generative AI models have seen slow uptake in finance is that financial market activity is highly regulated. Risk management and model governance rules and regulations are already in place, and the technology-neutral approach of regulation renders them applicable irrespective of the type of technology used (OECD, 2023[7]). As such, more advanced AI techniques may not be fully compatible with regulatory frameworks that seek to ensure market integrity, consumer protection and financial stability, and that impose risk management, model governance, transparency and other obligations.
Given the significant costs of developing and training large AI models such as LLMs, in most cases the use of such models by financial market participants will involve outsourcing a model that is then tailored to the specific needs of the user firm. Such a model is then trained with private proprietary data in ‘offline’ environments, for example within the private cloud of the firm.
Furthermore, due to the high risk of security and data breaches, the use of public AI tools is most likely incompatible with the data protection frameworks in place. The use of open-source or off-the-shelf reusable models (e.g. foundation models6) can pose a significant risk of breaches of financial market participants’ sensitive and confidential client data. Additionally, some AI models, such as LLMs, are known for the lack of transparency behind their decision-making processes. This can be problematic for financial markets, where transparency is crucial for regulatory compliance and trust (Section 3.5.4). As such, financial market participants that use AI typically deploy restricted and bespoke LLMs that operate within the firewalls or the private cloud of the firm in order to ensure data sovereignty and security.7
Also, given the legal responsibility and fiduciary duty of financial service providers to act in the best interests of their clients, they must work to protect clients from the risks of misleading outputs, misinformation, and other risks posed by advanced AI tools (e.g. deceptive model outcomes, deepfakes; see Section 3.5.4). The risks related to the use of advanced AI models discussed in this chapter may be an additional impediment to the wider use of such tools by the financial sector at this stage. Incompatibilities with applicable rules and requirements, such as those posed by the lack of explainability of model outputs, may further impede their widespread use in finance.
Smaller financial service providers will likely face greater capacity-related challenges in implementing AI tools, including advanced AI models such as LLMs. While large financial institutions may face challenges related to their existing governance structures and legacy infrastructure, smaller players may lack the financial resources and the capacity to manage datasets needed to implement large AI models. For instance, successful deployment of AI depends on both the availability and the quality of data, and smaller financial institutions may not have sufficient data management structures in place to exploit the vast amounts of unstructured data they own for AI purposes. While such training-data risks are salient for supervised ML models in finance, AI models such as LLMs that are fully autonomous and self-supervising do not require labelled training data, as they can identify complex relationships and learn from unstructured data. Furthermore, effectively using AI tools to keep up with modern work trends will require AI skills across the board. Accordingly, AI skills should be present at all levels and functions that use AI for service provision, which may require additional organisational manoeuvring.
One example of the phased introduction of AI in finance is its use in trading: instead of completely automating the entire trading process, the use of AI is limited to specific tasks, mostly analysing the large, noisy and complex datasets at hand to identify insights for trading decisions. It is possible that AI-based algorithms may eventually be fully automated and equipped to adjust their own decisions without requiring human intervention. The use of AI in trading can nevertheless exacerbate the risk of prohibited or illegal trading strategies, such as spoofing and front-running (OECD, 2021[10]).
Financial market participants are currently experimenting with customized, offline or private versions of LLMs and other advanced AI tools (OECD, 2023[7]). Presently, these models use public data to primarily act as sources of information and as tools for internal processes and operations. As AI continues to advance, it can be anticipated that financial market participants may implement new use cases of these models emerging from experimentation or third-party provision. The wider adoption of AI mechanisms may expose financial service providers, users and the markets to important risks, warranting policy discussion and possible action.
3.3.3. Direct vs. indirect scope of use of AI in finance
Different types of AI models interact with financial service providers and/or the end customer in different ways, and each level of interaction comes with a different level of associated risks. These different levels may also underpin the slow-paced and phased deployment of AI in finance, which is currently used primarily to assist operations as opposed to full automation and direct interaction of the model with the financial consumer.
AI models, particularly those with generative capabilities, can be employed to assist customers without directly interacting with them. For instance, they can be used to generate portfolio allocation recommendations customised to a customer's financial profile, with the output used as input to inform the financial service provider in the delivery of its recommendation. But they can also be used more directly, to provide personalised recommendations to customers and/or to execute suggested recommendations without any human involvement. The latter case carries increased risks, and anecdotal evidence from the financial services industry indicates limited full end-to-end deployment of AI tools at their current stage of development.
Risks related to the use of AI in finance increase given that users (whether financial service providers or end customers) may not be fully aware of the limitations of the AI models. These risks are even more pronounced if and when the model interacts directly with customers and executes its own recommendations in a fully automated manner without any ‘human in the loop’; such direct use therefore poses significant risks to both customers and the service provider. It can be anticipated, however, that the use of AI in finance will in the future evolve to include such direct interaction of financial consumers with the model, and to that end the trust and safety of such applications will be of paramount importance.
3.4. Use cases of AI in finance and associated benefits for ASEAN countries
AI tools are used across a vast array of use cases in financial markets, spanning multiple parts of the value chain and multiple verticals, such as asset management (e.g. stock picking, risk management and operations); algorithmic and high-frequency trading (e.g. liquidity management and execution with minimal impact); retail and corporate banking (e.g. onboarding, creditworthiness analysis, customer support); and payment institutions (e.g. AML/CFT, fraud detection) (OECD, 2021[10]). The performance of such services is expected to improve through the use of AI tools, particularly in areas such as sales and marketing, customer support and operations, including data/information management, as well as translation, coding and software development.
Operations and back-office functions are among the most widely reported use cases of AI in finance today (Figure 3.10), with the potential to increase both the efficiency and the accuracy of operational workflows and to enhance performance and overall productivity. This can be an important benefit for financial market participants in ASEAN countries, as it allows for cost reduction: AI tools can replace manually intensive and repetitive P&L and other reconciliations with less expensive and more efficient automated ones. To the extent that such cost savings are passed on to the end customer, they can alleviate any potential cost burden associated with formal financial services.
AI, and generative forms of AI in particular, can also be used to make the analysis and reporting of firm data more human-like for both internal and external purposes, facilitating regulatory reporting and compliance by small financial services firms in particular. Such purposes include customer service analytics, human resource tasks (e.g. generation of summaries of management reviews), translation or summarisation of contracts, and other reporting. AI models with generative capabilities can also enhance individualised communication for customer-facing purposes, including tasks related to product creation, marketing and sales, and improved customer support.
AI-based anomaly detection tools can also improve both AML/CFT processes and fraud detection across various types of financial market participants, especially in the area of payments, with an important potential contribution to improving trust and confidence in the formal economy for ASEAN consumers. This, in turn, can increase customer trust in and satisfaction with the formal financial system and their willingness to participate in the formal economy. These models can automatically identify outliers that deviate from expected data points and behaviour within given datasets (Kotu and Deshpande, 2019[15]), thereby potentially flagging fraudulent activity. They can also be beneficial for automating client onboarding, making KYC checks for banking clients more efficient, and improving compliance functions for financial market participants. The performance of such tasks can be further augmented by generative forms of AI, which can use company data to generate reporting and other outputs needed to facilitate compliance.
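For illustration, a minimal sketch of such outlier detection on payments data follows, using an isolation forest; the features, contamination rate and injected outliers are hypothetical:

```python
# Minimal sketch: flag anomalous payment transactions with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per transaction: [amount, hour of day, count in past 24h]
normal = rng.normal(loc=[50, 13, 3], scale=[20, 4, 1], size=(1000, 3))
suspicious = np.array([[5000, 3, 40], [9000, 2, 55]])  # injected outliers
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)           # -1 = anomaly, 1 = normal
scores = model.score_samples(X)     # lower = more anomalous

# Surface the flagged transactions for human review rather than auto-blocking.
for i in np.where(labels == -1)[0]:
    print(f"transaction {i}: features={X[i].round(1)}, score={scores[i]:.3f}")
```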
AI is also used to enhance risk management for asset managers and institutional investors. AI-based risk models can quickly assess portfolio performance under various market and economic scenarios by considering a range of consistently monitored risk factors. Investment strategies, such as quantitative strategies or the fundamental analysis underpinning systematic trading, have always relied heavily on structured data. AI-based models, however, can use raw, unstructured or semi-structured data to give investors an informational advantage. This data-driven approach enhances sentiment analysis and provides additional insights through pattern recognition (OECD, 2021[10]).
ML models in particular inform decision-making for portfolio allocation and/or stock selection by using pattern recognition and NLP8 to make predictions (Table 3.1). Such models have recently gained significant attention due to the ability of models like neural networks to capture non-linear relationships between stock characteristics and future returns by learning from data. This has opened up the potential for including ML-based stock-selection strategies in portfolio construction. Academic studies examine whether such ML-based strategies can generate “alpha”, a measure of investment performance, with mixed results (Freyberger et al., 2020[16]; Moritz and Zimmermann, 2016[17]). Interestingly, ML-based strategies with longer time horizons tend to focus on slower signals and rely more on traditional asset pricing factors, which can lead to poorer performance compared to short-term strategies (Blitz et al., 2023[18]).
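For illustration, a minimal sketch of an ML-based stock-selection step follows, in which a small neural network learns a non-linear mapping from stock characteristics to returns on synthetic data; the characteristics, network size and data are illustrative only:

```python
# Minimal sketch: learn a non-linear mapping from stock characteristics
# (e.g. size, value, momentum) to next-period returns, then rank stocks.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 3))  # columns: size, value, momentum (synthetic)
# Synthetic returns with a non-linear momentum effect plus noise.
y = 0.02 * X[:, 1] + 0.03 * np.tanh(X[:, 2]) + rng.normal(scale=0.05, size=n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
model.fit(X[:1500], y[:1500])       # train on earlier cross-sections

pred = model.predict(X[1500:])      # out-of-sample expected returns
top = np.argsort(pred)[-10:]        # highest-ranked candidates for a long book
print("top-ranked picks (indices):", top)
```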
AI models in lending could reduce the cost of credit underwriting and facilitate the extension of credit to ‘thin file’ clients, potentially promoting financial inclusion (OECD, 2021[10]). The use of AI can create efficiencies in information management and data processing for the assessment of creditworthiness of prospective borrowers, enhance the underwriting decision-making process and improve the lending portfolio management. It can also allow for the provision of credit ratings to ‘unscored’ clients with limited credit history, supporting the financing of the real economy (e.g. SMEs) and potentially promoting financial inclusion of underbanked populations. As with any AI application in finance, the potential benefits come also with important risks, such as possible discrimination and bias; and with challenges, such as the difficulty in interpreting the model’s output (explainability) (Section 3.5.1).
The most significant opportunities for GenAI are expected to lie in customer-facing financial services and in the delivery of new, highly customised products. ‘Traditional’ AI classes were used to power chatbots and automated call centres for customer support (Weizenbaum, 1966[19]). In comparison, GenAI introduces a conversational element that closely resembles human interaction. GenAI is also expected to support the production and delivery of new products, from investment advice to robo-advisors, by optimising product features and improving targeted sales and marketing (Table 3.2). GenAI can also help brokerage firms and other investment advisors efficiently tailor their recommendations at the individual level, delivered in a human-like and conversational manner, through improved customer segmentation.
Table 3.1. Select types of AI applications by Asian financial services firms
| Name | Segment | Service | Description |
|---|---|---|---|
| TROVATA | FinTech | Treasury tool | Generative AI finance and treasury tool |
| Kakaobank | FinTech | R&D center | Explains the basis for AI-generated results and the decision-making process from the user's perspective |
| Shinhan Bank | Commercial Bank | Financial assistant | AI recognition of customer answers, real-time AI consultation analysis, tablet handwriting verification, and full-text subtitle implementation |
| Kookmin Bank | Commercial Bank | Financial assistant | AI model automatically generates an overall financial status, analysis and system judgment results for the staff in charge of corporate clients |
| Hana Bank | Commercial Bank | Financial advisor assistant | AI chatbot and callbot services for banking and cards based on NLP engines, enabling AI to quickly determine customer requests and suggest ways to respond directly or process them on its own |
| Nonghyup Bank | Commercial Bank | Sales and marketing, information management | Operates 12 different AI channels, including customer service, consultation support, quality control, and big data analysis |
| MUFG Bank | Commercial Bank | Financial assistant | Uses AI chatbots to respond to customer inquiries, increasing productivity and customer satisfaction |
| SMBC Bank | Commercial Bank | Financial assistant | Uses a chatbot as a personal teller service where customers can make inquiries via a messaging-style interface |
| MIZUHO Bank | Commercial Bank | Financial assistant | Uses Fujitsu's generative AI technology to streamline the maintenance and development of its systems |
| Daiwa Securities | Corporate and Investment Banking | Financial assistant, information management | Provides employees with free access to ChatGPT to identify financial products, collect information and draft documents |
| VietABank | Commercial Bank | Financial assistant, information management | Uses AI for foreign currency transactions, personal credit, and digital banking, with transaction monitoring through VPDirect to detect fraud and risk |
| Vietcombank | Commercial Bank | Financial assistant, information management | Collaborated with FPT Smart Cloud Company to develop VCB Digibot, a customer care chatbot platform |
Note: Non-exhaustive and based on reported information by financial market participants.
Source: OECD based on web research.
However, the greatest short-term impact of AI in finance could come in the areas of financial analysis assistance and communication, especially in light of the proliferation of platformisation and embedded finance. AI tools' ability to make predictions and produce information is critical for product support. Recommendation engines, a class of ML techniques, can predict user preferences, especially when bolstered by methods like content-based filtering (Kotu and Deshpande, 2019[20]). AI's ability to generate content complements this by tailoring sales strategies and marketing campaigns to individual customers, thereby potentially promoting financial inclusion if the customisation aims at that objective.
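For illustration, a minimal content-based filtering sketch follows; the products and feature columns are hypothetical:

```python
# Minimal sketch: content-based filtering for financial product recommendations.
# Products are described by feature vectors; recommend the items most similar
# to those the customer already holds.
import numpy as np

products = ["index fund", "gov bond fund", "tech ETF", "money market", "gold ETF"]
# Hypothetical feature columns: [equity exposure, duration risk, liquidity]
features = np.array([
    [0.9, 0.1, 0.8],
    [0.0, 0.8, 0.7],
    [1.0, 0.0, 0.9],
    [0.0, 0.1, 1.0],
    [0.3, 0.0, 0.8],
])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

held = [0]                               # customer already holds the index fund
profile = features[held].mean(axis=0)    # customer 'taste' vector

scores = [(cosine(profile, f), name)
          for i, (f, name) in enumerate(zip(features, products)) if i not in held]
for score, name in sorted(scores, reverse=True):
    print(f"{name}: similarity {score:.2f}")
```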
Coding represents another highly impactful domain for generative AI, which can support software development for a wide array of financial services and products. It can serve as a dedicated coding assistant: generating new code, troubleshooting scripts, offering solutions to coding errors, and testing code. Despite its significant potential and relatively low associated risks, this use case remains less explored than others. Similar considerations apply to the use case of translation, supporting customisation and communication around financial services and products. Relatedly, AI can produce large-scale synthetic data customised for specific market scenarios. Synthetic data is artificial data created from an original dataset and a model trained to mimic its characteristics and structure. It offers potential advantages in terms of privacy, cost and fairness (EDPS, 2021[21]). In the financial sector, the most pertinent use case is generating simulated financial market data for scenario analysis and creating datasets to test, validate and calibrate AI-based models.
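For illustration, a minimal sketch of synthetic data generation follows, fitting a simple generative model (here a Gaussian mixture) to an 'original' dataset and sampling records that mimic its distribution; the columns and parameters are illustrative:

```python
# Minimal sketch: fit a generative model to an original dataset and sample
# synthetic records that mimic its distribution.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 'original' data: [daily return, log traded volume, bid-ask spread]
original = np.column_stack([
    rng.normal(0.0005, 0.01, 5000),
    rng.normal(12, 1.5, 5000),
    rng.gamma(2.0, 0.5, 5000),
])

gm = GaussianMixture(n_components=5, random_state=0).fit(original)
synthetic, _ = gm.sample(5000)  # synthetic dataset with the same shape

# Sanity check: compare first and second moments of original vs synthetic data.
print("means:", original.mean(axis=0).round(4), synthetic.mean(axis=0).round(4))
print("stds: ", original.std(axis=0).round(4), synthetic.std(axis=0).round(4))
```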
Finally, AI can support sustainable financing and ESG investing, particularly through NLP for real-time ESG assessments based on firms’ communications, such as corporate social responsibility reports (ESMA, 2023[22]). In investment strategies, AI tools are mainly deployed to process unstructured and complex ESG-related data that typically requires more sophisticated analysis (Papenbrock, GmbH and Ashley, 2022[23]). As ESG continues to gain prominence, asset managers are also advocating for ethical AI use by the companies they invest in. For example, the world’s largest sovereign wealth fund, located in Norway, is introducing AI use standards for its portfolio companies to align with its responsible investment framework and ESG commitments (FT, 2023[24]).
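For illustration, a minimal sketch of a lexicon-based ESG scan of firm communications follows; real systems rely on far richer NLP (e.g. transformer-based classifiers), and the lexicon and sample text here are purely illustrative:

```python
# Minimal sketch: a lexicon-based scan of firm communications for ESG topics.
import re
from collections import Counter

ESG_LEXICON = {
    "environmental": {"emissions", "carbon", "renewable", "climate", "waste"},
    "social": {"diversity", "community", "safety", "labour", "inclusion"},
    "governance": {"board", "audit", "compliance", "transparency", "ethics"},
}

def esg_scores(text: str) -> dict:
    """Count ESG-related terms per pillar, normalised by document length."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    return {pillar: sum(counts[w] for w in words) / n
            for pillar, words in ESG_LEXICON.items()}

report = ("The board strengthened audit and compliance processes this year, "
          "while cutting carbon emissions and expanding renewable sourcing.")
print(esg_scores(report))
```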
Table 3.2. Select AI companies offering AI and generative AI applications for finance in Asia
| Name | Service | Description |
|---|---|---|
| Active.Ai | Customer support | Uses AI to provide conversational finance and banking services and helps financial companies integrate virtual intelligence assistants into their services |
| Boltzbit | Synthetic data generation and analysis | Offers database linking, portfolio optimisation and enhanced prospect profiling through the generation of synthetic financial data |
| KryptoGO | Financial analysis | Fast identity verification, risk assessment, blockchain address analysis, and periodic reviews to help businesses remain compliant and secure |
| QRAFT | Synthetic data generation and analysis | Provides access to translated and summarised overseas disclosures through its AI-driven investment solutions |
| INNOFIN | Financial analysis | Collects, refines and pre-processes scattered financial data with AI and big data technologies |
| ALCHEMI LAB | Synthetic data generation and analysis | AI-guided trading solution that visually displays the risks associated with each trade based on asset allocation, empowering traders with insight |
| AI ZEN | Financial Advisory | AI-based banking services giving financial institutions easy access to data and supporting automated financial decision-making |
| Syfe | Financial Advisory | Robo-advisor providing access to diversified, institutional-grade funds and optimising the portfolio's equity component |
| bambu | Financial Advisory | SaaS-based robo-advisor with full transactional capabilities, customisable to a firm's products, portfolios and branding |
| KRISTAL | Financial Advisory | Builds customised financial plans tailored to investment needs and manages funds across asset classes, investment styles, and geographies |
| AUTOWEALTH | Financial Advisory | Institutional-grade robo-advisory for retail users, automating investment plans by offering professionally managed portfolios tailored to investment needs |
| WEINVEST | Wealth Management | Quantitative strategies augmented with AI/ML capabilities for digital wealth and asset management |
| StashAway | Wealth Management | Automated investing managed by experts or via customised portfolios, with transparent fees and unlimited transfers and withdrawals |
| WINKSTONE | Wealth management | Applies AI/ML-based models to public and actual transaction data, beyond the credit data used by existing financial institutions |
| ADVANCE.AI | Direct Lending / Credit Scoring | Manages risk across the industry by preventing fraud and automating workflows to reduce costs |
| credolab | Direct Lending / Credit Scoring | Scores risk, detects fraud and improves marketing decisions with advanced behavioural analytics |
| LenddoEFL | Direct Lending / Credit Scoring | Offers alternative credit scores based on the consumer's digital footprint, from social media posts to geotagged photos, and behavioural data derived from psychometric tests |
| TurnKey Lender | Direct Lending / Credit Scoring | Provides a B2B AI-powered lending automation platform and decision management solutions and services |
| VALIDUS | Direct Lending / Credit Scoring | Uses supervised and unsupervised machine learning to predict potentially fraudulent and anomalous transactions and protect customer databases in providing SME working capital loans |
| CrediLinq.Ai | Direct Lending / Credit Scoring | Disrupting credit underwriting for businesses using embedded finance and Credit-as-a-Service |
| funding societies | Direct Lending / Credit Scoring | Pairs AI processes with reliable funding options to allow businesses to focus on growth |
| aspire SYSTEMS | Direct Lending / Credit Scoring | Assesses the effectiveness of different service approaches and optimises resource allocation by simulating real-world scenarios to identify potential challenges and opportunities, leading to more informed and robust service strategies |
| SILOT | Direct Lending / Credit Scoring | AI platform for intelligent financial decisions, offering a banking software suite and merchant banking solutions |
| cynopsis.co | Regulatory and Compliance | Offers RegTech solutions designed to automate KYC/AML processes |
| Handshakes | Regulatory and Compliance | Entity search and data analytics solutions providing insights to support due diligence |
| SILENT EIGHT | Regulatory and Compliance | Leverages AI to create custom compliance models for leading financial institutions to combat money laundering and terrorist financing |
| SHIELD | Anti-Fraud | Provides online fraud management solutions enabling enterprises to manage risk from fraudulent payments and accounts |
| URBANZOOM | Quantitative & Asset Management | AI-enabled research tool for homeowners, buyers, sellers, landlords and tenants |
| value 3 | Quantitative & Asset Management | B2B FinTech offering a capital markets AI platform for independent, predictive, and automated credit ratings, research and analytics |
Note: Non-exhaustive list.
Source: OECD compilation based on public sources.
3.5. Risks and challenges of AI applications in finance
The use of AI tools in finance has the potential to amplify risks identified in the use of more ‘traditional’ AI mechanisms in financial markets, while also giving rise to novel risks (e.g. related to the authenticity of LLM outputs) (OECD, 2023[7]). This section identifies such risks, focusing on those most pertinent to the idiosyncrasies of ASEAN economies.
3.5.1. Lack of explainability
Explainability can be described as the capacity to understand or clarify how AI-based models arrive at decisions; its absence can increase risks and incompatibilities for financial applications. While such risks already existed for ML models and other AI techniques, they are significantly amplified in advanced AI models. Recent advances in generative forms of AI demonstrate the ability to generate highly complex, non-linear, and multidimensional outputs, which, while providing potential benefits, makes it harder for humans to understand or interpret the decision-making processes involved. This is made more difficult by the dynamic nature of AI models, which adapt to feedback in an autonomous manner9.
The significant lack of explainability of AI decisions makes it harder to mitigate the risks associated with their use. Limited interpretability of AI models makes it harder to identify instances where inappropriate or unsuitable data is being used for AI-based applications in finance. This magnifies the risks of bias and discrimination in the provision of financial services, which is particularly pertinent in countries with ethnic minority groups, as is the case in some ASEAN countries (UN, 2012[25]). It also creates challenges when adjusting investment or trading strategies, due to the non-linear nature of the models or the lack of clarity around the parameters that influenced a model's outcome. Overall, the lack of explainability of AI-based models can lead to low levels of trust in AI-assisted financial service provision among customers and, in particular, market participants, limiting its potential beneficial impact.
3.5.2. Risk of bias and discrimination
The risk of bias and discrimination in algorithmic outcomes has been well known since machine learning first began being used in finance, given the quantity of data required to train AI-based models. For example, if data containing information on protected categories, such as race or gender, is used as input for an AI-based model, it can lead to biased outputs. Such results are not necessarily intentional: algorithms may analyse seemingly neutral data points yet use them as proxies for protected characteristics, or infer these from the datasets. This can lead to biased decisions that may circumvent existing laws against discrimination. Bias can also be intentional, when the datasets used to train a model are manipulated to deliberately exclude certain groups of consumers.
A pertinent example of risk of bias and discrimination that could be relevant for ASEAN countries lies in credit allocation and discriminatory lending practices when creditworthiness is assessed using AI-based models and alternative data (OECD, 2021[10]). When such models are exclusively used for credit allocation decisions, this can risk disparate impact in credit outcomes, i.e., different outcomes for different groups of people, and can make it more challenging to identify instances of discrimination due to the machine’s lack of transparency and the limited explainability of AI models, exacerbated in case of generative forms of AI. Such lack of explainability also makes it impossible to justify the outcomes to declined prospective borrowers, a legal requirement in certain jurisdictions. Consumers are also limited in their ability to identify and contest unfair credit decisions. Even when the decision is fair, it is difficult for prospective borrowers to understand how their credit outcomes can be improved in a future request for credit.
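For illustration, a minimal sketch of one common fairness diagnostic follows, the disparate impact ratio; the 'four-fifths' threshold used for screening is a convention rather than a legal standard in every jurisdiction, and the data is hypothetical:

```python
# Minimal sketch: disparate impact ratio for credit approvals across groups.
import numpy as np

def disparate_impact(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = {g: approved[group == g].mean() for g in np.unique(group)}
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions (1 = approved) for two demographic groups.
approved = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1])
group    = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

ratio = disparate_impact(approved, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common 'four-fifths' screening threshold
    print("potential disparate impact; review model features and training data")
```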
Advanced AI models can be trained on any data source available online, intensifying the risk of discrimination as the model can learn from already biased data, such as data that includes hate or toxic speech. Furthermore, imbalanced datasets, where some data is underrepresented or excluded while other data dominates, can negatively impact a model's accuracy and thereby distort results, as documented by the Gender Shades project for facial recognition (Buolamwini, 2018[26]). Since advanced AI models can learn from user feedback, including through user prompts, this risk is accentuated, as model outputs could come to reflect prejudices demonstrated by individual users after the model's training.
3.5.3. Data-related risks
Data plays a critical role in both financial systems and AI, and quality data is central to the quality of the output of any AI model. This is even more the case for advanced AI models, such as those with generative capabilities, given the massive amount of data required for their training, their dynamic self-learning capabilities, and the feedback loops with user input (OECD, 2023[7]). The quality of outputs is also affected by the representativeness of the data, which must provide a comprehensive and balanced representation of the population of interest in order to minimise the risk of bias or discrimination and promote the accuracy of model outputs.
Risks to data privacy and confidentiality would increase with the possible integration of plug-ins in private AI models, allowing for access to a wide array of content. Accordingly, this can increase the volume of data flowing into AI systems, which will further amplify the risk of data breaches by making it more challenging to protect such vast swathes of information. User inputs (e.g. prompts) can also contain private or proprietary information that would heighten the risks involved with data leaks. While specificity of user inputs can improve the quality of output, this may come at the cost of potential data privacy breaches.
In addition to quality and privacy, the authenticity of data and intellectual property risks are also significant concerns for AI models built on large amounts of unstructured, public data. Given the vastness and diversity of this data, training datasets may contain information protected by intellectual property rights, potentially without proper authorisation or copyright permissions. Consequently, there is inherent doubt about the authenticity of model outputs, due to uncertainty around the origin and permission status of the data used for training. Data provenance (the origin and complete history of data) and data location (the physical location of data) are also important considerations. AI-model-related data management and data sharing frameworks, which allow third parties to access customer data, must consider the implications of data provenance and location when exploring challenges around intellectual property and data ownership. Financial market participants should also consider who owns the data used to train their private AI models, which involves examining the intellectual property rights over the models used and their outputs.
3.5.4. Cyber-security risks
Similar to other digitally enabled financial products, the use of AI techniques exposes markets and their participants to increased cyber-security risks. AI models exacerbate such risks as they could be used by bad actors to tailor individualised fraud attacks on a large scale and with fewer resources required. For example, AI tools could be used for social engineering, email phishing, and attacks that compromise access to firms’ systems, emails, databases, and technology services (Federal Reserve Board, 2023[27]). Integrating external models, such as third-party software or open-source systems, amplifies the risk of cyber-security breaches by introducing vulnerabilities to a firm’s security infrastructure. Such vulnerabilities can stem from the inherent risks associated with using externally sourced software, and such risks are further exacerbated when dealing with open-source systems due to their broader accessibility and potential for security gaps to be identified by bad actors.
The misuse of AI techniques can easily cause financial market disruption, in some cases making it difficult for market participants to understand whether the information they receive is true or fabricated. For instance, deepfake pictures or other content generated by AI may be used to manipulate the market: a fake viral image of an explosion at the Pentagon on 22 May 2023, possibly generated by AI, induced fluctuations in the US stock market (NPR, 2023[28]). Another example in the Asian region involved a false essay entitled “Warning Article on Major Risks in iFLYTEK”10 that was widely circulated in the market and eventually confirmed to have been written by generative AI.
Cyber-security risks could also include state-sponsored cyber-attacks leveraging advanced AI tools, such as generative AI, to disrupt financial markets by disseminating sophisticated disinformation. State-sponsored cyber-attacks are linked to or sponsored by states and aim at financial profit and/or geopolitical goals (e.g. hacks by the North Korean-affiliated Lazarus Group).11 Hackers in these cases are trained as part of national projects, with systematic and sophisticated attack methods. AI could magnify the risk of financial market manipulation by state-sponsored hackers or other malicious actors, given the significant capabilities of such tools for the massive manipulation of markets and their participants. For example, deepfakes (e.g. voice spoofing or fake images generated by AI) could be used to spread disinformation that is difficult to detect and identify as false and misleading (e.g. rumours or disinformation that could cause market instability or panic). Furthermore, the development of quantum computing could further heighten cyber-security risks, including those with geopolitical motives (The Institute of World Politics, 2019[29]; NATO, 2022[30]).
Bad actors can therefore utilise AI to conduct market manipulation at a large scale. This could involve dissemination of false information about stocks and other investments or provision of deceptive advice to potential investors and other financial consumers. Regular, real-time input of web information, such as social media data, into AI-driven financial models can increase such risk of market manipulation.
To address the various risks highlighted above, large financial institutions currently report using private, restricted versions of AI models that operate offline within the firm's firewalls or private cloud. This setup promotes greater security over the operation of the AI-based application, thereby allowing financial institutions to better protect client data and proprietary information. It also allows them to better oversee and ensure the compliance of AI use with regulatory standards. A future scenario in which plug-ins enable the input of real-time internet data into these proprietary models may see an increase in market manipulation risk, as it could enable bad actors to spread rumours through social media and thereby impact financial markets.
3.5.5. Model robustness and resilience, reliability of outputs and risk of market manipulation
According to the OECD AI Principles, it is essential that AI systems consistently function in a robust, secure and safe way while continuously managing related risks (OECD, 2019[31]). If AI-driven models lack reliability and accuracy, there is a heightened risk of poor outcomes, particularly for financial applications such as investment advice. Models that lack robustness and resilience may not function as intended, posing potential harm in unforeseen scenarios or environments; in essence, such models cannot handle unexpected events or changes effectively, impacting end users negatively (NIST, 2023[32]). Concerns regarding data quality, discussed above, as well as model drift and overfitting, pose risks to the accuracy and reliability of machine learning models used in finance (OECD, 2021[10]). When unexpected events cause disruptions in the data used for model training, for example, the resulting model drift can negatively impact the model's predictive capability, especially during market turbulence or periods of stress.
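For illustration, a minimal sketch of drift monitoring follows, using the population stability index (PSI), a common diagnostic for comparing a model input's distribution at training time against production; the thresholds cited are rules of thumb:

```python
# Minimal sketch: detect data/model drift by comparing the distribution of a
# model input (or score) at training time vs. in production, using PSI.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a new sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                  # cover the full real line
    e = np.histogram(expected, edges)[0] / len(expected)   # baseline bin shares
    a = np.histogram(actual, edges)[0] / len(actual)       # production bin shares
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(7)
training_scores = rng.normal(0.0, 1.0, 10_000)   # distribution at model build
stressed_scores = rng.normal(0.8, 1.4, 10_000)   # shifted market conditions

print(f"PSI = {psi(training_scores, stressed_scores):.3f}")
# Rule of thumb: <0.10 stable, 0.10-0.25 moderate shift, >0.25 significant drift.
```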
Lack of resilience of AI models and their potentially limited reliability can impact trust among retail investors and financial consumers, which is even more concerning in economies where a significant part of the population is unbanked, as is the case in several ASEAN member states. In advanced AI models, such as generative AI, user interactions and the feedback loops used for self-learning can reduce a model's accuracy: recent empirical analyses demonstrate that the behaviour of the same LLM can change significantly over a short time, thereby requiring ongoing monitoring of LLMs (Chen, Zaharia and Zou, 2023[33]). It is challenging to discern whether changes in a model's accuracy stem from model updates or from interactions with users, where poor-quality inputs may affect the model's autonomous learning process.

AI also introduces risks related to the quality and reliability of model outputs, potentially leading to ‘hallucinations’12 or other kinds of deception or misinformation.13 False information or advice provided by AI-driven financial models can damage the credibility, among financial consumers, of the financial market practitioners responsible for the service provision. AI models have the potential for deception that can be either unintentional, such as when AI generates content with no real-world basis, or intentional, such as in fraudulent use cases like identity theft. Such deception can be subtle, like encouraging financial advisors to use opaque methods for discretionary pricing based on client attributes such as purchasing power. Differentiating between accurate and inaccurate or deceptive information is crucial in mitigating such intentional or unintentional risks. The potentially limited awareness of the limitations of AI models by both users and recipients of such financial services can exacerbate concerns about the trustworthiness of the models and the services in question.
3.5.6. Governance-related risks, accountability and transparency
Financial institutions that use AI-based models adhere to their established model governance frameworks, model risk management and oversight arrangements. This involves defining clear lines of responsibility for the development and supervision of AI-based systems across their entire lifecycle, from creation to implementation, and assignment of accountability for any negative outcomes that result from the model’s operation. However, accountability hinges on transparency (NIST, 2023[32]), which can only be advanced in AI models by disclosing a comprehensive amount of information about the model and its data. This includes information on data sources, copyrighted data, compute- and performance-related information, model limitations, foreseeable risks and steps to mitigate risks (such as evaluation) and the environmental impact of these models.
The environmental aspect of advanced AI model usage is particularly important for financial market participants who aim to harmonise AI applications with ESG practices they may follow. Achieving high levels of transparency for AI models might face challenges based on their specific characteristics. For instance, disclosing copyright status of training data sourced from unstructured internet information may prove difficult (Bommasani R. et al., 2023[34]). Similar to the case of DLTs (OECD, 2023[6]), accurately measuring energy usage and emissions may prove challenging. Furthermore, given their influence on downstream use, it may prove difficult to establish accountability for a model’s downstream14 applications.
A lack of awareness of the associated risks might heighten the risk profile of both AI tools and their end users. As the use of AI solutions becomes more widespread, AI-driven tools and applications are likely to proliferate in the financial industry. Non-qualified practitioners may thus unknowingly begin to use these tools, and governance frameworks for financial market practitioners may therefore need to consider human capacity requirements (e.g. awareness and skills).
Governance issues are amplified in the case of outsourced AI models and third-party provision of AI-related services and infrastructure (such as cloud providers), which is particularly important for smaller financial institutions active in ASEAN member states given possible limits to their in-house capacity to develop and maintain large models. Governance hurdles may be associated with the assignment of accountability for adverse outcomes to the third parties involved in model creation and training. Questions about intellectual property also emerge, as financial providers that purchase “off-the-shelf” AI models may not necessarily own the intellectual property rights, while simultaneously feeding valuable proprietary data into these models, which the third-party service provider can access. The distinction between the roles of model provider and model deployer may also need to be considered for matters related to oversight and enforcement.
3.5.7. Systemic risks: Herding and volatility, interconnectedness, concentration and competition
The use of AI-based models in finance, including GenAI, could pose potential systemic risks with regard to one-way markets, market liquidity and volatility, interconnectedness and market concentration (OECD, 2021[10]). The widespread use of the same AI model among numerous finance practitioners may induce herding behaviour and one-way markets, affecting liquidity and system stability, especially during stressful periods (OECD, 2021[10]). For example, AI in trading could potentially exacerbate market volatility by initiating large and simultaneous sales or purchases, thereby introducing new vulnerabilities (FSB, 2017[35]). When trading strategies converge, they risk creating self-reinforcing feedback loops that lead to significant price shifts and pro-cyclicality. In addition, investor herding behaviour can cause liquidity issues and flash crashes during times of stress, as seen recently in algorithmic high-frequency trading.15 Such convergence also heightens cyber-attack risks, allowing bad actors to influence agents that act similarly. These risks are present in all algorithmic trading and are accentuated in AI models that autonomously learn and adapt, notably unsupervised learning-based AI models.
The use of AI in financial market activity, such as trading, may connect financial markets and institutions to each other in unforeseen ways, including by creating interconnections between previously unrelated variables (FSB, 2017[35]). It can also result in stronger network effects, potentially causing unexpected changes in the magnitude and direction of market movements. This may be further amplified by the advent of AI-as-a-Service providers, especially those that provide bespoke models (Gensler and Bailey, 2020[36]).
AI models amplify concerns about market dominance by a few model providers, potentially leading to market concentration and a range of systemic implications (OECD, 2023[7]). These risks are compounded by the concentration of data (Gensler and Bailey, 2020[36]) and could also be associated with the infrastructure providers enabling the use of AI models (e.g. cloud services). Operational failures by dominant players can have systemic effects for markets, depending on the level of dependence of financial market participants on such providers and models. With regard to outsourcing, the reliance on third-party model providers adds an extra layer of vulnerability on top of the existing infrastructure dependence on these providers, such as cloud services.
Related to these systemic implications for financial stability, AI models can also raise competition-related challenges. Indicatively, refusal of access to models or data, or barriers to switching erected by dominant providers, could have important implications for financial market participants in a market with distorted competition conditions. Since AI models require significant resources and computing power to be developed and trained, there is a risk of market concentration amongst a small group of players, especially those with a first-mover advantage or with the scale of resources needed to design, train and maintain models. Financial institutions that deploy models from dominant third parties may bear the burden of reduced competition, which can also affect their customers (e.g. through higher costs).
The current stage of AI development also poses challenges for countries that lack the economic resources to develop, train and maintain their own models, as could be the case in some ASEAN member states. With regard to users, large-scale AI models may primarily benefit those equipped to invest in such technologies, such as larger financial market participants. Data concentration is another risk related to the dominance of incumbents with cheaper or easier access to datasets (e.g. social platforms). Access to data is crucial for the success of AI models such as LLMs, and data concentration by BigTech or other platforms could exacerbate the risk of dominance by a few large companies with excess power and systemic relevance. Furthermore, AI models could be exploited to bolster monopolies or oligopolies and stifle competition, thereby undermining market dynamics; for instance, they could be used to steer investor preferences depending on the role of the firm deploying them.
3.5.8. Other risks: employment and skills, environmental impact
While the current and future impact of AI on the labour market remains uncertain, there is little evidence of significant negative employment effects due to AI to date, according to OECD analysis (OECD, 2023[37]). This could be due to low AI adoption rates and firms opting for voluntary workforce adjustments, possibly delaying the materialisation of any negative employment effects from AI (OECD, 2023[37]).
In the long term, new employment challenges and opportunities could arise from a wider adoption of AI tools by financial institutions, with implications also in terms of capacity and skills development. Widespread use of such tools in finance may help free up resources for higher-value tasks while also posing risks to the job market. AI in particular has the potential to automate a wide array of back-office and middle-office functions in finance (Section 3.4). AI’s impact on employment is also anticipated to pressure industries to consider how skillsets may need to evolve (OECD, 2023[37]). Insufficient skills for using AI can pose risks from both an industry and a regulatory perspective, potentially leading to employment issues for financial institutions. Using AI in finance will demand skillsets currently possessed by only a small segment of financial practitioners, and inadequate capacity or awareness of the risks associated with models, especially easily accessible AI models, can have adverse effects for financial market participants and their clients.
The increasing computational needs of AI systems may also raise sustainability concerns (OECD, 2023[38]). GenAI and LLMs, for example, necessitate extensive computational resources, consuming significant energy in development, training and inference, with a potential environmental impact that requires deeper examination. This also pertains to data centres, considering their pivotal role in model training. As with other innovative financial technologies, there is not enough reliable information on AI’s impact on the environment to inform policy discussions around its environmental risks (OECD, 2023[37]; 2022[39]).
3.6. Policy considerations on AI in finance
The use of AI in finance has the potential to deliver important benefits to financial consumers and market participants in ASEAN member states and beyond, by promoting efficiencies and enhancing productivity, but it comes with important risks and challenges. Rapid developments in AI and its increasing relevance to financial markets call for policy discussion and potential action to ensure the safe and responsible use of such tools in finance. Financial regulators and supervisors must ensure that the use of AI in finance remains consistent with the policy objectives of securing financial stability, protecting financial consumers, and promoting market integrity and fair competition.
The OECD Principles on AI, adopted in 2019, constitute the first international standard agreed by governments for the responsible stewardship of trustworthy AI and remain highly relevant for the application of AI tools, including GenAI, in finance (OECD, 2019[31]). At the G20 level, the financial stability implications of artificial intelligence and machine learning in financial services were discussed by the Financial Stability Board in 2017 (FSB, 2017[35]), while the G7 analysed in 2020 the cyber risks posed by artificial intelligence in the financial sector. Most recently, the G7 Leaders welcomed the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (G7, 2023[1]; G7, 2023[2]; G7, 2023[3]).
A number of national or regional initiatives have also been launched with the aim of providing guidance or promoting guard rails for the safe and trustworthy development of AI across sectors globally, including in ASEAN member states.
3.6.1. Policy developments on AI in finance in ASEAN countries
National AI strategies have been developed in seven ASEAN member states, namely Indonesia, Malaysia, Myanmar, Singapore, Thailand, the Philippines and Viet Nam. Furthermore, at the ASEAN level, discussions are currently taking place concerning the preparation of a Guide to AI Ethics and Governance, which is anticipated to be released in 2024 (Reuters, 2023[40]). Although such a framework could be influential in providing guidance to national legislators, its application would remain voluntary. Unlike the EU AI Act (European Commission, 2021[41]), the ASEAN document will not include a strict risk categorisation and will take a more business-oriented approach, allowing for flexibility related to cultural differences across member countries. In particular, when it comes to national AI strategies:
The Indonesian AI strategy was launched in 2020, with an end date of 2045. Its end goal is the transformation of the country in line with an innovation-based approach, including by encouraging AI research and improving data infrastructure (Nasional Kecerdasan Artifisial Indonesia, 2020[42]). In addition, the AI Ethical Guidelines incorporate setting up a data ethics board and proposing AI innovation regulation, which is expected in the near future (Arkyasa, 2023[43]). Certain aspects of Fintech, digital banking and capital markets fall under AI regulation in Indonesia.
Malaysia has developed its National AI Roadmap for the years 2021-2025, which includes establishing AI governance as well as advancing R&D, digital infrastructure and a national AI innovation system (Ministry of Science, 2021[44]). Focus is also placed on ethics by incorporating seven principles of responsible AI into the roadmap. Furthermore, the Responsible AI Framework Guidelines, published in 2023, provide further guidance on the ethical dimension of AI use by Malaysian organisations more broadly (Ariffin et al., 2023[45]).
Singapore’s National AI Strategy dates back to 2019 and aims to establish Singapore as an AI leader by 2030. The framework focuses on specific national projects, such as improving the efficiency of municipal services and of border clearance operations (Smart Nation Singapore, 2019[46]). Singapore also developed a Model AI Governance Framework in 2020, which incorporates the principles of, inter alia, fairness, accountability and explainability into AI governance models across sectors of activity (Info-communications Media Development Authority, 2020[47]).
Thailand’s AI Strategy and Action Plan was launched in 2022 and has 2027 as its end date. The strategy covers regulatory readiness, national infrastructure development, education, innovation development and the promotion of AI use (AI Thailand, 2022[48]). Currently, the second phase of the Action Plan is being implemented, with the focus placed on expanding research and development of AI applications to enhance the competitiveness of industries (AI Thailand, 2022[48]). The Royal Decree on AI System Service Business of 2022 introduces a risk-based approach to AI, differentiating certain AI systems as high risk and including some prohibitions (His Majesty King Maha Vajiralongkorn Phra Vajiraklaochaoyuhua, 2022[49]). At the same time, the Draft Act on the Promotion and Support of AI Innovations in Thailand of 2023 seeks to enhance innovation within the AI ecosystem by granting businesses access to an AI sandbox, an AI clinic and an AI training database (The Electronic Transactions Development Agency, 2023[50]).
The National AI Strategy Roadmap of the Philippines was issued in 2021. The overall objective of the roadmap is to ensure AI readiness across the dimensions of infrastructure development, research and development, workforce development and regulation (The Department of Trade and Industry, 2021[51]).
Viet Nam’s National Strategy on R&D and Application of AI was launched in 2021 and has an end date of 2030 (Prime Minister of Vietnam, 2021[52]). The framework sets out the strategic directions to be taken, which include building AI-related regulations and computing infrastructure, and promoting AI application and international cooperation. The draft National Standard on AI and Big Data, released in 2023, focuses on AI quality standards in the realms of safety, privacy and ethics, as well as on risk assessments and addressing unintentional biases (The Ministry of Information and Communication, 2023[53]).
Cambodia and Lao PDR have not developed national AI strategies. However, the Cambodian Ministry of Industry, Science, Technology & Innovation issued its first AI-specific report in May 2023 (The Ministry of Industry, 2023[54]). The AI Landscape in Cambodia report discusses the importance of promptly developing national AI regulation and guidelines that would harmonise the existing laws. Such policies are to be in line with the principles of human-centricity and sustainable development, while taking into consideration issues of ethics and privacy.
Almost all ASEAN countries have provided some form of guidance around the use of AI in finance, in most cases as part of their broader policy action on AI across sectors.16 Specific policies related to the use of AI in the field of finance can be found in Indonesia, Malaysia, Myanmar, Singapore, Thailand, Philippines, Viet Nam and Cambodia. In particular:
The Indonesian AI strategy explicitly identifies finance as one of the key sectors for the long-term development of AI (Nasional Kecerdasan Artifisial Indonesia, 2020[42]). It underlines the importance of the use of financial data for the development of AI applications, and it refers to financial use cases such as credit scoring and financial forecasting. The policy pursues four strategic targets, namely service improvement, cost optimisation, improved products and reliable risk management. The framework comprises stages of exploration of AI in finance, optimisation (implementation of the findings) and transformation (practical support of the finance sector). The framework relevant to financial data may be further amended in the near future, as the upcoming Presidential Regulation implementing the Personal Data Protection Law will regulate the protection of data for artificial intelligence uses (Rochman and Adji, 2023[55]).
Malaysian policies on the digital economy, articulated within the National 4IR Policy and the Digital Economy Blueprint of 2021, list finance as one of the key sectors for the digital transformation of Malaysia (MyDIGITAL Malaysia, 2021[56]). The National 4IR Policy introduces strategies such as the adoption of an anticipatory regulatory approach that allows for the acceleration of innovation, as well as the promotion of uniform data protection standards for the finance industry (Ministry of Science, 2021[57]). Actors within the finance industry are encouraged to ensure that their workforce possesses the skills and knowledge necessary for the digital economy. Furthermore, financial service providers are to adopt an anticipatory regulatory approach that includes both the necessary risk management policies and innovative initiatives (Ministry of Science, 2021[57]). The Digital Economy Blueprint also established concrete strategies aimed at fostering innovation in the sector, notably a Fintech Innovation Accelerator Programme to support local fintech development (MyDIGITAL Malaysia, 2021[56]). On the regulatory side, a special task force was formed in July 2023 to review current laws in the realm of investment and business in Malaysia, inter alia in light of AI developments (New Straits Times, 2023[58]).
In Singapore, the Monetary Authority of Singapore (MAS) published in 2018 broad Principles to promote Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI in Singapore’s financial services sector (MAS, 2018[59]). In 2022, MAS conducted a thematic review of selected financial institutions’ implementation of the Fairness Principles in their use of AI, reviewing policies and governance frameworks against the FEAT Principles, as well as the effectiveness of their implementation in actual AI/ML use cases (MAS, 2022[60]). In 2022, MAS also released five whitepapers setting forth guidelines applicable to financial service providers, aimed at promoting the responsible use of AI (MAS, 2022[61]). The white papers detail assessment methodologies for the FEAT principles and include a comprehensive FEAT checklist for financial institutions to adopt during their AI and data analytics software development lifecycles; an enhanced fairness assessment methodology, which enables financial institutions to define fairness objectives and to identify personal attributes of individuals and any unintentional bias; a new ethics and accountability assessment methodology, which provides a framework to carry out quantifiable measurements of ethical practices, in addition to the qualitative practices currently adopted; and a new transparency assessment methodology, which helps determine the extent of internal and external transparency needed to interpret the predictions of ML models. The white papers were developed on the basis of a public-private collaborative model, promoting risk management and sustainable good governance principles in the use of AI in finance. MAS is currently working on GenAI and plans to publish a risk framework for the use of such models by the financial sector. In 2022, MAS launched Project NovA! – a tool helping financial institutions to predict financial risks relevant to their organisations (Monetary Authority of Singapore, 2023[12]). MAS has also launched Project MindForge – a generative AI risk management framework developed in collaboration with finance industry players (MindForge Consortium, 2023[62]). During the first phase of the Project, completed in November 2023, the main risk areas were identified, inter alia in the areas of accountability, monitoring, transparency and data security. In its next phase, the Project is to include insurance and asset management entities in its scope and to expand the use of GenAI in the areas of compliance with anti-money laundering, sustainability and cyber-security policies.
The AI and ICT Roadmap of the Philippines details the application of AI in the banking and finance sector (Department of Science and Technology, 2020[63]). A central role is given to the Bangko Sentral ng Pilipinas (BSP) to develop strategies in the fields of anti-money laundering and fraud risk mitigation, with the scope of application of such AI solutions then planned to be extended to commercial banks. BSP is also tasked with the development of other innovative policies in the finance sector, such as the creation of an open finance framework, which has been adopted in the Bank’s roadmap for 2021-2024 (Bangko Sentral Ng Pilipinas[64]). Furthermore, there are plans to establish a new Artificial Intelligence Development Authority, which would develop a national AI framework focused on the practical use of new technologies by businesses (Republic of the Philippines House of Representatives, 2023[65]).
Thailand’s AI Strategy and Action Plan details the use of AI in the financial sector (AI Thailand, 2022[48]). It specifies the application of the Strategy and Action Plan in banking (credit checks, risk analysis, customer base expansion), trade (analysis of product offerings, sales boosting) and investment (stock analysis, investment advice and strategies). The Bank of Thailand is reportedly preparing a proposal of guidelines for the use of AI in the financial sector (Suchit, 2023[66]). Financial actors may also anticipate new rules regarding data sharing, especially in relation to AI development, as the Electronic Transactions Development Agency (ETDA) has announced draft changes to personal data protection laws (Mungkarndee and Nantananate, 2023[67]). These amendments are to establish closer collaboration between the regulator and businesses and to foster AI innovation, inter alia via the use of an AI Sandbox.
Viet Nam’s National Strategy on R&D and Application of Artificial Intelligence includes provisions on the promotion of AI in finance (Prime Minister of Vietnam, 2021[52]). The Ministry of Finance is tasked with allocating the necessary funds for the implementation of strategies promoting the development and application of AI in the financial sector. Similarly, the State Bank of Viet Nam is to engage in AI development in the banking field, including the application of AI to loan prediction and analysis, fraud detection and the improvement of customer service. In the insurance sector, the newly enacted Law on Insurance Business encourages the application of AI technologies to improve insurance products and services (National Assembly of Vietnam, 2022[68]).
The AI Landscape in Cambodia report lists the finance industry as one of the key sectors of interest (The Ministry of Industry, 2023[54]). It focuses on possible applications of AI to boost the efficiency of financial transactions and to prevent fraud and money-laundering activities. Further attention is to be paid to the use of AI tools to ensure compliance, enhance customer support, execute smart contracts and strengthen business ecosystems. The report also underlines the importance of data security, especially for financial institutions that process large volumes of personal finance data. The Cambodia Digital Tech Roadmap of 2023 also features AI and ML among the most important technologies for Cambodia’s future development (The Ministry of Industry, 2023[69]). Specific strategies listed for the financial sector refer to supporting the development of start-up programmes and decentralised financial systems.
Among ASEAN member states, policies explicitly targeting generative forms of AI can only be found in Singapore. Singapore formulated a Model AI Governance Framework in 2020, which is to be amended to include GenAI risks (Info-communications Media Development Authority, 2020[47]). The Infocomm Media Development Authority (IMDA) has released a discussion paper on GenAI, which includes suggestions on the incorporation of GenAI within the business ecosystem (Info-comm Media Development Authority, 2023[70]). It proposes a risk-based approach that includes acknowledging the risks related to privacy, disinformation, copyright infringement and embedded biases.17 At the same time, the paper focuses on enhancing trust within the AI ecosystem by addressing immediate rather than future risks. The proposed approach thus remains practical and business-oriented, and leaves space for later developments.
A number of ASEAN member states are planning or pursuing public-private cooperation projects or other initiatives with the AI industry with a view to advancing safe GenAI development. Indicatively:
Indonesia is collaborating with OpenAI to stimulate the local development of ChatGPT (Nur, 2023[71]). In a few other countries (Thailand, the Philippines, Viet Nam), work is in progress on national LLMs capable of using local languages in an efficient and accurate manner.
Although Indonesia has not yet developed concrete generative AI policies, it announced in June 2023 a partnership between its Artificial Intelligence Research and Innovation Collaboration (KORIKA) and OpenAI. The aim of this partnership is the development of an AI system that takes into account Indonesian cultural values. Such collaboration is indicative of the willingness to experiment with and support the use of GenAI within the country (Nur, 2023[71]).
The Malaysian National 4IR Policy lists generative AI as one of the technologies of the future that Malaysia should focus on developing at the national level (Ministry of Science, 2021[57]). The Ministry of Science, Technology & Innovation indicated in August 2023 that ChatGPT specifically could be further used within government services (Yeoh and Fam, 2023[72]). Malaysia is also considering enacting an AI Bill that would impose higher standards of transparency, data security and accountability; this would entail labelling any GenAI-generated content in line with the transparency requirements (digwatch, 2023[73]).
Thailand has not formulated GenAI-specific policies; however, its National Electronics and Computer Technology Center has collaboratively developed the OpenThaiGPT Project – an LLM capable of processing the Thai language with higher efficiency and speed than ChatGPT (Leesa-Nguansk, 2023[74]). The project is based on public data and remains open source. A similar development can be observed in the Philippines, where the Department of Science and Technology expressed its intention in September 2023 to create a local-language-focused ChatGPT (Quismorio, 2023[75]). However, GenAI-specific policies in the Philippines are still at the announcement, rather than implementation, stage. In Viet Nam, an LLM supporting the Vietnamese language was announced by a private enterprise, VinBigdata, part of the Vingroup conglomerate (Phuong, 2023[76]).
Another category of public-private partnerships involves practices of private sector participants (e.g. Google and Microsoft) in the ASEAN region. Partnerships of national government entities with Google Cloud have been announced in Indonesia, Malaysia, Singapore, Thailand, the Philippines and Viet Nam. Google’s partnerships involve granting access to Vertex AI – a GenAI platform for businesses – as well as GenAI-related skilling programmes and Google Cloud services (MAS, 2023[77]). Business-oriented GenAI products are also offered by Microsoft, which has partnered with state-owned Telkomsel in Indonesia, UOB Bank of Singapore and the Vietnamese AI Fintech Trusting Social, and has also announced a strategic cooperation with the Thai government (Tanner, 2023[78]; Viet Nam News, 2023[79]; Sullivan, 2023[80]; Ho Chi Minh, 2023[81]).
3.6.2. Policy considerations and recommendations for the use of AI in finance in ASEAN
The importance of the responsible use of AI in the provision of financial products and services cannot be overstated. Risks that stem from the use of AI tools in finance will need to be identified and mitigated to support and promote responsible and safe AI, without stifling innovation. The use of advanced forms of AI models, such as GenAI, in finance exacerbates some of the ‘generic’ AI-related risks given their enhanced capabilities, while also raising a number of additional novel challenges associated with their specificities (e.g. deepfakes).
Existing guard rails applicable to AI models may need to be clarified and potentially adjusted to effectively address some of the novel challenges of advanced AI tools, if and where needed. Any perceived incompatibilities of existing arrangements with developments in AI may also need to be considered, such as in the case of explainability of AI models.
Policy consideration and potential action could proceed from a contextual and proportionate framework, using a risk-based approach that depends on the criticality of the application and the potential impact on the consumer involved (OECD, 2021[10]). Any guidance or policy will also need to be future-proof to withstand the test of time, given how rapidly AI technology advances.
Policy makers may need to consider reinforcing policies and strengthening defences and guard rails against risks emerging from, or exacerbated by, the use of AI in finance, focusing on a number of overarching areas. In particular:
Strengthen data governance practices by model developers and deployers: Data is critical to the training of AI models and to their usage by financial market participants. Best practices for data management and governance may be considered to ensure data quality, data adequacy as needed, data privacy when financial consumer data is fed into the model, and data authenticity and appropriate source attribution/copyrighting when applicable. This could include increased transparency and reporting about the data used to train the model and any other data introduced into the model, including their location, origin and source attribution for copyrighted data used. Depending on the model, the feasibility of data deletion options or obligations after a certain period of time could also be considered (similar to the ‘right to be forgotten’ of the GDPR). This would need to include any data inputted into the model through prompts or otherwise, and the output of the model itself, keeping in mind its integration of feedback loops for its self‑training.
When private data are being used, consumers should have the right to opt out from the use of their data for the training of AI models. This becomes particularly important if the model can scrape data off the internet, through web browsing capabilities or any other link to the web. The same considerations around data governance apply to databases purchased from third-party providers and to synthetic data generation based on public and private data.
Safeguards should be in place to overcome the risk of bias and discrimination: Firms using such models should ensure that pre-existing fairness frameworks in financial services continue to apply. This could also involve proactive equity assessments of the models, impact assessments of model outputs, their sense-checking against baseline datasets, and other tests to ensure that protected classes cannot be inferred from other attributes in the data (a stylised sketch of such a proxy-inference test is provided after this list). The validation of the relevance of variables used by the model and of the representativeness of the datasets used for training are additional possible tools to reduce sources of potential biases. The latter applies in particular to LLMs, given the potential current under-representation of minority languages in the training of language models (OECD, 2023[6]). Risks diagnosed should be followed by mitigating action, and reporting on all of the above could help strengthen user trust.
Encourage efforts to improve levels of explainability: Limited or outright lack of explainability, as is likely the case in advanced AI models, poses significant risks associated with the use of such models in finance (e.g. inability to adjust strategies in times of market stress). It may even be incompatible with existing laws and regulations, for example the requirement in some jurisdictions to explain the basis for the denial of credit extension to a prospective borrower. Progress made in the explainability of relatively simpler ML models will need to be pursued in generative AI models too, which have even greater complexity and lack of explainability (an illustrative sketch of post-hoc explanation tooling is provided after this list). Improved explainability will be crucial for building trust around the deployment of such tools in finance (and beyond).
Foster transparency and consider disclosure requirements depending on the case: Financial consumers should be informed about the use of AI techniques in the delivery of a product, when these have an impact on the customer outcome, as well as about machine-generated content and any potential interaction with an AI system instead of a human being. Financial consumers should also be informed about any collection and/or processing of their data for the purposes of the model and informed consent could be sought to that end. Customers should be offered the option to engage with a human if they so prefer. Active disclosure by financial market participants deploying such tools could be considered to ensure maximum awareness of the customer.
Disclosure requirements could include clear information, in plain language, about the AI system’s functionalities and performance, including capabilities and limitations, as well as mitigating action taken to address such limitations. A description of the datasets used to train the model, including any copyrights, could help address data governance risks. A description of the results of any internal testing and independent external evaluation of the model, and of any impact assessment made (e.g. for disparity testing), could be considered as part of reporting to users. Manuals could be provided for downstream uses of models. Datapoints on the energy requirements of the model (for its training or use) could also be considered in light of the limited data around its environmental footprint. The governance framework of the model’s development and deployment could also be integrated into reporting (a sketch of a machine-readable ‘model card’ collecting such disclosure items is provided after this list). Transparency and disclosure will be even more critical for advanced forms of AI models as a way to partly compensate for their lack of explainability.
Strengthen model governance and promote accountability mechanisms: Currently applicable frameworks for model governance in finance may need to be enhanced or adjusted to address incremental risks emerging from advances in AI. Solid governance arrangements and clear accountability mechanisms are fundamental for AI models deployed in high-value use cases (e.g. in determining access to credit or investment advice). Parties involved in the development and deployment of such models should be held accountable for their proper functioning (OECD, 2019[31]). Explicit governance frameworks could include clear lines of responsibility and oversight throughout the model lifecycle18 and minimum standards or best practice guidelines to be followed. Documentation and audit trails for oversight and supervision should not be limited to the development process, and model behaviour and outcomes need to be monitored and tested throughout the model’s lifetime.
Governance arrangements may need to include the explicit attribution of accountability to a human irrespective of the level of automation of the model, with a view to also helping build trust in AI-driven systems. In other words, the actor deploying the model would be explicitly accountable for any harm caused by it. Contingency and security planning may also need to be considered to ensure business continuity. This could include the introduction of kill switches or other automatic control mechanisms, and back-up plans, models and processes to ensure business continuity in case the model fails or acts in unexpected ways (OECD, 2021[10]). Additional guard rails could be considered for the accountability of third-party providers of (foundation) models that are being adapted for downstream use cases, or in other cases of outsourcing. Questions around recourse and the legal liability of developers of such models could also be examined.
Promote safety, robustness and resilience of AI models (including for cyber risk) and mitigate risks of deception and market manipulation: Frameworks for appropriate training, retraining and rigorous testing of AI models, and their ongoing rigorous monitoring and validation, could be the most effective ways to improve model resilience, prevent and address drifts, and ensure the model performs as intended (a minimal sketch of such a statistical drift test is provided after this list). Monitoring and validation could include independent reviews and external audits both at development and during deployment, and documentation of each such process could facilitate supervision. Ongoing monitoring is particularly important for AI models that are based on autonomous unsupervised learning, and where false or inaccurate information introduced into the model post-deployment continues to inform the model in future loops (e.g. user prompts). Also, datasets used for training, especially when synthetic, need to be large enough to capture non-linear relationships and tail events in the data in order to cover unprecedented events. Stress testing for such scenarios could be performed.
Testing for dangerous or harmful capabilities of a model before its deployment could be used to understand the ability of the model to act in adversarial ways (e.g. proliferation of misinformation) and to adjust the models’ behaviour to account for the results of such tests. Depending on the capabilities of the model, and the results of such impact assessments prior to deployment, content filtering and other restrictions could be introduced upfront to the model based on safety thresholds (e.g. refusal of harmful requests by design). Alternative options to be considered could include positive permission forms of design (i.e. do not do unless it is permitted). AI-generated output needs to be explicitly disclosed as such in order to limit the risk of deepfakes and promote the truthfulness of the model’s output. In case of large models above a certain level of capabilities that could be considered systemically important, adherence to commonly agreed sets of safety requirements could be envisaged.
Encourage a human-centric approach and place emphasis on human primacy in decision making, particularly for higher-value use cases (e.g. lending): An appropriate degree of human involvement in AI-assisted financial market activity may need to be ensured to minimise the risk of harm to individual customers, depending on the criticality of the use case. End customers need to be informed about the involvement of AI in the provision of their service and could have the right to object to its use, to opt out of AI-assisted products or services, and to limit the AI model’s reach (e.g. for data usage). Customers may need to be given the right to request human intervention or to challenge the outcome of the model and seek redress. In addition to a mandatory human alternative option for the end customer, humans would also need to be ready to act as a safety net in case of model disruption to ensure business continuity, avoiding over-reliance of firms on AI-based systems. Keeping the ‘human in the loop’ can also help build confidence and trust in the use of AI in finance.
Invest in R&D, skills and capacity to keep pace with advances in AI, raise awareness of the perils of AI, and create tools to mitigate some of the associated emerging risks (e.g. hallucinations): Both the public and the private sector will need to invest in research, build skills and raise awareness among financial market participants and policy makers around the risks of advanced AI models such as GenAI and LLMs. R&D investment could provide solutions and tools to address issues of explainability and mitigate risks of AI models (e.g. identify and prevent deceptive outputs). Research is also important to ensure the safety of future scenarios of fully autonomous models (e.g. AGI). Investment in education and skills in the industry could enable effective AI model governance, while also guiding practitioners and consumers towards safer deployment of such models. Policy makers will also need to keep pace with advancements in AI technology in order to be technically able and prepared to oversee such activity in finance and/or intervene as required. Importantly, the upskilling of policy makers will also allow them to benefit from RegTech/SupTech solutions for the effective and efficient supervision of financial market activity more broadly.
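The following stylised sketches illustrate some of the testable elements of the recommendations above. They are simplified illustrations under stated assumptions, not implementations of any framework cited in this chapter. First, the proxy-inference test flagged in the recommendation on bias and discrimination: a firm can measure how accurately a protected attribute can be predicted from the remaining features, since high predictability indicates proxy variables that could leak into model decisions. The sketch assumes scikit-learn and uses synthetic data with hypothetical features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_risk(X: np.ndarray, protected: np.ndarray) -> float:
    """Excess accuracy with which a protected attribute can be predicted from
    the other features; values well above zero indicate proxy variables."""
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, protected,
                          cv=5, scoring="accuracy").mean()
    majority = np.bincount(protected).max() / len(protected)  # naive baseline
    return acc - majority

# Synthetic illustration: the protected class correlates with feature 0,
# so it is partly inferable even if never given to the production model.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
protected = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
print(f"proxy risk (excess accuracy over baseline): {proxy_risk(X, protected):.2f}")
```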
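Second, on explainability: post-hoc attribution tools such as SHAP can decompose an individual prediction into per-feature contributions, which is one possible building block for the credit-denial explanations discussed above. The model, data and feature names below are illustrative assumptions:

```python
import numpy as np
import shap                                   # post-hoc explanation library
from sklearn.ensemble import RandomForestRegressor

# Hypothetical credit-risk data: feature names are illustrative assumptions.
rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "payment_history", "tenure"]
X = rng.normal(size=(500, 4))
risk = X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.3, 500)   # synthetic risk score

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, risk)

# SHAP decomposes one applicant's predicted risk into per-feature contributions,
# one way to support the adverse-action explanations some jurisdictions require.
explainer = shap.TreeExplainer(model)
contrib = explainer.shap_values(X[:1])[0]     # contributions for the first applicant
for name, value in sorted(zip(feature_names, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:16s} {value:+.3f}")
```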
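Third, on transparency and disclosure: the disclosure items discussed above are often collected in a structured, machine-readable ‘model card’. The sketch below shows one possible shape for such a record; all field names and values are hypothetical:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """Illustrative, machine-readable disclosure record for an AI model;
    the fields mirror the disclosure items discussed above."""
    name: str
    intended_use: str
    capabilities: list
    limitations: list
    training_data: dict
    evaluation: dict
    energy_use_kwh_training: float        # environmental datapoint, if measured
    governance_contact: str

card = ModelCard(
    name="retail-credit-scoring-v2",      # hypothetical model
    intended_use="Pre-screening of retail credit applications; human review required.",
    capabilities=["default-probability estimation"],
    limitations=["not validated for SME lending", "performance degrades on thin files"],
    training_data={"sources": ["internal loan book 2015-2022"], "copyrighted": False},
    evaluation={"auc": 0.81, "disparity_test": "passed (2023-11 external audit)"},
    energy_use_kwh_training=420.0,
    governance_contact="model-risk@example.com",
)
print(json.dumps(asdict(card), indent=2))
```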
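Finally, on ongoing monitoring and drift: a common building block is a statistical test comparing live input data against the training distribution, with persistent shifts triggering revalidation or retraining. The sketch below uses a two-sample Kolmogorov-Smirnov test from scipy; the alert threshold and feature names are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alerts(train: np.ndarray, live: np.ndarray,
                 feature_names: list, p_threshold: float = 0.01) -> list:
    """Flag features whose live distribution differs from the training
    distribution (two-sample KS test); the threshold is illustrative."""
    alerts = []
    for i, name in enumerate(feature_names):
        stat, p_value = ks_2samp(train[:, i], live[:, i])
        if p_value < p_threshold:
            alerts.append((name, stat, p_value))
    return alerts

# Hypothetical example: one input feature shifts after deployment.
rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 3))
live = rng.normal(size=(1000, 3))
live[:, 2] += 0.4                      # regime shift in the third feature
for name, stat, p in drift_alerts(train, live, ["rates", "spreads", "volumes"]):
    print(f"drift detected in {name}: KS={stat:.2f}, p={p:.1e}")
```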
References
[9] Ahmed, E. et al. (2017), “The role of big data analytics in Internet of Things”, Computer Networks, Vol. 129, pp. 459-471, https://doi.org/10.1016/J.COMNET.2017.06.013.
[48] AI Thailand (2022), AI Strategy and Action Plan, https://ai.in.th/en/about-ai-thailand/.
[45] Ariffin, A. et al. (2023), “Formulation of AI Governance and Ethics Framework to Support the Implementation of Responsible AI for Malaysia”, Res militaris, https://resmilitaris.net/index.php/resmilitaris/article/view/3826.
[43] Arkyasa, M. (2023), Ministry Communication and Informatics prepares AI ethics guidelines guaranteeing security and rights, Indonesia Business Post, https://indonesiabusinesspost.com/insider/ministry-communication-and-informatics-prepares-ai-ethics-guidelines-guaranteeing-security-and-rights/ (accessed on 4 December 2023).
[64] Bangko Sentral Ng Pilipinas (n.d.), Open Finance PH, https://www.bsp.gov.ph/Pages/InclusiveFinance/Open%20Finance/Open%20Finance.aspx#Pilot.
[18] Blitz, D. et al. (2023), “The Term Structure of Machine Learning Alpha”, SSRN Electronic Journal, https://doi.org/10.2139/SSRN.4474637.
[34] Bommasani R. et al. (2023), Do Foundation Model Providers Comply with the Draft EU AI Act?, https://crfm.stanford.edu/2023/06/15/eu-ai-act.html (accessed on 2 August 2023).
[26] Buolamwini, J. (2018), “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification *”, Proceedings of Machine Learning Research, Vol. 81, pp. 1-15, https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf (accessed on 1 August 2023).
[33] Chen, L., M. Zaharia and J. Zou (2023), “How is ChatGPT’s behavior changing over time?”, https://arxiv.org/abs/2307.09009v3 (accessed on 13 December 2023).
[63] Department of Science and Technology (2020), AI and ICT Roadmap, https://pcieerd.dost.gov.ph/images/pdf/2021/roadmaps/sectoral_roadmaps_division/etdd/Draft-1_AI--ICT-Roadmap-as-24.3.2021.pdf.
[73] digwatch (2023), Malaysia considers enacting law on AI, https://dig.watch/updates/malaysia-considers-enacting-law-on-ai.
[21] EDPS (2021), Synthetic Data | European Data Protection Supervisor, https://edps.europa.eu/press-publications/publications/techsonar/synthetic-data_en (accessed on 3 August 2023).
[22] ESMA (2023), “ESMA TRV Risk Analysis Artificial intelligence in EU securities markets”, https://doi.org/10.2856/851487.
[41] European Commission (2021), Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain union legislative acts, https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence.
[27] Federal Reserve Board (2023), “Cybersecurity and Financial System Resilience Report - Board of Governors of the Federal Reserve System”, http://www.federalreserve.gov/aboutthefed.htm. (accessed on 7 August 2023).
[16] Freyberger, J. et al. (2020), “Dissecting Characteristics Nonparametrically”, Review of Financial Studies, Vol. 33/5, pp. 2326-2377, https://doi.org/10.1093/RFS/HHZ123.
[35] FSB (2017), Artificial intelligence and machine learning in financial services Market developments and financial stability implications, http://www.fsb.org/emailalert (accessed on 1 December 2020).
[24] FT (2013), Norway’s $1.4tn wealth fund calls for state regulation of AI, Financial Times, https://www.ft.com/content/594a4f52-eb98-4da2-beca-4addcf9777c4 (accessed on 26 July 2023).
[1] G7 (2023), G7 Leaders’ Statement on the Hiroshima AI Process, https://www.mofa.go.jp/files/100573466.pdf (accessed on 7 November 2023).
[3] G7 (2023), “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems”, https://www.mofa.go.jp/files/100573473.pdf (accessed on 7 November 2023).
[2] G7 (2023), “Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems”, https://www.mofa.go.jp/files/100573471.pdf (accessed on 7 November 2023).
[36] Gensler, G. and L. Bailey (2020), “Deep Learning and Financial Stability”, SSRN Electronic Journal, https://doi.org/10.2139/ssrn.3723132.
[83] HAI (2023), Stanford CRFM, https://crfm.stanford.edu/ (accessed on 27 July 2023).
[49] His Majesty King Maha Vajiralongkorn Phra Vajiraklaochaoyuhua (2022), Royal Decree on Artificial Intelligence System Service Business.
[81] Ho Chi Minh (2023), Trusting Social brings AI-powered agents to enterprises, backed by Microsoft Cloud and AI technologies, Trusting Social, https://trustingsocial.com/blog/trusting-social-brings-ai-powered-agents-to-enterprises-backed-by-microsoft-cloud-and-ai-technologies (accessed on 8 December 2023).
[11] IDC (2023), “Exploring the Opportunities and Challenges of GenAI: Implications for Asia/Pacific Governments”, IDC Perspective, https://www.idc.com/getdoc.jsp?containerId=AP50548123 (accessed on 10 December 2023).
[70] Info-comm Media Development Authority (2023), “Generative AI: Implications for Trust and Governance”, https://aiverifyfoundation.sg/downloads/Discussion_Paper.pdf.
[47] Info-communications Media Development Authority (2020), Singapore Model Governance AI Framework, https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf.
[85] Ji, Z. et al. (2023), “Survey of Hallucination in Natural Language Generation”, ACM Computing Surveys, Vol. 55/12, https://doi.org/10.1145/3571730.
[15] Kotu, V. and B. Deshpande (2019), “Anomaly Detection”, Data Science, pp. 447-465, https://doi.org/10.1016/B978-0-12-814761-0.00013-7.
[8] Kotu, V. and B. Deshpande (2019), “Deep Learning”, Data Science, pp. 307-342, https://doi.org/10.1016/B978-0-12-814761-0.00010-1.
[20] Kotu, V. and B. Deshpande (2019), “Recommendation Engines”, Data Science, pp. 343-394, https://doi.org/10.1016/B978-0-12-814761-0.00011-3.
[14] KPMG (2023), “Generative AI: From buzz to business value”, https://kpmg.com/kpmg-us/content/dam/kpmg/pdf/2023/generative-ai-survey.pdf (accessed on 28 July 2023).
[74] Leesa-Nguansk, S. (2023), Creating a Thai language AI tool, Bangkok Post, https://www.bangkokpost.com/business/general/2565696/creating-a-thai-language-ai-tool.
[77] MAS (2023), MAS Partners Google Cloud to Advance Capabilities in Generative AI Technology, https://www.mas.gov.sg/news/media-releases/2023/mas-partners-google-cloud-to-advance-capabilities-in-generative-ai-technology#:~:text=Singapore%2C%2031%20May%202023%E2%80%A6,grounded%20on%20responsible%20AI%20practices.
[60] MAS (2022), “Implementation of fairness principles in financial institutions’ use of artificial intelligence / machine learning”, https://www.mas.gov.sg/news/media-releases/2021/veritas-initiative- (accessed on 6 December 2023).
[61] MAS (2022), MAS publishes assessment methodologies for responsible AI use by financial institutions, DataGuidance, https://www.dataguidance.com/news/singapore-mas-publishes-assessment-methodologies (accessed on 6 December 2023).
[59] MAS (2018), “Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector”, Monetary Authority of Singapore, https://www.pdpc.gov.sg/Resources/Discussion-Paper-on-AI-and-Personal-Data (accessed on 6 December 2023).
[62] MindForge Consortium (2023), “Emerging Risks and Opportunities of Generative AI for Banks : A Singapore Perspective”, https://www.mas.gov.sg/-/media/mas/news/media-releases/2023/executive-summary---emerging-risks-and-opportunities-of-generative-ai-for-banks.pdf.
[44] Ministry of Science, Technology and Innovation (2021), Malaysia National Artificial Intelligence Roadmap 2021-2025 (AI-RMAP), https://airmap.my/wp-content/uploads/2022/08/AIR-Map-Playbook-final-s.pdf.
[57] Ministry of Science, Technology and Innovation (2021), National Fourth Industrial Revolution Policy.
[86] Lesher, M., H. Pawelec and A. Desai (2022), “Disentangling untruths online: Creators, spreaders and how to stop them”, OECD Going Digital Toolkit Notes, https://www.oecd-ilibrary.org/science-and-technology/disentangling-untruths-online_84b62df1-en (accessed on 2 August 2023).
[12] Monetary Authority of Singapore (2023), “ASEAN, Alternative Energy, and Artificial Intelligence” - Keynote Speech by Mr Ravi Menon, Managing Director, Monetary Authority of Singapore, at 61st ACI World Congress on 21 September 2023, https://www.mas.gov.sg/news/speeches/2023/asean-alternative-energy-and-artificial-intelligence (accessed on 1 December 2023).
[17] Moritz, B. and T. Zimmermann (2016), “Tree-Based Conditional Portfolio Sorts: The Relation between Past and Future Stock Returns”, SSRN Electronic Journal, https://doi.org/10.2139/SSRN.2740751.
[67] Mungkarndee, R. and D. Nantananate (2023), Thailand’s Draft Laws for the Regulation and Promotion of AI Products and Services, Lexel, https://lexel.co.th/thailands-draft-laws-for-the-regulation-and-promotion-of-ai-products-and-services/ (accessed on 8 December 2023).
[56] MyDIGITAL Malaysia (2021), Malaysian Digital Economy Blueprint, https://www.mida.gov.my/mida-news/blueprint-to-help-malaysia-achieve-digital-economy-aspirations/.
[42] Nasional Kecerdasan Artifisial Indonesia (2020), Strategi Nasional Kecerdasan Artifisial, https://ai-innovation.id/images/gallery/ebook/stranas-ka.pdf.
[68] National Assembly of Vietnam (2022), Insurance Business Law, https://lawnet.vn/en/vb/Law-08-2022-QH15-insurance-business-80016.html#:~:text=The%20National%20Assembly%20herein%20passes%20the%20Law%20on%20Insurance%20Business.&text=Scope-,1.,management%20of%20insurance%20business%20activities.
[30] NATO (2022), Quantum Computing and Artificial Intelligence Expected to Revolutionize ISR - NATO’s ACT, https://www.act.nato.int/article/quantum-computing-and-artificial-intelligence-expected-to-revolutionize-isr/ (accessed on 15 December 2023).
[58] New Straits Times (2023), “Task force to look into law reform in line with Madani economy: Azalina”, https://www.nst.com.my/news/government-public-policy/2023/08/938344/task-force-look-law-reform-line-madani-economy-azalina (accessed on 1 December 2023).
[32] NIST (2023), “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”, https://doi.org/10.6028/NIST.AI.100-1.
[28] NPR (2023), AI was likely behind faked images of an explosion at the Pentagon, https://www.npr.org/2023/05/22/1177590231/fake-viral-images-of-an-explosion-at-the-pentagon-were-probably-created-by-ai (accessed on 19 December 2023).
[71] Nur, A. (2023), “KORIKA, Open AI to develop an AI with Indonesian values | RISK & OPP - Indonesia Business Post”, Indonesia Business Post, https://indonesiabusinesspost.com/risks-opportunities/korika-open-ai-to-develop-an-ai-with-indonesian-values/.
[6] OECD (2023), AI language models: Technological, socio-economic and policy considerations, OECD Publishing, Paris, https://doi.org/10.1787/13d38f92-en.
[7] OECD (2023), Generative artificial intelligence in finance, OECD Publishing, Paris, https://doi.org/10.1787/ac7149cc-en.
[38] OECD (2023), Measuring the environmental impacts of artificial intelligence compute and applications: The AI footprint, OECD Publishing, Paris, https://doi.org/10.1787/7babf571-en.
[37] OECD (2023), OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, OECD Publishing, Paris, https://doi.org/10.1787/08785bba-en.
[39] OECD (2022), Environmental impact of digital assets: Crypto-asset mining and distributed ledger technology consensus mechanisms, OECD Publishing, Paris, https://doi.org/10.1787/8d834684-en.
[10] OECD (2021), Artificial Intelligence, Machine Learning and Big Data in Finance: Opportunities, Challenges and Implications for Policy Makers, https://www.oecd.org/finance/financial-markets/Artificial-intelligence-machine-learning-big-data-in-finance.pdf.
[31] OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, https://doi.org/10.1787/eedfee77-en.
[23] Papenbrock, J., N. GmbH and J. Ashley (2022), “Accelerated Data Science, AI and GeoAI for Sustainable Finance in Central Banking and Supervision 1”, https://www.bis.org/ifc/publ/ifcb56_23.pdf (accessed on 28 July 2023).
[76] Phuong, H. (2023), VinBigdata successfully develops artificial intelligence technology, Vietnam.vn, https://www.vietnam.vn/en/vinbigdata-phat-trien-thanh-cong-cong-nghe-ai-tao-sinh/ (accessed on 1 December 2023).
[52] Prime Minister of Vietnam (2021), National Strategy On R&D and Application of Artificial Intelligence, https://en.nhandan.vn/vietnamese-government-ranked-39th-in-ai-readiness-report-post133310.html#:~:text=On%20January%2026%2C%202021%2C%20Vietnamese,making%20artificial%20intelligence%20a%20crucial.
[75] Quismorio, E. (2023), ChatGPT Pinoy version from DOST right up Baguio solon’s alley, Manila Bulletin, https://mb.com.ph/2023/9/5/chat-gpt-pinoy-version-from-dost-right-up-baguio-solon-s-alley (accessed on 1 December 2023).
[65] Republic of the Philippines House of Representatives (2023), An Act Establishing a Regulatory Framework for a Robust, Reliable, and Trustworthy Development, Application, and Use of Artificial Intelligence (AI) Systems, Creating the Philippine Council on Artificial Intelligence, Delineating the Roles of Various Government Agencies, Defining and Penalizing Certain Prohibited Act, https://hrep-website.s3.ap-southeast-1.amazonaws.com/legisdocs/basic_19/HB07913.pdf (accessed on 12 December 2023).
[40] Reuters (2023), Exclusive: Southeast Asia eyes hands-off AI rules, defying EU ambitions, https://www.reuters.com/technology/southeast-asia-eyes-hands-off-ai-rules-defying-eu-ambitions-2023-10-11/ (accessed on 1 December 2023).
[55] Rochman, F. and R. Adji (2023), “Ministry advises people not to share personal data on social media”, Antara news, https://en.antaranews.com/news/292128/ministry-advises-people-not-to-share-personal-data-on-social-media.
[13] SIA (2023), “2023 Factbook semiconductor industry association”, https://www.semiconductors.org/resources/factbook/ (accessed on 13 December 2023).
[46] Smart Nation Singapore (2019), National Artificial Intelligence Strategy, Smart Nation Digital Government Office, https://www.smartnation.gov.sg/nais/.
[84] Ashenden, S. (2021), The Era of Artificial Intelligence, Machine Learning, and Data Science in the Pharmaceutical Industry, https://doi.org/10.1016/C2019-0-01262-9.
[66] Suchit, L. (2023), Key sectors keen on generative AI, Bangkok Post, https://www.bangkokpost.com/business/general/2694518/key-sectors-keen-on-generative-ai.
[80] Sullivan, B. (2023), Microsoft’s Plan to Establish Thailand as an AI Hub, Thailand Business News, https://www.thailand-business-news.com/companies/113558-microsofts-plan-to-establish-thailand-as-an-ai-hub (accessed on 8 December 2023).
[78] Tanner, J. (2023), Telkomsel and Microsoft to collaborate on generative AI, Developing Telecoms, https://www.telkomsel.com/en/about-us/news/telkomsel-expands-ai-collaboration-microsoft-enhance-customers-digital-lifestyle#:~:text=Through%20this%20collaboration%2C%20Telkomsel%20and,utilizing%20AI%20to%20detect%20and.
[4] The Business Times (2023), “Thailand-listed buyer gets green light to issue shares for purchase of Asti shares from ex-CEO”, The Business Times, https://www.businesstimes.com.sg/companies-markets/thailand-listed-buyer-gets-green-light-issue-shares-purchase-asti-shares-ex-ceo (accessed on 7 December 2023).
[51] The Department of Trade and Industry (2021), National AI Strategy Roadmap, https://innovate.dti.gov.ph/wp-content/uploads/2021/05/National-AI-Strategy-Roadmap-May-2021.pdf.
[50] The Electronic Transactions Development Agency (2023), The Draft Act on the Promotion and Support of AI Innovations in Thailand, https://www.dataguidance.com/opinion/thailand-update-proposals-ai-regulations.
[29] The Institute of World Politics (2019), “How Artificial Intelligence and Quantum Computing are Evolving Cyber Warfare - The Institute of World Politics”, Cyber Intelligence Initiative, https://www.iwp.edu/cyber-intelligence-initiative/2019/03/27/how-artificial-intelligence-and-quantum-computing-are-evolving-cyber-warfare/ (accessed on 15 December 2023).
[54] The Ministry of Industry, Science, Technology & Innovation (2023), AI Landscape in Cambodia: Current Status and Future Trends, https://www.researchgate.net/publication/376720233_AI_Landscape_in_Cambodia_Current_Status_and_Future_Trends.
[69] The Ministry of Industry, Science, Technology & Innovation (2023), Cambodia Digital Tech Roadmap, https://misti.gov.kh/public/file/202307291690603726.pdf.
[53] The Ministry of Information and Communication (2023), Draft National Standard on Artificial Intelligence and Big Data, https://www.dataguidance.com/news/vietnam-mic-requests-comments-draft-ai-and-big-data.
[25] UN (2012), Ethnic minority development in China and ASEAN countries, https://www.undp.org/sites/g/files/zskgke326/files/migration/cn/UNDP-CH-HD-Publications-Ethnic-Minority-Development-in-China-and-ASEAN-countries.pdf (accessed on 5 December 2023).
[79] Viet Nam News (2023), UOB pioneers trial of Microsoft 365 Copilot Generative AI tool across multiple business functions to enhance productivity and collaboration, https://vietnamnews.vn/media-outreach/1594674/uob-pioneers-trial-of-microsoft-365-copilot-generative-ai-tool-across-multiple-business-functions-to-enhance-productivity-and-collaboration.html.
[19] Weizenbaum, J. (1966), “ELIZA a computer program for the study of natural language communication between man and machine”, Communications of the ACM, Vol. 9/1, pp. 36-45, https://doi.org/10.1145/365153.365168.
[5] World Population Review (2023), “Semiconductor Manufacturing by Country 2023”, World Population Review, https://worldpopulationreview.com/country-rankings/semiconductor-manufacturing-by-country (accessed on 7 December 2023).
[82] Yang, D. et al. (2020), “Segmentation using adversarial image-to-image networks”, pp. 165-182, https://doi.org/10.1016/B978-0-12-816176-0.00012-0.
[72] Yeoh, A. and C. Fam (2023), “Mosti to consider integrating ChatGPT into government services”, The Star, https://www.thestar.com.my/tech/tech-news/2023/08/10/mosti-to-consider-integrating-chatgpt-into-government-services (accessed on 1 December 2023).
Notes
← 1. The analysis uses the BERT deep learning architecture for NLP, which harnesses self-attention to weigh the surrounding context when computing word embeddings. For more detail on the methodology used, see the Annex; an illustrative code sketch also follows these notes.
← 2. The analysis covered 44 222 articles from the Japanese financial press and 436 509 articles from the Korean financial press. The difference in the number of articles examined relates to the availability of non-subscription-based financial press and the related access limitations.
← 3. This also reflects the smaller number of articles scraped from the Japanese financial press, given accessibility constraints.
← 4. Generative Adversarial Networks (GANs) are a category of Machine Learning (ML) frameworks that use deep neural networks to generate, after training, content that aims to preserve the likeness of the original data (Yang et al., 2020[82]); a toy training sketch follows these notes.
← 5. Deep neural network architectures are inspired by the structure and functioning of the brain and are designed for unsupervised Machine Learning in fields such as computer vision, NLP and recommendation engines.
← 6. Foundation models (e.g. large language models (LLMs) such as ChatGPT) are models trained in an unsupervised way on vast amounts of unstructured data that can be adapted to many applications or use cases. The term was coined by the Center for Research on Foundation Models (CRFM) of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) (HAI, 2023[83]). A foundation model can be used in a virtually unlimited number of downstream AI systems.
← 7. Still, where cloud services are used, the possibility that signals are captured by the foundation model cannot be dismissed at this stage.
← 8. Natural language processing (NLP) is an interdisciplinary AI domain that aims to understand natural languages and to use them to enable human–computer interaction. It differs from text mining in that it takes the surrounding information into consideration and is concerned with processing the interactions between source data, computers and human beings (Ashenden, 2021[84]).
← 9. Interpretability refers to the meaning of a model’s output in the context of its designed functional purpose, while explainability refers to a representation of the mechanisms underlying the AI system’s operation (NIST, 2023[32]).
← 10. iFLYTEK is an AI company based in China.
← 11. Attacks attributed to the North Korean Lazarus Group include the theft of USD 1.7 billion in crypto-assets, the 2016 attack on Bangladesh Bank’s FX reserves, and the 2018 attack on a Chilean bank.
← 12. Artificial hallucination refers to the phenomenon of a machine, such as a chatbot, generating seemingly realistic sensory experiences that do not correspond to any real-world input. This can include visual, auditory, or other types of hallucinations. Artificial hallucination is not common in chatbots, as they are typically designed to respond based on pre-programmed rules and data sets rather than generating new information. However, there have been instances where advanced AI systems, such as generative models, have been found to produce hallucinations, particularly when trained on large amounts of unsupervised data (Ji et al., 2023[85]).
← 13. Disinformation in AI LLMs is defined as the deliberate fabrication of untrue content designed to deceive (e.g. writing untrue texts and articles), while misinformation involves false or misleading information that is not intended to harm (e.g. creating falsehoods for entertainment); both can damage public trust in democratic institutions (OECD, 2023[6]; Lesher, Pawelec and Desai, 2022[86]).
← 14. For foundation models, this refers to the impact of the model’s output on subsequent actions.
← 15. Algorithmically driven high frequency trading strategies appear to have contributed to extreme market volatility, reduced liquidity and exacerbated flash crashes that have occurred with growing frequency over the past several years (OECD, 2021[10]). Spoofing and other illegal market manipulation strategies, as well as collusion of ML models are additional risks of AI use in high frequency trading (OECD, 2021[10]).
← 16. With the exception of Lao PDR, which features policies related to the digital economy without making explicit reference to AI (the National Digital Economic Development Strategy for 2021-2030 and the National Digital Economic Development Plan for 2021-2025).
← 17. Malaysia also features Gen AI in its national digital economy policies.
← 18. Design, development and deployment of the model.
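To make the methodology sketched in note 1 more concrete, the following Python snippet shows, in a minimal way, how a pre-trained BERT model can produce context-aware word embeddings via self-attention. It uses the open-source Hugging Face transformers library; the checkpoint name, the example headline and the mean-pooling step are illustrative assumptions and do not reflect the exact configuration of the analysis (see the Annex for the actual methodology).

```python
# Minimal sketch: contextual word embeddings with BERT via Hugging Face transformers.
# The checkpoint and example text are illustrative assumptions, not the exact
# configuration used in the analysis described in note 1.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-multilingual-cased"  # assumption: any BERT checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

headline = "Generative AI investment accelerates across Asian financial markets"
inputs = tokenizer(headline, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Self-attention shapes each token's vector using the whole sentence, so the
# same word receives different embeddings in different contexts.
token_embeddings = outputs.last_hidden_state       # shape: (1, n_tokens, hidden_size)
sentence_embedding = token_embeddings.mean(dim=1)  # simple mean pooling over tokens

print(token_embeddings.shape, sentence_embedding.shape)
```

Because each token’s embedding is conditioned on the full sentence, a word such as ‘AI’ receives different vectors in different headlines, which is what enables topic and sentiment classification over financial press articles.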
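Similarly, the adversarial setup described in note 4 can be illustrated with a toy PyTorch training loop: a generator maps random noise to synthetic samples, a discriminator learns to distinguish them from real data, and the two are optimised against each other. All shapes, data and hyperparameters below are arbitrary assumptions chosen for brevity, not a reference implementation.

```python
# Toy GAN training loop in PyTorch, illustrating the adversarial setup in note 4.
# All shapes, data and hyperparameters are arbitrary, for illustration only.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM, BATCH = 16, 8, 64

generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(BATCH, DATA_DIM)   # stand-in for real training data
    noise = torch.randn(BATCH, NOISE_DIM)
    fake = generator(noise)

    # Discriminator update: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(BATCH, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

After training, sampling fresh noise and passing it through the generator yields synthetic data intended to preserve the likeness of the training distribution, which is the property note 4 describes.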