Artificial Intelligence in Society
3. AI applications
Abstract
This chapter illustrates opportunities in several sectors where artificial intelligence (AI) technologies are seeing rapid uptake, including transport, agriculture, finance, marketing and advertising, science, healthcare, criminal justice, security and the public sector, as well as in augmented and virtual reality applications. In these sectors, AI systems can detect patterns in enormous volumes of data and model complex, interdependent systems to generate outcomes that improve the efficiency of decision making, save costs and enable better resource allocation. The section on AI in transportation was developed by the Massachusetts Institute of Technology’s Internet Policy Research Institute. Several sections build on work being undertaken across the OECD, including by the Committee on Digital Economy Policy and its Working Party on Privacy and Security, the Committee for Scientific and Technological Policy, the E-Leaders initiative of the Public Governance Committee, and the Committee on Consumer Policy and its Working Party on Consumer Product Safety.
AI in transportation with autonomous vehicles
Artificial intelligence (AI) systems are emerging across the economy. However, one of the most transformational shifts is occurring in transportation, with the transition to self-driving, or autonomous, vehicles (AVs).
Economic and social impact of AVs
Transportation is one of the largest sectors in economies across the OECD. In 2016, it accounted for 5.6% of gross domestic product across the OECD (OECD, 2018[1]).1 The potential economic impact of introducing AVs into the economy could be significant due to savings from fewer crashes, less congestion and other benefits. It is estimated that a 10% adoption rate of AVs in the United States would save 1 100 lives and USD 38 billion per year. A 90% adoption rate could save 21 700 lives and reduce annual costs by USD 447 billion (Fagnant and Kockelman, 2015[2]).
More recent research has found significant cost differences per kilometre for different transportation modes with and without vehicle automation in Switzerland (Bösch et al., 2018[3]). Their findings suggest that taxis will enjoy the largest cost savings. Individuals with private cars will receive smaller cost savings (Figure 3.1). Not surprisingly, the savings for taxis are largely due to elimination of driver wages.
Market evolution
The state of transportation is in flux due to three significant and recent market shifts: the development of AV systems, the adoption of ride-sharing services and the shift to electric-powered vehicles. Traditional automobile manufacturers struggle to define their strategies in the face of two trends. First, ride-sharing services are increasingly viable transportation options for users, particularly younger generations. Second, there are questions about the long-term viability of traditional car ownership. High-end manufacturers are already experimenting with new business models such as subscription services. Examples include “Access by BMW”, “Mercedes Collection” and “Porsche Passport”, where users pay a flat monthly fee and exchange cars when they like.
Technology companies, from large multinationals to small start-ups, are moving into AV systems, ride-sharing services or electric vehicles – or some combination of the three. Morgan Stanley recently estimated Alphabet’s Waymo division to be worth up to USD 175 billion on its own, based largely on its potential for autonomous trucking and delivery services (Ohnsman, 2018[4]). Zoox, a recent start-up focused on AI systems for driving in dense urban environments, has raised USD 790 million. This gives it a valuation of USD 3.2 billion2 before producing any revenues (see also Section “Private equity investments in AI start-ups” in Chapter 2). These actions by technology companies complement the investment of traditional automakers and parts suppliers in AI-related technologies for vehicles.
Given the complexity of AV systems, companies tend to focus on their specific areas of expertise and then partner with firms specialising in others. Waymo is one of the leading firms in AV given its specialisation in massive data sets and ML. However, it does not build its own cars, choosing instead to rely on partners such as General Motors (GM) and Jaguar (Higgins and Dawson, 2018[6]).
Large auto manufacturers have also partnered with smaller start-ups to gain access to cutting-edge technology. For example, in October 2018 Honda announced a USD 2.75 billion investment in GM’s Cruise self-driving venture (Carey and Lienert, 2018[7]). Ride-sharing firms such as Uber have also invested significantly in AVs and set up partnerships with leading technical universities (CMU, 2015[8]). This, however, has introduced questions of liability in the case of accidents, particularly when multiple stakeholders are in charge of multiple parts.
The diversity of market players investing in AV capabilities can be seen in the number of patent filings related to AVs by different groups of firms (Figure 3.2). Large automakers have considerable investments in intellectual property (IP); they are closely followed by auto suppliers and technology companies.
Technology evolution
At a basic level, AVs carry new systems of sensors and processing capacity that introduce new complexities into the extract, transform and load processes of their data systems. Innovation is flourishing amid high levels of investment in all key areas for AVs. Less expensive light detection and ranging (LIDAR) systems, for example, can map out the environment. In addition, new computer vision technologies can track the eyes and focus of drivers and determine when they are distracted. After pulling in data and processing it, AI now adds another step: split-second operational decisions.
The core standard for measuring the progress of AV development is a six-stage standard developed by the Society of Automotive Engineers (SAE) (ORAD, 2016[9]). The levels can be summarised as follows:
Level 0 (no driving automation): A human driver controls everything. There is no automated steering, acceleration, braking, etc.
Level 1 (driver assistance): There is a basic level of automation, but the driver remains in control of most functions. The SAE says lateral (steering) or longitudinal control (e.g. acceleration) can be done autonomously, but not simultaneously, at this level.
Level 2 (partial driving automation): Both lateral and longitudinal motion is controlled autonomously, for example with adaptive cruise control and functionality that keeps the car in its lane.
Level 3 (conditional driving automation): A car can drive on its own, but needs to be able to tell the human driver when to take over. The driver is considered the fallback for the system and must stay alert and ready.
Level 4 (high driving automation): The car can drive itself and does not rely on a human to take over in case of a problem. However, the system is not yet capable of autonomous driving in all circumstances (depending on situation, geographic area, etc.).
Level 5 (full driving automation): The car can drive itself without any expectation of human intervention, and can be used in all driving situations.

There is significant debate among stakeholders about how far the process has come towards fully autonomous driving. Stakeholders also disagree about the right approach for introducing autonomous functionality into vehicles.
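The SAE taxonomy lends itself to a simple code representation. The following is a minimal sketch, with level names abbreviated from the definitions summarised above; the helper function reflects the key operational distinction between levels 0-3 and levels 4-5:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, as summarised above."""
    NO_AUTOMATION = 0           # human driver controls everything
    DRIVER_ASSISTANCE = 1       # steering OR speed automated, not both
    PARTIAL_AUTOMATION = 2      # steering AND speed automated together
    CONDITIONAL_AUTOMATION = 3  # self-driving, human is the fallback
    HIGH_AUTOMATION = 4         # no human fallback, limited conditions
    FULL_AUTOMATION = 5         # no human fallback, all conditions

def requires_human_fallback(level: SAELevel) -> bool:
    """Levels 0-3 still rely on an alert human driver as fallback."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

The fallback distinction is what separates, for example, a Tesla-style level 3 system (driver must stay ready) from the level 4 services targeted for 2020-21.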
Two key discussions focus on the role of the driver and the availability of the technology:
a) The role of the driver
Eliminating the need for a human driver: Some firms developing AVs such as Waymo and Tesla believe it will soon be possible to eliminate the need for a human driver (owner or safety monitor). Tesla sells cars with level 3 autonomy. Waymo had plans to launch a fully autonomous taxi service with no driver in Arizona by the end of 2018 (Lee, 2018[10]).
Supporting the driver: Other system developers believe the best use of AV systems for the near term will be to avoid accidents rather than to replace drivers. Toyota, the world’s most valuable automaker by market capitalisation, is emphasising development of a vehicle that is incapable of causing a crash (Lippert et al., 2018[5]).
b) The scope of availability: There are two emerging approaches for initiating the deployment of automation in vehicles as described by Walker-Smith (2013[11]) and ITF (2018[12]).
Everything somewhere: In this approach, very-high-level functionality is possible only in certain geographic areas or on certain roads that have been mapped in detail. Cadillac’s Super Cruise, for example, is only available in certain places (e.g. it will only work on divided highways that have been mapped).
Something everywhere: In this approach, functionality is only introduced to an AV system when it can be deployed on any road and in any situation. The result is a limited set of functions that should work in all locations. This appears to be the preferred approach of many automobile manufacturers.
Among more optimistic firms, the years 2020 and 2021 seem to be key targets for delivering AVs with level 4 functionality. Tesla and Zoox, for example, have set 2020 as a target, while Audi/Volkswagen, Baidu and Ford have targeted 2021. Renault Nissan has targeted 2022. Other manufacturers are also investing heavily in the technology. However, they are focused on preventing accidents by human drivers or believe the technology is not sufficiently developed for level 4 driving in the near term. These include BMW, Toyota, Volvo and Hyundai (Welsch and Behrmann, 2018[13]).
Policy issues
The rollout of AVs raises a number of important legal and regulatory issues (Inners and Kun, 2017[14]). They specifically concern security and privacy (Bose et al., 2016[15]), but also touch more broadly on the economy and society (Surakitbanharn et al., 2018[16]). Some of the more important areas of policy concern for OECD countries can be grouped as follows:
Safety and regulation
In addition to ensuring safety (Subsection “Robustness, security and safety” in Chapter 4), policy issues include liability, equipment regulations for controls and signals, driver regulation and the consideration of traffic laws and operating rules (Inners and Kun, 2017[14]).
Data
As with any AI system, access to data to train and adjust systems will be critical for the success of AVs. AV manufacturers have gathered immense data over the course of their trials. Fridman (8 October 2018[17]) estimates that Tesla has data for over 2.4 billion kilometres driven by its Autopilot. The real-time driving data that AV developers collect are proprietary and not shared across firms. However, initiatives such as the one by the Massachusetts Institute of Technology (MIT) (Fridman et al., 2018[18]) are building accessible data sets to understand driver behaviour. Their accessibility makes them particularly important for researchers and AV developers looking to improve systems. Policy discussions could include access to data collected by various systems and the government’s role in funding open data collections.
Security and privacy
AV systems require large amounts of data about the system, driver behaviour and their environment to function reliably and safely. These systems will also connect to various networks to relay information. The data collected, accessed and used by AV systems will need to be sufficiently secured against unwanted access. Such data can also include sensitive information such as location and user behaviour that will need to be managed and protected (Bose et al., 2016[15]). The International Transport Forum calls for comprehensive cybersecurity frameworks for automated driving (ITF, 2018[12]). New cryptographic protocols and systems also offer the promise of protecting privacy and securing the data. Yet these systems may slow down processing time for mission-critical and safety-critical tasks. In addition, they are in their early stages and not yet available at scales and speeds required by real-time AV deployments.
Workforce disruption
The shift to AVs could have a significant effect on freight, taxi, delivery and other service jobs. In the United States, for example, an estimated 2.86% of workers have driving occupations (Surakitbanharn et al., 2018[16]). Bösch et al. (2018[3]) highlight potentially significant cost savings in these industries from a shift to autonomous systems. Therefore, a rapid transition to AVs in the industry from a profit-maximising perspective might be expected when the technology is sufficiently advanced. Non-technical barriers such as regulation, however, would need to be overcome. This technological shift will displace workers, highlighting the need for policy work focused on skills and jobs in the context of a transitioning work environment (OECD, 2014[19]).
Infrastructure
The introduction of AVs may require changes to infrastructure in keeping with the move to a mixed driving environment with a combination of human drivers and AVs. AVs may have the necessary equipment to communicate with each other in the future. However, legacy automobiles with human drivers would remain a significant source of uncertainty. AVs would need to adjust their behaviour in response to human-controlled vehicles. The possibility of dedicated AV lanes or infrastructure that could separate human drivers from AVs in the future is being discussed (Surakitbanharn et al., 2018[16]). Infrastructure policy will need to integrate AV awareness into the planning process as the technology advances and AVs roll out.
AI in agriculture
The improving accuracy of cognitive computing technologies such as image recognition is changing agriculture. Traditionally, agriculture has relied on the eyes and hands of experienced farmers to identify the right crops to pick. “Harvesting” robots equipped with AI technologies and data from cameras and sensors can now make this decision in real time. Such robots can increasingly perform tasks that previously required human labour and knowledge.
Technology start-ups are creating innovative solutions leveraging AI in agriculture (FAO, 2017[20]). They can be categorised as follows (Table 3.1):
Agricultural robots handle essential agricultural tasks such as harvesting crops. Compared to human workers, these robots are increasingly fast and productive.
Crop and soil monitoring leverages computer vision and deep-learning algorithms to monitor crop and soil health. Monitoring has improved due to greater availability of satellite data (Figure 3.3).
Predictive analytics use ML models to track and predict the impact of environmental factors on crop yield.
Table 3.1. A selection of AI start-ups in agriculture

| Category | Company | Description |
| --- | --- | --- |
| Agricultural robots | Abundant Robotics | Developed an apple-vacuum robot that uses computer vision to detect and pick apples with the same accuracy and care as a human. The company claims that the work of one robot is equivalent to that of ten people. |
| Agricultural robots | Blue River Technology | Developed a robot known as See & Spray to monitor plants and soils and spray herbicide on weeds in lettuce and cotton fields. Precision spraying can help prevent herbicide resistance and decrease the volume of chemicals used by 80%. John Deere acquired the company in September 2017 for USD 305 million. |
| Agricultural robots | Harvest CROO Robotics | Developed a robot to help pick and pack strawberries. It can harvest 3.2 hectares a day and replace 30 human workers, helping to address labour shortages in key farming regions and prevent associated revenue losses. |
| Crop and soil monitoring | PEAT | Developed a deep-learning application to identify potential soil defects and nutrient deficiencies. It diagnoses plant health based on images taken by farmers. |
| Crop and soil monitoring | Resson | Developed image recognition algorithms that can accurately detect and classify plant pests and diseases. Resson has partnered with McCain Foods to help minimise losses in the potato production supply chain. |
| Crop and soil monitoring | SkySquirrel Technologies | Developed a system to analyse vineyard health based on drone imagery. Users upload drone-captured images to the company’s cloud system, which diagnoses the condition of grapevine leaves. The company claims its technology can scan 20 hectares in 24 minutes and provide data analysis with 95% accuracy. |
| Predictive analytics | aWhere | Developed ML algorithms based on satellite data to predict weather conditions and provide customised advice to farmers, crop consultants and researchers. It also provides users with access to over a billion points of agronomic data daily. |
| Predictive analytics | FarmShots | Developed a system to analyse agricultural data derived from satellite and drone images. The system can detect diseases, pests and poor plant nutrition on farms and inform users precisely where their fields need fertiliser, reducing the amount used by nearly 40%. |

Source: Companies’ descriptions from their respective websites.
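At their simplest, predictive-analytics tools of the kind listed in the last category rest on regression models linking environmental factors to expected yield. The following is a minimal, self-contained sketch using ordinary least squares on a single variable with hypothetical data; it is not any listed vendor's actual model:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical observations: growing-season rainfall (mm) vs yield (t/ha)
rain = [300, 400, 500, 600, 700]
tonnes_per_ha = [2.0, 2.6, 3.1, 3.4, 3.9]

a, b = fit_line(rain, tonnes_per_ha)
predicted = a + b * 550  # forecast yield for 550 mm of rainfall
```

Real systems add many more variables (temperature, soil measurements, satellite indices) and use non-linear ML models, but the prediction step has this shape.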
Challenges to AI adoption in agriculture
The Food and Agriculture Organization of the United Nations (FAO) predicts the global population will grow by close to 30% between now and 2050 – from 7 billion to 9 billion people. However, only an additional 4% of land will be cultivated (FAO, 2009[23]). The OECD has investigated new opportunities and challenges of digital transformation in agriculture and the food sector (Jouanjean, 2019[24]). Among digital technologies, AI applications hold particular promise to increase agriculture productivity. However, the following challenges remain for wide adoption (Rakestraw, 2017[25]):
Lack of infrastructure: Network connections remain poor in many rural areas. Also, data warehousing systems would be required to build robust applications.
Production of quality data: AI applications in agriculture require high-quality data for recognition of crops or leaves. Collecting these data can be expensive because they can be captured only during the annual growing season.
Different mindset between tech start-ups and farmers: Technology start-ups usually develop and launch products and services quickly, but farmers tend to adopt new processes and technologies more incrementally. Even big agricultural companies conduct lengthy field trials to ensure consistent performance and clear benefit of technology adoption.
Cost, notably for transactions: High-tech farming (e.g. agricultural robots) requires large investments in sensors and automation tools. France, for example, is designing policies to encourage investment in specific AI agricultural applications, which could facilitate the adoption of new technologies even by small-scale farmers (OECD, 2017[26]).
Potential ways to encourage adoption of AI in agriculture
Solutions are being developed to address the various challenges to AI in agriculture. As in other areas of application, open-source software is being developed and could help address cost issues. For example, Connecterra has developed a motion-sensing device that attaches to a cow’s neck and monitors its health based on Google’s TensorFlow open-source software suite (Webb, 2017[27]). Transfer learning (see Subsection “Access and use of data” in Chapter 4) is helping address data issues by training algorithms with much smaller data sets. For example, researchers developed a system to detect diseases in the cassava plant that leverages learning from a different type of plant. With input of only 2 756 images of cassava leaves from plants in Tanzania, the researchers’ system correctly identified brown leaf spot disease on cassava plants with 98% accuracy (Simon, 2017[28]).
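The transfer-learning pattern can be sketched in miniature: a "pretrained" feature extractor is kept frozen, and only a tiny final classifier is fit on the small new data set. Everything below (the single feature, the leaf data, the threshold rule) is a hypothetical toy illustrating the pattern, not the cassava system itself:

```python
# Toy transfer learning: the feature extractor is "pretrained" and frozen;
# only the final threshold is fit on a handful of new labelled samples.

def pretrained_features(image):
    """Stand-in for a frozen network: maps an image (a list of pixel
    intensities in 0-1) to a single 'dark spot density' feature."""
    return sum(1 for p in image if p < 0.3) / len(image)

def fit_threshold(samples, labels):
    """Fit the only trainable parameter: a cutoff on the frozen feature,
    placed midway between the two classes."""
    feats = [pretrained_features(s) for s in samples]
    healthy = max(f for f, l in zip(feats, labels) if l == 0)
    diseased = min(f for f, l in zip(feats, labels) if l == 1)
    return (healthy + diseased) / 2

# Tiny labelled set (0 = healthy leaf, 1 = leaf spot disease), hypothetical
leaves = [[0.9, 0.8, 0.7, 0.9], [0.8, 0.9, 0.9, 0.8],
          [0.1, 0.2, 0.9, 0.1], [0.2, 0.1, 0.1, 0.8]]
labels = [0, 0, 1, 1]

cutoff = fit_threshold(leaves, labels)

def predict(image):
    return int(pretrained_features(image) > cutoff)
```

Because the extractor is reused rather than learned, only one parameter needs fitting, which is why a few thousand images can suffice where training from scratch would need far more.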
AI in financial services
In the financial sector, large companies such as JPMorgan, Citibank, State Farm and Liberty Mutual are rapidly deploying AI. The same is true for start-ups such as Zest Finance, Insurify, WeCash, CreditVidya and Aire. Financial service companies are combining different ML practices. For example, French start-up QuantCube Technology analyses several billion data points collected from over 40 countries. It uses language processing, deep learning, graph theory and more to develop AI solutions for decision making in financial corporations.
Deploying AI in the financial sector has many significant benefits. These include improving customer experience, identifying rapidly smart investment opportunities and possibly granting customers more credit with better conditions. However, it raises policy questions related to ensuring accuracy and preventing discrimination, as well as the broader impact of automation on jobs.
This section provides an overview of AI applications in the financial sector. It covers credit scoring, financial technology (FinTech), algorithmic trading, cost reduction in financial services, customer experience and compliance.
Credit scoring
The financial services industry has long used statistical approaches for different ends, including calculating down-payment amounts and estimating risk of default. Credit scoring is a statistical analysis performed by financial institutions to assess a person’s credit-worthiness. In other words, it assesses the likelihood that a borrower will default on her/his debt obligations. In traditional credit-scoring models, analysts make hypotheses regarding the attributes affecting a credit score and create customer segments.
More recently, neural network techniques have enabled the analysis of vast quantities of data collected from credit reports. They can conduct fine-grained analysis of the most relevant factors and of their relationships. In AI systems, algorithms leveraging neural networks automatically determine customer segments and their weights from large datasets. Credit bureaus in the United States report that deep-learning techniques that analyse data in new ways can improve the accuracy of predictions by up to 15% (Press, 2017[29]).
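The segment-and-weight idea can be sketched with a minimal logistic scoring function. The feature names and weights below are illustrative assumptions for the sketch, not any bureau's actual model; in practice the weights (and far richer non-linear structure) are learned from large volumes of credit-report data:

```python
import math

# Illustrative, hand-set weights; a real model learns these from data.
WEIGHTS = {"payment_delinquencies": -0.8,
           "credit_utilisation":    -1.5,   # fraction of limit in use
           "years_of_history":       0.2}
BIAS = 1.0

def default_probability(applicant):
    """Logistic model: higher score z means a safer applicant,
    so P(default) = 1 / (1 + e^z) falls as z rises."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(z))

low_risk = default_probability({"payment_delinquencies": 0,
                                "credit_utilisation": 0.2,
                                "years_of_history": 10})  # roughly 0.06
```

A threshold on this probability then separates approved from declined applications, which is also where the explainability questions discussed below arise.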
As in other sectors, the difficulty to explain results from credit-scoring algorithms based on ML is an issue. Legal standards in several countries require high levels of transparency in the financial services sector. For example, in the United States, the Fair Credit Reporting Act (1970) and the Equal Credit Opportunity Act (1974) imply that the process and the output of any algorithm have to be explainable. Companies seem to be acting. For example, Equifax, a credit reporting agency, and SAS, a data analysis company, have created an interpretable credit-scoring tool based on deep learning.
Financial technology lending
FinTech businesses have grown rapidly in recent years. FinTech lending platforms allow consumers to shop for, apply for and obtain loans online within seconds. They provide lenders with traditional credit report data (including payment history, amounts owed, length of history, number of accounts and more). In addition, FinTech lenders leverage a variety of alternative data sources. These include insurance claims, social media activities, online shopping information from marketplaces such as Amazon, shipping data from postal services, browsing patterns, and the type of telephone or browser used (Jagtiani and Lemieux, 2019[30]). Research shows that alternative data processed by FinTech companies using AI can facilitate access to credit for those with no traditional credit history. They can also lower the costs associated with lending both for consumers and lenders (FSB, 2017[31]).
Research has compared the performance of algorithms to predict the probability of default based on the traditional FICO3 score used in the United States and alternative data (Berg et al., 2018[32]). The FICO score alone had an accuracy rate of 68.3%, while an algorithm based on alternative data had an accuracy rate of 69.6%. Using both types of data together, the accuracy rate rose to 73.6%. These results suggest that alternative data complements, rather than substitutes for, credit bureau information. Thus, a lender using information from both traditional (FICO) sources, as well as alternative data, can make better lending decisions.
In the People’s Republic of China (hereafter “China”), Ant Financial has emphasised how AI has driven its loan success (Zeng, 2018[33]). It uses algorithms to process the huge amount of transaction data generated by small businesses on its platform. This has allowed Ant to lend more than USD 13.4 billion to nearly 3 million small businesses. Ant’s algorithms analyse transaction data automatically on all borrowers and on all their behavioural data in real time. It can process loans as small as several hundred Yuan renminbi (around USD 50) in a few minutes. Every action taken on Alibaba’s platform – transaction, communication between seller and buyer, or connection with other services – affects a business’s credit score. At the same time, the algorithms that calculate the scores themselves evolve over time, improving the quality of decision making with each iteration. The micro-lending operation has a default rate of about 1%, compared to the World Bank’s 2016 estimate of an average of 4% worldwide.
Credit-scoring company Alipay uses consumer data points to determine credit scores (O’Dwyer, 2018[34]). These include purchase history, type of phone used, games played and friends on social media. In addition to traditional credit scoring to grant loans, the Chinese social credit score can influence decisions like the deposit level on an apartment rental or online dating matches. A person playing video games for hours every day might, for example, obtain a lower social credit score than a person purchasing diapers who is assumed to be a responsible parent (Rollet, 2018[35]). A broader Chinese social credit system to score the “trustworthiness” of individuals, businesses and government officials is planned to be in place by 2020.
Alternative data could expand access to credit. However, some caution that their use may raise concerns about disparate impact, privacy, security and “explainability” (Gordon and Stewart, 2017[36]). As a result, the Consumer Financial Protection Bureau in the United States investigated the use of alternative data in credit scoring (CFPB, 2017[37]).
Deploying AI for cost reduction in financial services
The use of AI benefits both customers and financial institutions within the front office (e.g. client interaction), middle office (e.g. support for the front office) and back office (e.g. settlements, human resources, compliance). The deployment of AI in the front, middle and back offices is expected to save financial entities in the United States an estimated USD 1 trillion by 2030, affecting 2.5 million financial services employees (Sokolin and Low, 2018[38]). Increasingly advanced AI tools are decreasing the need for human intervention.
In the front office, financial data and account actions are being integrated with AI-powered software agents. These agents can converse with clients within platforms such as Facebook Messenger or Slack that use advanced language processing. In addition to improving traditional customer service with AI, many financial companies are using AI to power “robot advisors”. In this approach, algorithms provide automated financial advice and offerings (OECD, 2017[39]).
Another interesting development is the use of sentiment analysis on financial social media platforms. Companies such as Seeking Alpha and StockTwits focus on the stock market, enabling users to connect with each other and consult with professionals to grow their investment. The data produced on these platforms can be integrated in decision-making processes (Sohangir et al., 2018[40]). AI also helps enable online and mobile banking by authenticating users via fingerprint or facial recognition captured by smart phones. Alternatively, banks use voice recognition as a password to customer service rather than numerical passcodes (Sokolin and Low, 2018[38]).
In the middle office, AI can facilitate risk management and regulatory oversight processes. In addition, AI is helping portfolio managers to invest more efficiently and accurately. In back office product design, AI is broadening data sources to assess credit risk, take insurance underwriting risk and assess claims damage (e.g. assessing a broken windshield using machine vision).
Legal compliance
The financial sector is well known for the high cost of complying with standards and regulatory reporting requirements. New regulation over the past decade in the United States and European Union has further heightened the cost of regulatory compliance for banks. In recent years, banks spent an estimated USD 70 billion annually on regulatory compliance and governance software. This spending reflects the cost of having bank attorneys, paralegals and other officers verify transaction compliance. Costs for these activities were expected to grow to nearly USD 120 billion in 2020 (Chintamaneni, 26 June 2017[41]). Deploying AI technologies, particularly language processing, is expected to decrease banks’ compliance costs by approximately 30%. It will significantly decrease the time needed to verify each transaction. AI can help interpret regulatory documents and codify compliance rules. For example, the Coin program created by JPMorgan Chase reviews documents based on business rules and data validation. In seconds, the program can examine documents that would take a human being 360 000 hours of work to review (Song, 2017[42]).
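The document-review pattern behind tools of this kind can be sketched as a small rule engine that checks fields extracted from a document against codified compliance rules. The rules, field names and thresholds below are hypothetical illustrations, not JPMorgan's actual system:

```python
# Each rule pairs a description with a predicate applied to a parsed
# loan-agreement record. Rules and thresholds are hypothetical.
RULES = [
    ("counterparty named", lambda d: bool(d.get("counterparty"))),
    ("amount is positive", lambda d: d.get("amount", 0) > 0),
    ("rate within policy", lambda d: 0 <= d.get("rate", -1) <= 0.25),
]

def review(document):
    """Return the descriptions of all rules the document violates."""
    return [name for name, check in RULES if not check(document)]

doc = {"counterparty": "Acme Ltd", "amount": 100_000, "rate": 0.31}
violations = review(doc)  # -> ["rate within policy"]
```

In production, the hard part is the language-processing step that extracts such fields from unstructured legal text; once extracted, checking them is fast, which is where the claimed time savings come from.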
Fraud detection
Fraud detection is another major application of AI by financial companies. Banks have always monitored account activity patterns. Advances in ML, however, are starting to enable near real-time monitoring, allowing anomalies to be identified immediately and flagged for review. The ability of AI to continuously analyse new behaviour patterns and to automatically self-adjust is uniquely important for fraud detection because patterns evolve rapidly. In 2016, the bank Credit Suisse Group AG launched an AI joint venture with Silicon Valley surveillance and security firm Palantir Technologies. To help banks detect unauthorised trading, they developed a solution that aims to catch employees with unethical behaviours before they can harm the bank (Voegeli, 2016[43]). Fraud detection based on ML biometric security systems is also gaining traction in the telecommunications sector.
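At its simplest, anomaly-based monitoring flags any transaction that deviates sharply from an account's recent pattern. The following sketch uses a rolling mean and standard deviation with hypothetical amounts; production systems learn far richer behavioural models, but the trigger logic is similar:

```python
import statistics

def is_anomalous(history, amount, threshold=3.0):
    """Flag a transaction whose amount lies more than `threshold`
    standard deviations from the account's recent mean."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(amount - mean) > threshold * sd

# Hypothetical recent transaction amounts (USD) for one account
history = [42, 38, 55, 47, 51, 44, 49, 53, 40, 46]

is_anomalous(history, 48)   # typical amount -> False
is_anomalous(history, 900)  # sudden large transfer -> True, flag for review
```

The self-adjusting property mentioned above corresponds to continually refreshing `history` (and, in real systems, retraining the model) so that the definition of "normal" tracks evolving behaviour.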
Algorithmic trading
Algorithmic trading is the use of computer algorithms to decide on trades automatically, submit orders and manage those orders after submission. The popularity of algorithmic trading has grown dramatically over the past decade. It now accounts for the majority of trades put through exchanges globally. In 2017, JPMorgan estimated that just 10% of trading volume in stocks was “regular stock picking” (Cheng, 2017[44]). Increased computing capabilities enable “high frequency trading” whereby millions of orders are transmitted every day and many markets are scanned simultaneously. In addition, while most human brokers use the same type of predictors, the use of AI allows more factors to be considered.
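A classic rule-based trading algorithm is the moving-average crossover; AI-based systems replace the hand-written rule with learned predictors drawing on many more factors, but the signal-then-order structure is similar. An illustrative sketch with hypothetical prices:

```python
def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    """Buy when the short-term average is above the long-term one,
    sell when it is below, otherwise hold."""
    if len(prices) < long:
        return "hold"
    fast, slow = sma(prices, short), sma(prices, long)
    if fast > slow:
        return "buy"
    if fast < slow:
        return "sell"
    return "hold"

uptrend = [100, 101, 102, 104, 107, 111]    # fast average pulls ahead
downtrend = [111, 110, 108, 105, 101, 96]   # fast average falls behind
```

In high-frequency settings, this decision loop runs millions of times a day across many markets simultaneously, with order submission and management automated as well.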
AI in marketing and advertising
AI is influencing marketing and advertising in many ways. At the core, AI is enabling the personalisation of online experiences. This helps display the content in which consumers are most likely to be interested. Developments in ML, coupled with the large quantities of data being generated, increasingly allow advertisers to target their campaigns. They can deliver personalised and dynamic ads to consumers at an unprecedented scale (Chow, 2017[45]). Personalised advertising offers significant benefits to enterprises and consumers. For enterprises, it could increase sales and the return on investment of marketing campaigns. For consumers, online services funded by advertising revenue are often provided free of charge to end users and can significantly decrease consumers’ research costs.
The following non-exhaustive list outlines some developments in AI that could have a large impact on marketing and advertising practices around the world:
Language processing: One of the major subfields of AI that increases personalisation of ads and marketing messages is natural language processing (NLP). It enables the tailoring of marketing campaigns based on linguistic context such as social media posts, emails, customer service interactions and product reviews. Through NLP algorithms, machines learn words and identify patterns of words in common human language. They improve their accuracy as they go. In so doing, they can infer a customer’s preferences and buying intent (Hinds, 2018[46]). NLP can improve the quality of online search results and create a better match between the customer’s expectations and the ads presented, leading to greater advertising efficiency. For example, if customers searched online for a specific brand of shoes, an AI-based advertising algorithm could send targeted ads for this brand while they are doing unrelated tasks online. It can even send phone notifications when customers walk close to a shoe store offering discounts.
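The query-to-ad matching described above can be illustrated with a deliberately simple sketch that scores ads by word overlap with a search query. The ad inventory and keywords are hypothetical; production NLP relies on learned representations of meaning rather than literal token matches.

```python
# Toy intent matching: pick the ad whose keyword set overlaps most
# with the words of the customer's query. Inventory is invented.
def best_ad(query, ads):
    """ads maps an ad name to its keyword set; return the best-matching ad."""
    words = set(query.lower().split())
    return max(ads, key=lambda ad: len(words & ads[ad]))

ads = {
    "running_shoes_promo": {"shoes", "running", "trainers", "discount"},
    "coffee_subscription": {"coffee", "beans", "espresso", "subscription"},
}
print(best_ad("best running shoes for marathons", ads))
```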
Structured data analysis: AI’s marketing impact goes beyond the use of NLP models to analyse “unstructured data”. Because of AI, today’s online recommendation algorithms vastly outdo simple sets of guidelines or historical ratings from users. Instead, a wide range of data is used to provide customised recommendations. For instance, Netflix creates personalised suggested watching lists by considering what movies a person has watched or the ratings given to those movies. However, it also analyses which movies are watched multiple times, rewound and fast-forwarded (Plummer, 2017[47]).
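The recommendation logic described here can be sketched as a small collaborative filter: items the target user has not rated are scored using ratings from users with similar taste. The ratings matrix is invented, and real services such as Netflix combine far richer behavioural signals (rewatches, rewinds, viewing times) than this toy example.

```python
# Toy user-based collaborative filtering with cosine similarity.
from math import sqrt

def cosine(u, v):
    """Similarity between two sparse rating dicts."""
    shared = [k for k in u if k in v]
    if not shared:
        return 0.0
    dot = sum(u[k] * v[k] for k in shared)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(target, others):
    """Rank items the target has not rated by similarity-weighted ratings."""
    scores = {}
    for other in others:
        sim = cosine(target, other)
        for item, rating in other.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

alice = {"drama_a": 5, "comedy_b": 1}
others = [{"drama_a": 4, "drama_c": 5}, {"comedy_b": 5, "comedy_d": 4}]
print(recommend(alice, others))
```

Because Alice rated the drama highly and the comedy poorly, the drama-lover's unseen title ranks first.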
Determining the likelihood of success: In online advertising, click-through rate (CTR) – the number of people who click on an ad divided by the number who see it – is an important metric for assessing ad performance. As a result, click prediction systems based on ML algorithms have been designed to maximise the impact of sponsored ads and online marketing campaigns. For the most part, reinforcement learning algorithms are used to select the ad with the characteristics most likely to maximise CTR in the targeted population. Boosting CTR could significantly increase businesses’ revenue: a 1% CTR improvement could yield huge gains in additional sales (Hong, 27 August 2017[48]).
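One hedged way to picture CTR-maximising ad selection is a multi-armed bandit, a common reinforcement-learning framing of the problem. The click probabilities below are simulated for the sketch; a live system would observe real impressions and clicks instead.

```python
# Epsilon-greedy bandit over three candidate ads with simulated click rates.
import random

def run_bandit(true_ctrs, rounds=5000, epsilon=0.1, seed=0):
    """Select ads, observe simulated clicks, return the estimated CTR per ad."""
    rng = random.Random(seed)
    shows = [0] * len(true_ctrs)
    clicks = [0] * len(true_ctrs)
    for _ in range(rounds):
        if rng.random() < epsilon:                      # explore a random ad
            ad = rng.randrange(len(true_ctrs))
        else:                                           # exploit the best estimate so far
            ad = max(range(len(true_ctrs)),
                     key=lambda i: clicks[i] / shows[i] if shows[i] else 1.0)
        shows[ad] += 1
        clicks[ad] += int(rng.random() < true_ctrs[ad])  # simulated click outcome
    return [c / s if s else 0.0 for c, s in zip(clicks, shows)]

estimates = run_bandit([0.02, 0.05, 0.11])
print(estimates)
```

Under this simulation the highest-CTR ad accumulates the most impressions, which is exactly the exploration-exploitation trade-off that click prediction systems manage at scale.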
Personalised pricing:4 AI technologies are allowing companies to offer prices that continuously adjust to consumer behaviour and preferences. At the same time, companies can respond to the laws of supply and demand, profit requirements and external influences. ML algorithms can predict the top price a customer will pay for a product. These prices are uniquely tailored to the individual consumer at the point of engagement, such as online platforms (Waid, 2018[49]). On the one hand, AI can leverage dynamic pricing to the consumer’s benefit. On the other, personalised pricing will likely be detrimental if it involves exploitative, distortionary or exclusionary pricing (Brodmerkel, 2017[50]).
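A minimal sketch of the dynamic side of such pricing might adjust the price in proportion to the gap between observed and targeted demand. The sensitivity and demand figures below are hypothetical; genuinely personalised pricing would additionally condition on individual customer features, which is where the concerns noted above arise.

```python
# Demand-responsive price adjustment: nudge price up when demand exceeds
# the target, down when it falls short. All figures are invented.
def adjust_price(price, units_sold, units_target, sensitivity=0.1, floor=1.0):
    """Return the new price after one adjustment step."""
    gap = (units_sold - units_target) / units_target
    return max(floor, round(price * (1 + sensitivity * gap), 2))

print(adjust_price(20.0, units_sold=150, units_target=100))  # demand high, price rises
print(adjust_price(20.0, units_sold=60, units_target=100))   # demand weak, price falls
```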
AI-powered augmented reality: Augmented reality (AR) provides digital representations of products superimposed on the customer’s view of the real world. AR combined with AI can give customers an idea of how a product would look once produced and placed in its projected physical context. AI-powered AR systems can learn from a customer’s preferences and adapt the computer-generated images of products accordingly, improving the customer experience and increasing the likelihood of purchase (De Jesus, 2018[51]). AR could expand the online shopping market and thus boost online advertising revenue.
AI in science
Global challenges today range from climate change to antibiotic resistance in bacteria. Solutions to many of these challenges require increases in scientific knowledge. AI could increase the productivity of science, at a time when some scholars are claiming that new ideas may be becoming harder to find (Bloom et al., 2017[52]). AI also promises to improve research productivity even as pressure on public research budgets increases. Scientific insight depends on drawing understanding from the vast amounts of scientific data generated by new scientific instrumentation. In this context, the use of AI in science is becoming indispensable. Furthermore, AI will be a necessary complement to human scientists because the volume of scientific papers is vast and growing rapidly, and scientists may have reached “peak reading”.5
The use of AI in science may also enable novel forms of discovery and enhance the reproducibility of scientific research. AI’s applications in science and industry have become numerous and increasingly significant. For instance, AI has predicted the behaviour of chaotic systems, tackled complex computational problems in genetics, improved the quality of astronomical imaging and helped discover the rules of chemical synthesis. In addition, AI is being deployed in functions that range from analysis of large datasets, hypothesis generation, and comprehension and analysis of scientific literature to facilitation of data gathering, experimental design and experimentation itself.
Recent drivers of AI in science
Forms of AI have been applied to scientific discovery for some time, even if this has been sporadic. For example, the AI program DENDRAL was used in the 1960s to help identify chemical structures. In the 1970s, an AI known as Automated Mathematician assisted mathematical research. Since those early approaches, computer hardware and software have vastly improved, and data availability has increased significantly. Several additional factors are also enabling AI in science: AI is well-funded, at least in the commercial sector; scientific data are increasingly abundant; high-performance computing is improving; and scientists now have access to open-source AI code.
The diversity of AI applications in science
AI is in use across many fields of research. It is a frequently used technique in particle physics, for example, which depends on finding complex spatial patterns in the vast streams of data yielded by particle detectors. With data gleaned from social media, AI is providing evidence on relationships between language use, psychology and health, and social and economic outcomes. AI is also tackling complex computational problems in genetics, improving the quality of imaging in astronomy and helping discover the rules of chemical synthesis, among other uses (OECD, 2018[53]). The range and frequency of such applications is likely to grow. As advances occur in automated ML processes, scientists, businesses and other users can more readily employ this technology.
Progress has also occurred in AI-enabled hypothesis generation. For example, IBM has produced a prototype system, KnIT, which mines information contained in scientific literature, represents it explicitly in a queryable network and reasons on these data to generate new and testable hypotheses. KnIT has text-mined published literature to identify new kinases – enzymes that catalyse the transfer of phosphate groups from high-energy, phosphate-donating molecules to specific substrates – that introduce a phosphate group into a protein tumour suppressor (Spangler et al., 2014[54]).
AI is likewise assisting in the review, comprehension and analysis of scientific literature. NLP can now automatically extract both relationships and context from scientific papers. For example, the KnIT system involves automated hypothesis generation based on text mining of scientific literature. Iris.AI6 is a start-up that offers a free tool to extract key concepts from research abstracts. It presents the concepts visually (such that the user can see cross-disciplinary relationships). It also gathers relevant papers from a library of over 66 million open access papers.
AI is assisting in large-scale data collection. In citizen science, for example, applications use AI to help users identify unknown animal and plant specimens (Matchar, 2017[55]).
AI can also combine with robotic systems to execute closed-loop scientific research
The convergence of AI and robotics has many potential benefits for science. Laboratory-automation systems can physically apply techniques from the AI field to pursue scientific experiments. At a laboratory at Aberystwyth University in Wales, for example, a robot named Adam uses AI techniques to perform cycles of scientific experimentation automatically. It has been described as the first machine to independently discover new scientific knowledge (King et al., 2004[56]). Its successor, Eve, discovered that the compound triclosan works against wild-type and drug-resistant Plasmodium falciparum and Plasmodium vivax. Fully automating science has several potential advantages (OECD, 2018[57]):
Faster scientific discovery: Automated systems can generate and test thousands of hypotheses in parallel. Due to their cognitive limits, human beings can only consider a few hypotheses at a time (King et al., 2004[56]).
Cheaper experimentation: AI systems can select experiments that cost less to perform (Williams et al., 2015[58]). AI also enables efficient exploration and exploitation of unknown experimental landscapes, leading to the development of novel drugs (Segler, Preuss and Waller, 2018[59]), materials (Butler et al., 2018[60]) and devices (Kim et al., 2017[61]).
Easier training: Including initial education, fully training a human scientist takes over 20 years and considerable resources, and humans can only absorb knowledge slowly through teaching and experience. Robots, by contrast, can absorb knowledge directly from each other.
Improved knowledge and data sharing and scientific reproducibility: One of the most important issues in biology – and other scientific fields – is reproducibility. Robots have the superhuman ability to record experimental actions and results. These results, along with the associated metadata and employed procedures, are recorded automatically and completely, in accordance with accepted standards, at no additional cost. By contrast, recording data, metadata and procedures adds up to 15% to the total cost of experiments performed by humans.
Laboratory automation is essential to most areas of science and technology. However, it is expensive and difficult to use due to a low number of units sold and market immaturity. Consequently, laboratory automation is used most economically in large central sites. Indeed, companies and universities are increasingly concentrating their laboratory automation. The most advanced example of this trend is cloud automation. In this practice, a large amount of equipment is gathered in a single site. Biologists, for example, send their samples and use an application to help design their experiments.
Policy considerations
The increasing use of AI systems in science could also affect sociological, institutional and other aspects of science. These include the transmission of knowledge, systems of credit for scientific discoveries, the peer-review system and systems of intellectual property rights. As AI contributes increasingly to the world of science, the importance of policies that affect access to data and high-performance computing will amplify. The growing prominence of AI in discovery is raising new, and as yet unanswered, questions. Should machines be included in academic citations? Will IP systems need adjustments in a world in which machines can invent? In addition, a fundamental policy issue concerns education and training (OECD, 2018[57]).
AI in health
Background
AI applications in healthcare and pharmaceuticals can help detect health conditions early, deliver preventative services, optimise clinical decision making, and discover new treatments and medications. They can facilitate personalised healthcare and precision medicine, while powering self-monitoring tools, applications and trackers. AI in healthcare offers potential benefits for quality and cost of care. Nevertheless, it also raises policy questions, in particular concerning access to (health) data (Section “AI in Health”) and privacy (Subsection “Personal data protection” in Chapter 4). This section focuses on AI’s specific implications for healthcare.
In some ways, the health sector is an ideal platform for AI systems and a perfect illustration of its potential impacts. A knowledge-intensive industry, it depends on data and analytics to improve therapies and practices. There has been tremendous growth in the range of information collected, including clinical, genetic, behavioural and environmental data. Every day, healthcare professionals, biomedical researchers and patients produce vast amounts of data from an array of devices. These include electronic health records (EHRs), genome sequencing machines, high-resolution medical imaging, smartphone applications and ubiquitous sensing, as well as Internet of Things (IoT) devices that monitor patient health (OECD, 2015[62]).
Beneficial impact of AI on healthcare
If put to use, the data generated across the health sector could be of great value to healthcare and research. Indeed, health sectors across countries are undergoing a profound transformation as they capitalise on opportunities provided by information and communication technologies. Key objectives shaping this transformation process include improved efficiency, productivity and quality of care (OECD, 2017[26]).
Specific illustrations
Improving patient care: Secondary use of health data can improve the quality and effectiveness of patient care, in both clinical and homecare settings. For example, AI systems can alert administrators and front-line clinicians when measures related to quality and patient safety fall outside a normal range. They can also highlight factors that may be contributing to the deviations (Canadian Institute for Health Information, 2013[63]). A specific aspect of improving patient care concerns precision medicine. This is based on rapid processing of a variety of complex datasets such as a patient’s health records, physiological reactions and genomic data. Another aspect concerns mobile health: mobile technologies provide helpful real-time feedback along the care continuum – from prevention to diagnosis, treatment and monitoring. Linked with other personal information such as location and preferences, AI-enhanced technologies can identify risky behaviours or encourage beneficial ones. Thus, they can produce tailored interventions to promote healthier behaviour (e.g. taking the stairs instead of the lift, drinking water or walking more) and achieve better health outcomes. These technologies, as well as sensor-based monitoring systems, offer continuous and direct monitoring and personalised intervention. As such, they can be particularly useful to improve the quality of elderly care and the care of people with disabilities (OECD, 2015[62]).
Managing health systems: Health data can inform decisions regarding programmes, policy and funding. In this way, they can help manage and improve the effectiveness and efficiency of the health system. For example, AI systems can reduce costs by identifying ineffective interventions, missed opportunities and duplicated services. Access to care can be increased and wait times reduced in four key ways. First, AI systems understand patient journeys across the continuum of care. Second, they ensure that patients receive the services most appropriate for their needs. Third, they accurately project future healthcare needs of the population. Fourth, they optimise allocation of resources across the system (Canadian Institute for Health Information, 2013[63]). With increasing monitoring of therapies and events related to pharmaceuticals and medical devices (OECD, 2015[62]), countries can use AI to advance identification of patterns, such as systemic failures and successes. More generally, data-driven innovation fosters a vision for a “learning health system”. Such a system can continuously incorporate data from researchers, providers and patients. This allows it to improve comprehensive clinical algorithms, reflecting preferred care at a series of decision nodes for clinical decision support (OECD, 2015[62]).
Understanding and managing population and public health: In addition to timelier public health surveillance of influenza and other viral outbreaks, data can be used to identify unanticipated side effects and contraindications of new drugs (Canadian Institute for Health Information, 2013[63]). AI technologies may allow for early identification of outbreaks and surveillance of disease spreading. Social media, for example, can both detect and disseminate information on public health. AI uses NLP tools to process posts on social media to extract potential side effects (Comfort et al., 2018[64]; Patton, 2018[65]).
Facilitating health research: Health data can support clinical research and accelerate discovery of new therapies. Big data analytics offers new and more powerful opportunities to measure disease progression and health for improved diagnosis and care delivery, as well as translational and clinical research, e.g. for developing new drugs. In 2015, for example, the pharmaceutical company Atomwise collaborated with researchers at the University of Toronto and IBM to use AI technology in performing Ebola treatment research.7 The use of AI is also increasingly tested in medical diagnosis, with a landmark approval by the United States Food and Drug Administration. The ruling allowed marketing of the first medical device to use AI to “detect greater than a mild level of the eye disease diabetic retinopathy in adults who have diabetes” (FDA, 2018[66]). Similarly, ML techniques can be used to train models to classify images of the eye, potentially embedding cataract detectors in smartphones and bringing them to remote areas (Lee, Baughman and Lee, 2017[67]; Patton, 2018[65]). In a recent study, a deep-learning algorithm was fed more than 100 000 images of malignant melanomas and benign moles. It eventually outperformed a group of 58 international dermatologists in the detection of skin cancer (Mar and Soyer, 2018[68]).
Enabling AI in healthcare – success and risk factors
Sufficient infrastructure and risk mitigation should be in place to take full advantage of AI capabilities in the health sector.
Countries are increasingly establishing EHR systems and adopting mobile health (m-health), allowing mobile services to support the practice of medicine and public health (OECD, 2017[69]). Robust evidence demonstrates how EHRs can help reduce medication errors and better co-ordinate care (OECD, 2017[26]). On the other hand, the same study showed that only a few countries have achieved high-level integration and capitalised on the possibility of extracting data from EHRs for research, statistics and other secondary uses. Healthcare systems still tend to capture data in silos and analyse them separately. Standards and interoperability are key challenges that must be addressed to realise the full potential of EHRs (OECD, 2017[26]).
Another critical factor for the use of AI in the health sector concerns minimising the risks to data subjects’ privacy. The risks in increased collection and processing of personal data are described in detail in Subsection “Personal data protection” of Chapter 4. This subsection addresses the high sensitivity of health-related information. Bias in the operation of an algorithm recommending specific treatment could create real health risks to certain groups. Other privacy risks are particular to the health sector. For example, questions have arisen over whether data extracted from implantable healthcare devices, such as pacemakers, can be used as evidence in court.8 Additionally, as these devices become more sophisticated, they raise increasing safety risks, such as a malicious takeover that could trigger a harmful operation. Another example is the use of biological samples (e.g. tissues) for ML, which raises complex questions of consent and ownership (OECD, 2015[62]; Ornstein and Thomas, 2018[70]).9
As a result of these concerns, many OECD countries report legislative barriers to the use of personal health data. These barriers include disabling data linkages and hindering the development of databases from EHRs. The 2016 Recommendation of the Council on Health Data Governance is an important step towards a more coherent approach in health data management and use (OECD, 2016[71]). It aims primarily to promote the establishment and implementation of a national health data governance framework. Such a framework would encourage the availability and use of personal health data to serve health-related public interests. At the same time, it would promote protection of privacy, personal health data and data security. Adopting a coherent approach to data management could help remove the trade-off between data use and security.
Involving all relevant stakeholders is an important means of garnering trust and public support in the use of AI and data collection for health purposes. Similarly, governments could develop appropriate trainings for future health data scientists, or pair data scientists with healthcare practitioners. In this way, they could provide better understanding of the opportunities and risks in this emerging field (OECD, 2015[62]). Involving clinicians in the design and development of AI healthcare systems could prove essential for getting patients and providers to trust AI-based healthcare products and services.
AI in criminal justice
AI and predictive algorithms in the legal system
AI holds the potential to improve access to justice and advance its effective and impartial adjudication. However, concerns exist about AI systems’ potential challenges to citizen participation, transparency, dignity, privacy and liberty. This section will focus on AI advancement in the area of criminal justice, touching upon developments in other legal areas as well.
AI is increasingly used in different stages of the criminal procedure. These range from predicting where crimes may occur and the outcome of a criminal procedure to conducting risk assessments on defendants, as well as to contributing to more efficient management of the process. Although many AI applications are still experimental, a few advanced prediction products are already in use in justice administration and law enforcement. AI can improve the ability to make connections, detect patterns, and prevent and solve crimes (Wyllie, 2013[72]). The uptick in the use of such tools follows a larger trend of turning to fact-based methods as a more efficient, rational and cost-effective way to allocate scarce law enforcement resources (Horgan, 2008[73]).
Criminal justice is a sensitive point of interaction between governments and citizens, where asymmetry of power relations and information is particularly pronounced. Without sufficient safeguards, the use of AI in criminal justice might create disproportionately adverse results, reinforce systemic biases and possibly even create new ones (Barocas and Selbst, 2016[74]).
Predictive policing
In predictive policing, law enforcement uses AI to identify patterns in order to make statistical predictions about potential criminal activity (Ferguson, 2014[75]). Predictive methods were used in policing even before the introduction of AI to this field. In one notable example, police analysed accumulated data to map cities into high- and low-risk neighbourhoods (Brayne, Rosenblat and Boyd, 2015[76]). However, AI can link multiple datasets and perform complex and more fine-grained analytics, thus providing more accurate predictions. For example, the combination of automatic license plate readers, ubiquitous cameras, inexpensive data storage and enhanced computing capabilities can provide police forces with significant information on many people. Using these data, police can identify patterns, including patterns of criminal behaviour (Joh, 2017[77]).
There are two major methods of predictive policing. Location prediction applies retrospective crime data to forecast when and where crimes are likely to occur. Locations could include liquor stores, bars and parks where certain crimes have occurred in the past. Law enforcement could attempt to prevent future crimes by deploying officers to patrol these areas on specific days and times of the week. In person-based prediction, law enforcement departments use crime statistics to help predict which individuals or groups are most likely to be involved in crimes, either as victims or offenders.
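Location prediction can be caricatured as counting past incidents per map-grid cell and patrolling the densest cells. The coordinates below are invented, and real tools such as PredPol add temporal decay and near-repeat modelling rather than using raw counts.

```python
# Toy hotspot detection: bucket past incidents into grid cells and
# return the cells with the most incidents. Coordinates are invented.
from collections import Counter

def hotspots(incidents, cell_size=1.0, top=2):
    """Return the grid cells with the most recorded incidents."""
    cells = Counter((int(x // cell_size), int(y // cell_size)) for x, y in incidents)
    return [cell for cell, _ in cells.most_common(top)]

past_incidents = [(0.2, 0.3), (0.8, 0.1), (0.5, 0.9),   # cluster in cell (0, 0)
                  (3.1, 2.2), (3.4, 2.8),               # smaller cluster in (3, 2)
                  (7.0, 7.0)]
print(hotspots(past_incidents))
```

Even this caricature shows why such systems raise feedback concerns: patrolling the flagged cells generates more recorded incidents there, reinforcing the prediction.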
AI-enhanced predictive policing initiatives are being trialled in cities around the world, including in Manchester, Durham, Bogota, London, Madrid, Copenhagen and Singapore. In the United Kingdom, the Greater Manchester Police developed a predictive crime mapping system in 2012. Since 2013, the Kent Police has been using a system called PredPol. These two systems estimate the likelihood of crime in particular locations during a window of time. They use an algorithm originally developed to predict earthquakes.
In Colombia, the Data-Pop Alliance uses crime and transportation data to predict criminal hotspots in Bogota. Police forces are then deployed to specific places and at specific times where risk of crime is higher.
Many police departments also rely on social media for a wide range of purposes. These include discovering criminal activity, obtaining probable cause for search warrants, collecting evidence for court hearings, pinpointing the locations of criminals, managing volatile situations, identifying witnesses, broadcasting information and soliciting tips from the public (Mateescu et al., 2015[78]).
The use of AI raises issues with respect to use of personal data (Subsection “Personal data protection” in Chapter 4) and to risks of bias (Subsection “Fairness and ethics” in Chapter 4). In particular, it raises concerns with regard to transparency and the ability to understand its operation. These issues are especially sensitive when it comes to criminal justice. One approach to improve algorithmic transparency, applied in the United Kingdom, is a framework called ALGO-CARE. This aims to ensure that police using algorithmic risk assessment tools consider key legal and practical elements (Burgess, 2018[79]). The initiative translates key public law and human rights principles, developed in high-level documents, into practical terms and guidance for police agencies.
Use of AI by the judiciary
In several jurisdictions, the judiciary uses AI primarily to assess risk. Risk assessment informs an array of criminal justice outcomes such as the amount of bail or other conditions for release and the eligibility for parole (Kehl, Guo and Kessler, 2017[80]). The use of AI for risk assessment follows other forms of actuarial tools that judges have relied on for decades (Christin, Rosenblat and Boyd, 2015[81]). Researchers at the Berkman Klein Center at Harvard University are working on a database of all risk assessment tools used in the criminal justice systems in the United States to help inform decision making (Bavitz and Hessekiel, 2018[82]).
Risk assessment algorithms predict the risk level based on a small number of factors, typically divided into two groups. These are criminal history (e.g. previous arrests and convictions, and prior failures to appear in court) and sociodemographic characteristics (e.g. age, sex, employment and residence status). Predictive algorithms summarise the relevant information for making decisions more efficiently than the human brain. This is because they process more data at a faster rate and may also be less exposed to human prejudice (Christin, Rosenblat and Boyd, 2015[81]).
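A schematic version of such a risk tool is a logistic score over the two factor groups just described. The weights below are invented for illustration, not taken from any deployed tool; fitting them to historical outcome data is precisely where the fairness concerns discussed in this section arise.

```python
# Schematic risk scoring: a logistic function over criminal-history and
# sociodemographic factors. Weights and bands are hypothetical.
from math import exp

WEIGHTS = {"prior_convictions": 0.4, "failures_to_appear": 0.6, "age_under_25": 0.5}
BIAS = -2.0

def risk_score(features):
    """Map factor values to a 0-1 score through a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + exp(-z))

def risk_band(score):
    """Discretise the score into the low/medium/high bands such tools report."""
    return "low" if score < 0.33 else "medium" if score < 0.66 else "high"

defendant = {"prior_convictions": 3, "failures_to_appear": 1, "age_under_25": 1}
score = risk_score(defendant)
print(round(score, 3), risk_band(score))
```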
AI-based risk assessment tools developed by private companies raise unique transparency and explainability concerns. These arise because non-disclosure agreements often prevent access to proprietary code to protect IP or prevent access for malicious purposes (Joh, 2017[77]). Without access to the code, there are only limited ways to examine the validity and reliability of the tools.
The non-profit news organisation ProPublica reportedly tested the validity of a proprietary tool called COMPAS, which is used in some jurisdictions in the United States. It found that COMPAS predictions were accurate 60% of the time across all types of crime. However, the prediction accuracy rate for violent crime was only 20%. In addition, the study pointed out racial disparities. The algorithm falsely flagged black defendants as future criminals twice as often as it did with white defendants (Angwin et al., 2016[83]). The study attracted media attention and its results were questioned on the basis of statistical errors (Flores, Bechtel and Lowenkamp, 2016[84]). COMPAS is a “black box” algorithm, meaning that no one, including its operators, has access to the source code.
The use of COMPAS was challenged in court, with opponents claiming that its proprietary nature violates defendants’ right to due process. The Supreme Court of Wisconsin approved the use of COMPAS in sentencing, but held that it must remain an assistive tool and that the judge must retain full discretion to determine additional factors and weigh them accordingly.10 The US Supreme Court denied a petition to hear the case.11
In another study examining the impact of AI on criminal justice, Kleinberg et al. (2017[85]) built an ML algorithm to predict which defendants would commit an additional crime while awaiting trial or fail to appear in court (pre-trial failures). Input variables were known and the algorithm determined the relevant sub-categories and their respective weights. For the age variable, for example, the algorithm determined the most statistically significant division into age brackets, such as 18-25 and 25-30 years old. The authors found this algorithm could considerably reduce incarceration rates, as well as racial disparities. Moreover, AI reduced human biases: the researchers concluded that any information beyond the factors necessary for prediction could distract judges and increase the risk of biased rulings.
Advanced AI-enhanced tools for risk assessment are also used in the United Kingdom. Durham Constabulary has developed the Harm Assessment Risk Tool to evaluate the risk of convicts reoffending. The tool is based on a person’s past offending history, age, postcode and other background characteristics. Based on these indicators, algorithms classify the person as low, medium or high risk.
Using AI to predict the outcome of cases
Using advanced language processing techniques and data analysis capabilities, several researchers have built algorithms that predict the outcome of cases with high accuracy. For example, researchers at University College London and the Universities of Sheffield and Pennsylvania developed an ML algorithm that can predict the outcome of cases heard by the European Court of Human Rights with a 79% accuracy rate (Aletras et al., 2016[86]). Another study by researchers from the Illinois Institute of Technology in Chicago built an algorithm that can predict the outcome of cases brought before the US Supreme Court, also with a 79% accuracy rate (Hutson, 2017[87]). The development of such algorithms could help the parties assess the likelihood of success at trial or on appeal (based on previous similar cases). It could also help lawyers identify which issues to highlight in order to increase their chances of winning.
Other uses of AI in legal procedures
In civil cases, the use of AI is broader. Attorneys are using AI for drafting contracts and for mining documents in discovery and due diligence (Marr, 2018[88]). The use of AI might expand to other areas of the criminal justice system, such as plea bargaining and examination. Because the design of the algorithms and their use could affect outcomes, the related policy implications need to be considered carefully.
AI in security
AI promises to help address complex digital and physical security challenges. In 2018, global defence spending was forecast to reach USD 1.67 trillion, a 3.3% year-on-year increase (IHS, 2017[89]). Security spending is not limited to the public sector, however. The private sector worldwide was expected to spend USD 96 billion to respond to security risks in 2018, an 8% increase from 2017 (Gartner, 2017[90]). Recent large-scale digital security attacks have increased society’s awareness of digital security. They have demonstrated that data breaches can have far-reaching economic, social and national security consequences. Against this backdrop, public and private actors alike are adopting and employing AI technologies to adjust to the changing security landscape worldwide. This section describes two security-related areas that are experiencing particularly rapid uptake: digital security and surveillance.12,13
AI in digital security
AI is already broadly used in digital security applications such as network security, anomaly detection, security operations automation and threat detection (OECD, 2017[26]). At the same time, malicious use of AI is expected to increase. Such malicious activities include identifying software vulnerabilities with the goal of exploiting them to breach the availability, integrity or confidentiality of systems, networks and data. This will affect the nature and overall level of digital security risk.
Two trends make AI systems increasingly relevant for security: the growing number of digital security attacks and the skills shortage in the digital security industry (ISACA, 2016[91]). As a result, ML tools and AI systems are increasingly used to automate threat detection and response (MIT, 2018[92]). Malware constantly evolves, and ML has become indispensable for combating attacks such as polymorphic viruses, denial of service and phishing.14 Indeed, leading email service providers such as Gmail and Outlook have employed ML, with varying levels of success, for more than a decade to filter unwanted or pernicious email messages. Box 3.1 illustrates some uses of AI to protect enterprises against malicious threats.
Computer code is prone to human error: nine out of ten digital security attacks are estimated to result from flaws in software code. This occurs despite the large share of development time – between 50% and 75% – spent on testing (FT, 2018[93]). Given the billions of lines of code written every year and the widespread re-use of third-party proprietary libraries, detecting and correcting errors in software code is a daunting task for the human eye. Countries such as the United States and China are funding research projects to build AI systems that can detect software security vulnerabilities. Companies such as Ubisoft – the video game maker – are starting to use AI to flag faulty code before it is implemented, reducing testing time by 20% (FT, 2018[93]). In practice, software-checking AI technologies work like the spellcheck tools that identify typos and syntax errors in word processing software. Unlike spellcheckers, however, AI technologies learn and become more effective as they go (FT, 2018[93]).
Box 3.1. Using AI to manage digital security risk in business environments
Companies like Darktrace, Vectra and many others apply ML and AI to detect and react to digital security attacks in real time. Darktrace relies on its Enterprise Immune System technology, which does not require previous experience of a threat to understand its potential danger. AI algorithms iteratively learn a network’s unique “pattern of life”, or “self”, to spot emerging threats that would otherwise go unnoticed. In this, Darktrace is analogous to the human immune system, which learns what is normal for the body and automatically identifies and neutralises situations that fall outside that pattern of normality.
Vectra proactively hunts down attackers in cloud environments using its non-stop, automated and always-learning Cognito platform. This provides full visibility into attacker behaviours, from cloud and data-centre workloads to user and IoT devices. In this way, Vectra makes it increasingly hard for attackers to hide.
Sources: www.darktrace.com/; https://vectra.ai/.
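A very rough sketch of the “pattern of life” idea in Box 3.1, learning each host’s normal behaviour and flagging large deviations from it, might look as follows. The statistical baseline, host names and traffic figures are illustrative assumptions; commercial systems rely on far more sophisticated models than a per-host z-score:

```python
import statistics

def fit_baseline(history):
    """Learn each host's normal hourly traffic (mean and spread) from past observations."""
    return {
        host: (statistics.mean(vals), statistics.stdev(vals))
        for host, vals in history.items()
    }

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose current traffic deviates from their own norm by > threshold std devs."""
    alerts = []
    for host, value in current.items():
        mean, stdev = baseline[host]
        if stdev > 0 and abs(value - mean) / stdev > threshold:
            alerts.append(host)
    return alerts

# Hypothetical per-host traffic history (MB transferred per hour)
history = {
    "workstation-17": [40, 42, 38, 41, 39, 40],
    "file-server-03": [900, 950, 880, 920, 910, 940],
}
baseline = fit_baseline(history)
# The workstation suddenly moves far more data than its learned norm
print(flag_anomalies(baseline, {"workstation-17": 400, "file-server-03": 930}))
```

The key property, shared with the systems described above, is that no prior signature of the attack is needed: the alert comes purely from a departure from learned normal behaviour.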
AI in surveillance
Cities are expanding their digital infrastructure. This is especially true in the surveillance sector, where various AI-based tools are being installed to increase public security. Smart cameras, for example, can detect a fight, while gunshot locators automatically report detected shots and provide their exact location. This section looks at how AI is transforming public security and surveillance.
Video surveillance has become an increasingly common tool to enhance public security. In the United Kingdom, a recent study estimated that security footage provided useful evidence for 65% of the crimes committed on the British railway network between 2011 and 2015 for which footage was available (Ashby, 2017[94]). The massive number of surveillance cameras – 245 million globally in 2014 – implies a growing amount of data: from 413 petabytes (PB) produced in a single day in 2013 to an estimated 860 PB a day in 2017 (Jenkins, 2015[95]; Civardi, 2017[96]). Humans can process only a fraction of such volumes, opening the way for AI technologies designed to handle large amounts of data and automate the mechanical processes of detection and control. Moreover, AI enables security systems to detect and react to crime in real time (Box 3.2).
Box 3.2. Surveillance with “smart” cameras
The French Alternative Energies and Atomic Energy Commission (CEA), in partnership with Thales, uses deep learning to automatically analyse and interpret video for security applications. A Violent Event Detection module automatically detects violent interactions, such as a fight or an aggression, captured by closed-circuit television cameras and alerts operators in real time. Another module helps locate the perpetrators on the camera network. These applications are being evaluated by the French public transport operators RATP and SNCF at Châtelet-Les Halles and Gare du Nord, two of Paris’ busiest train and metro stations. The city of Toulouse, France, also uses smart cameras to signal unusual behaviour and spot abandoned luggage in public places. Similar projects are being trialled in Berlin, Rotterdam and Shanghai.
Source: Demonstrations and information provided to the OECD by CEA Tech and Thales in 2018. More information (in French) available at: www.gouvernement.fr/sites/default/files/contenu/piece-jointe/2015/11/projet_voie_videoprotection_ouverte_et_integree_appel_a_projets.pdf.
Box 3.3. Face recognition as a tool for surveillance
Face-recognition technologies are increasingly being used by private and public actors to provide effective surveillance (Figure 3.4). AI improves traditional face-recognition systems by allowing faster and more accurate identification in cases where traditional systems would fail, such as poor lighting or obstructed targets. Companies such as FaceFirst combine face-recognition tools with AI to offer solutions to prevent theft, fraud and violence. Specific safeguards are embedded into their design so that they meet high standards of privacy and security, such as anti-profiling to prevent discrimination, encryption of image data and strict timeframes for data purging. These surveillance tools have been applied across industries, ranging from retail (e.g. to stop shoplifting), banking (e.g. to prevent identity fraud) and law enforcement (e.g. for border security) to event management (e.g. to recognise banned fans) and casinos (e.g. to spot important people).
Reflecting the dual-use nature of AI, surveillance tools incorporating AI can serve legitimate purposes but could also be put to uses that go against the principles described in Chapter 4. Legitimate purposes include law enforcement uses to streamline criminal investigations, detect and stop crimes at an early stage and counter terrorism. Face-recognition technologies have proven relevant in this regard (Box 3.3). However, the impact of AI on surveillance goes beyond face recognition. AI also plays an increasingly important role in faceless recognition technologies, which identify subjects from alternative information such as height, clothing, build or posture. Additionally, AI has proven effective when combined with image-sharpening technologies: large image datasets are used to train neural networks on the typical features of physical objects such as skin, hair or even bricks in a wall. The system then recognises these features in new images and, using the knowledge previously acquired, adds extra detail and texture to them. This fills the gaps produced by poor image resolution, improving the effectiveness of surveillance systems (Medhi, Scholkopf and Hirsch, 2017[97]).
AI in the public sector
The potential of AI for public administrations is manifold. The development of AI technologies is already having an impact on how the public sector works and designs policies to serve citizens and businesses. Applications touch on areas such as health, transportation and security services.15
Governments in OECD countries are experimenting with and implementing projects that exploit AI to better meet the needs of public-service users and to enhance stewardship of public resources (e.g. by reducing the time civil servants spend on customer support and administrative tasks). AI tools could enhance the efficiency and quality of many public sector procedures. For example, they could let citizens engage from the outset in the process of service design and interact with the state in a more agile, effective and personalised way. If correctly designed and implemented, AI technologies could be integrated into the entire policy-making process, support public sector reforms and improve public sector productivity.
Some governments have deployed AI systems to strengthen social welfare programmes. For instance, AI could help maintain optimal inventory levels at health and social service locations through ML technologies that analyse transaction data and make increasingly accurate replenishment predictions. This, in turn, would facilitate forecasting and policy development. In another example, AI algorithms are helping the UK government detect fraud in social benefit claims (Marr, 2018[98]).
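In its simplest form, the replenishment logic described above might look like the following sketch. The smoothing method, weekly figures and safety-stock parameter are illustrative assumptions, not details of any deployed system:

```python
def forecast_demand(history, alpha=0.4):
    """Simple exponential smoothing: each new observation partially updates the forecast."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

def reorder_quantity(history, on_hand, safety_stock=10):
    """Order enough to cover the forecast plus a safety buffer, net of current stock."""
    needed = forecast_demand(history) + safety_stock
    return max(0, round(needed - on_hand))

# Hypothetical weekly dispensing counts for one medical supply item
weekly_use = [120, 130, 125, 140, 150]
print(reorder_quantity(weekly_use, on_hand=60))
```

The ML systems the text describes would replace the fixed smoothing rule with models that learn seasonality and other patterns from transaction data, but the decision structure (forecast, buffer, net of stock) is the same.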
AI applications using augmented and virtual reality
Companies are using AI technology and high-level visual recognition tasks, such as image classification and object detection, to develop augmented reality (AR) and virtual reality (VR) hardware and software. Benefits include offering immersive experiences, training and education, helping people with disabilities and providing entertainment. VR and AR have grown remarkably since Ivan Sutherland developed the first VR headset prototype in 1968 to view 3D images; too heavy to wear, it had to be mounted on the ceiling (Günger and Zengin, 2017[99]). VR companies now provide 360-degree video streaming experiences with much lighter headsets. Pokemon GO drew consumers’ attention to AR in 2016 and expectations remain high. AI-embedded applications are already on the market: IKEA provides a mobile app that lets customers see how a piece of furniture would look and fit in a given space with accuracy of up to 1 millimetre (Jesus, 2018[100]). Some tech companies are developing applications for the visually impaired.16
AI enables interactive AR/VR
As AR/VR develop, AI is being used to make them interactive and their content more attractive and intuitive. AI technologies enable AR/VR systems to detect user motions, such as eye movements and hand gestures, interpret them with high accuracy and customise content in real time according to the user’s reaction (Lindell, 2017[101]). For example, AI in VR can detect where a user is looking and provide full-resolution content only in that visual field. This reduces system resource needs, lag and frame loss (Hall, 2017[102]). Symbiotic development of AR/VR and AI technologies is expected in fields such as marketing research, training simulations and education (Kilpatrick, 2018[103]; Stanford, 2016[104]).
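The gaze-contingent rendering just described can be sketched in a few lines: tiles near the gaze point get full resolution, peripheral tiles get less. The coordinates, resolutions and fovea radius below are invented for illustration:

```python
import math

def tile_resolution(tile_center, gaze, full_res=2160, low_res=540, fovea_radius=0.15):
    """Render tiles near the gaze point at full resolution; degrade peripheral tiles."""
    distance = math.dist(tile_center, gaze)  # Euclidean distance in screen space
    return full_res if distance <= fovea_radius else low_res

# Normalised screen coordinates: the user is looking at the upper-left region
gaze = (0.2, 0.2)
print(tile_resolution((0.25, 0.2), gaze))  # near the gaze point
print(tile_resolution((0.8, 0.8), gaze))   # in the periphery
```

In a real headset the gaze point comes from an eye tracker and resolution falls off gradually rather than in one step, but the resource saving comes from exactly this kind of gaze-dependent decision.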
VR for training AI systems
Some AI systems require large amounts of training data, and data availability remains an important issue. For example, AI systems in driverless cars must be trained to deal with critical situations, yet little real-world data exist on, say, children running onto a street. An alternative is to develop a digital reality: the AI system is trained in a computer-simulated environment that faithfully replicates the relevant features of the real world. Such a simulated environment can also be used to validate the performance of AI systems (e.g. as a “driver’s licence test” for driverless cars) (Slusallek, 2018[105]).
The field of application goes beyond driverless cars. Researchers have developed a platform called the Household Multimodal Environment (HoME) to provide a simulated environment for training household robots. Built on a database of over 45 000 diverse 3D house layouts, HoME offers a realistic environment in which artificial agents learn through vision, audio, semantics, physics and interaction with objects and other agents (Brodeur et al., 2017[106]).
By allowing AI systems to learn by trial and error, cloud-based VR simulation would be well suited to training such systems, particularly for critical situations. Continued development in cloud technology would help realise such environments. For example, in October 2017, NVIDIA announced a cloud-based VR simulator that can replicate accurate real-world physics. Developers are expected to create a new training ground for AI systems within a few years (Solotko, 2017[107]).
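The trial-and-error learning in a simulated environment that this section describes can be illustrated with a toy example: tabular Q-learning in a five-cell “corridor” world standing in for a full simulator. Every detail (states, reward, hyperparameters) is invented for illustration; real systems use far richer simulators and learning algorithms:

```python
import random

random.seed(0)

# A minimal simulated environment: a corridor of 5 cells. The agent starts at
# cell 0 and earns a reward only by reaching the goal at cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    """Advance the simulation one tick and return (next_state, reward)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# Tabular Q-learning: learn action values purely by trial and error in the simulator
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2
for _ in range(300):  # training episodes
    state = 0
    while state != GOAL:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)  # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        next_state, reward = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy walks straight to the goal
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The point is that all the costly or dangerous experience (here, wandering the corridor; in the text, rare traffic emergencies) is accumulated inside the simulation, and only the learned policy is carried into the real world.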
References
[86] Aletras, N. et al. (2016), “Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective”, PeerJ Computer Science, Vol. 2, p. e93, http://dx.doi.org/10.7717/peerj-cs.93.
[83] Angwin, J. et al. (2016), “Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks”, ProPublica, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[94] Ashby, M. (2017), “The value of CCTV surveillance cameras as an investigative tool: An empirical analysis”, European Journal on Criminal Policy and Research, Vol. 23/3, pp. 441-459, http://dx.doi.org/10.1007/s10610-017-9341-6.
[74] Barocas, S. and A. Selbst (2016), “Big data’s disparate impact”, California Law Review, Vol. 104, pp. 671-729, http://www.californialawreview.org/wp-content/uploads/2016/06/2Barocas-Selbst.pdf.
[82] Bavitz, C. and K. Hessekiel (2018), Algorithms and Justice: Examining the Role of the State in the Development and Deployment of Algorithmic Technologies, Berkman Klein Center for Internet and Society, https://cyber.harvard.edu/story/2018-07/algorithms-and-justice.
[32] Berg, T. et al. (2018), “On the rise of FinTechs – Credit scoring using digital footprints”, Michael J. Brennan Irish Finance Working Paper Series Research Paper, No. 18-12, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3163781.
[52] Bloom, N. et al. (2017), “Are ideas getting harder to find?”, NBER Working Paper, No. 23782, http://dx.doi.org/10.3386/w23782.
[3] Bösch, P. et al. (2018), “Cost-based analysis of autonomous mobility services”, Transport Policy, Vol. 64, pp. 76-91, https://doi.org/10.1016/j.tranpol.2017.09.005.
[15] Bose, A. et al. (2016), “The VEICL Act: Safety and security for modern vehicles”, Willamette Law Review, Vol. 53, p. 137.
[76] Brayne, S., A. Rosenblat and D. Boyd (2015), “Predictive policing, data & civil rights: A new era of policing and justice”, Pennsylvania Law Review, Vol. 163/327, http://www.datacivilrights.org/pubs/2015-1027/Predictive_Policing.pdf.
[106] Brodeur, S. et al. (2017), “HoME: A household multimodal environment”, arXiv 1711.11017, https://arxiv.org/abs/1711.11017.
[50] Brodmerkel, S. (2017), “Dynamic pricing: Retailers using artificial intelligence to predict top price you’ll pay”, ABC News, 27 June, http://www.abc.net.au/news/2017-06-27/dynamic-pricing-retailers-using-artificial-intelligence/8638340.
[108] Brundage, M. et al. (2018), The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Future of Humanity Institute, University of Oxford, Centre for the Study of Existential Risk, University of Cambridge, Centre for a New American Security, Electronic Frontier Foundation and Open AI, https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf.
[79] Burgess, M. (2018), “UK police are using AI to make custodial decisions but it could be discriminating against the poor”, WIRED, 1 March, http://www.wired.co.uk/article/police-ai-uk-durham-hart-checkpoint-algorithm-edit.
[60] Butler, K. et al. (2018), “Machine learning for molecular and materials science”, Nature, Vol. 559/7715, pp. 547-555, http://dx.doi.org/10.1038/s41586-018-0337-2.
[63] Canadian Institute for Health Information (2013), “Better information for improved health: A vision for health system use of data in Canada”, in collaboration with Canada Health Infoway, http://www.cihi.ca/cihi-ext-portal/pdf/internet/hsu_vision_report_en.
[7] Carey, N. and P. Lienert (2018), “Honda to invest $2.75 billion in GM’s self-driving car unit”, Reuters, 3 October, https://www.reuters.com/article/us-gm-autonomous/honda-buys-in-to-gm-cruise-self-driving-unit-idUSKCN1MD1GW.
[37] CFPB (2017), “CFPB explores impact of alternative data on credit access for consumers who are credit invisible”, Consumer Financial Protection Bureau, https://www.consumerfinance.gov/about-us/newsroom/cfpb-explores-impact-alternative-data-credit-access-consumers-who-are-credit-invisible/.
[44] Cheng, E. (2017), “Just 10% of trading is regular stock picking, JPMorgan estimates”, CNBC, 13 June, https://www.cnbc.com/2017/06/13/death-of-the-human-investor-just-10-percent-of-trading-is-regular-stock-picking-jpmorgan-estimates.html.
[41] Chintamaneni, P. (26 June 2017), How banks can use AI to reduce regulatory compliance burdens, digitally.cognizant blog, https://digitally.cognizant.com/how-banks-can-use-ai-to-reduce-regulatory-compliance-burdens-codex2710/.
[45] Chow, M. (2017), “AI and machine learning get us one step closer to relevance at scale”, Google, https://www.thinkwithgoogle.com/marketing-resources/ai-personalized-marketing/.
[81] Christin, A., A. Rosenblat and D. Boyd (2015), Courts and Predictive Algorithms, presentation at the "Data & Civil Rights, A New Era of Policing and Justice" conference, Washington, 27 October, http://www.law.nyu.edu/sites/default/files/upload_documents/Angele%20Christin.pdf.
[96] Civardi, C. (2017), Video Surveillance and Artificial Intelligence: Can A.I. Fill the Growing Gap Between Video Surveillance Usage and Human Resources Availability?, Balzano Informatik, http://dx.doi.org/10.13140/RG.2.2.13330.66248.
[8] CMU (2015), “Uber, Carnegie Mellon announce strategic partnership and creation of advanced technologies center in Pittsburgh”, Carnegie Mellon University News, 2 February, https://www.cmu.edu/news/stories/archives/2015/february/uber-partnership.html.
[64] Comfort, S. et al. (2018), “Sorting through the safety data haystack: Using machine learning to identify individual case safety reports in social-digital media”, Drug Safety, Vol. 41/6, pp. 579-590, https://www.ncbi.nlm.nih.gov/pubmed/29446035.
[22] Cooke, A. (2017), Digital Earth Australia, presentation at the "AI: Intelligent Machines, Smart Policies" conference, Paris, 26-27 October, http://www.oecd.org/going-digital/ai-intelligent-machines-smart-policies/conference-agenda/ai-intelligent-machines-smart-policies-cooke.pdf.
[51] De Jesus, A. (2018), “Augmented reality shopping and artificial intelligence – Near-term applications”, Emerj, 18 December, https://www.techemergence.com/augmented-reality-shopping-and-artificial-intelligence/.
[2] Fagnant, D. and K. Kockelman (2015), “Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations”, Transportation Research A: Policy and Practice, Vol. 77, pp. 167-181, https://www.sciencedirect.com/science/article/pii/S0.
[20] FAO (2017), “Can artificial intelligence help improve agricultural productivity?”, e-agriculture, 19 December, http://www.fao.org/e-agriculture/news/can-artificial-intelligence-help-improve-agricultural-productivity.
[23] FAO (2009), How to Feed the World in 2050, Food and Agriculture Organization of the United Nations, Rome, http://www.fao.org/fileadmin/templates/wsfs/docs/expert_paper/How_to_Feed_the_World_in_2050.pdf.
[66] FDA (2018), FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems, Food and Drug Administration, News Release, 11 April, https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm604357.htm.
[75] Ferguson, A. (2014), “Big Data and Predictive Reasonable Suspicion”, SSRN Electronic Journal, http://dx.doi.org/10.2139/ssrn.2394683.
[84] Flores, A., K. Bechtel and C. Lowenkamp (2016), “False positives, false negatives, and false analyses: A rejoinder to ‘Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks’”, Federal Probation, Vol. 80.
[17] Fridman, L. (8 October 2018), “Tesla autopilot miles”, MIT Human-Centered AI blog, https://hcai.mit.edu/tesla-autopilot-miles/.
[18] Fridman, L. et al. (2018), “MIT autonomous vehicle technology study: Large-scale deep learning based analysis of driver behavior and interaction with automation”, arXiv, Vol. 30/September, https://arxiv.org/pdf/1711.06976.pdf.
[31] FSB (2017), Artificial Intelligence and Machine Learning in Financial Services: Market Developments and Financial Stability Implications, Financial Stability Board, Basel.
[93] FT (2018), “US and China back AI bug-detecting projects”, Financial Times, Cyber Security and Artificial Intelligence, 28 September, https://www.ft.com/content/64fef986-89d0-11e8-affd-da9960227309.
[90] Gartner (2017), “Gartner’s worldwide security spending forecast”, Gartner, Press Release, 7 December, https://www.gartner.com/newsroom/id/3836563.
[36] Gordon, M. and V. Stewart (2017), “CFPB insights on alternative data use on credit scoring”, Law 360, 3 May, https://www.law360.com/articles/919094/cfpb-insights-on-alternative-data-use-in-credit-scoring.
[99] Günger, C. and K. Zengin (2017), A Survey on Augmented Reality Applications using Deep Learning, https://www.researchgate.net/publication/322332639_A_Survey_On_Augmented_Reality_Applications_Using_Deep_Learning.
[102] Hall, N. (2017), “8 ways AI makes virtual & augmented reality even more real,”, Topbots, 13 May, https://www.topbots.com/8-ways-ai-enables-realistic-virtual-augmented-reality-vr-ar/.
[6] Higgins, T. and C. Dawson (2018), “Waymo orders up to 20,000 Jaguar SUVs for driverless fleet”, The Wall Street Journal, 27 March, https://www.wsj.com/articles/waymo-orders-up-to-20-000-jaguar-suvs-for-driverless-fleet-1522159944.
[46] Hinds, R. (2018), How Natural Language Processing is shaping the Future of Communication, MarTechSeries, Marketing Technology Insights, 5 February, https://martechseries.com/mts-insights/guest-authors/how-natural-language-processing-is-shaping-the-future-of-communication/.
[48] Hong, P. (27 August 2017), “Using machine learning to boost click-through rate for your ads”, LinkedIn blog, https://www.linkedin.com/pulse/using-machine-learning-boost-click-through-rate-your-ads-tay/.
[73] Horgan, J. (2008), “Against prediction: Profiling, policing, and punishing in an actuarial age – by Bernard E. Harcourt”, Review of Policy Research, Vol. 25/3, pp. 281-282, http://dx.doi.org/10.1111/j.1541-1338.2008.00328.x.
[87] Hutson, M. (2017), “Artificial intelligence prevails at predicting Supreme Court decisions”, Science Magazine, 2 May, http://www.sciencemag.org/news/2017/05/artificial-intelligence-prevails-predicting-supreme-court-decisions.
[61] Hu, X. (ed.) (2017), “Human-in-the-loop Bayesian optimization of wearable device parameters”, PLOS ONE, Vol. 12/9, p. e0184054, http://dx.doi.org/10.1371/journal.pone.0184054.
[89] IHS (2017), “Global defence spending to hit post-Cold War high in 2018”, IHS Markit, 18 December, https://ihsmarkit.com/research-analysis/global-defence-spending-to-hit-post-cold-war-high-in-2018.html.
[14] Inners, M. and A. Kun (2017), Beyond Liability: Legal Issues of Human-Machine Interaction for Automated Vehicles, Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, September, pp. 245-253, http://dx.doi.org/10.1145/3122986.3123005.
[91] ISACA (2016), The State of Cybersecurity: Implications for 2016, An ISACA and RSA Conference Survey, Cybersecurity Nexus, https://www.isaca.org/cyber/Documents/state-of-cybersecurity_res_eng_0316.pdf.
[12] ITF (2018), Safer Roads with Automated Vehicles?, International Transport Forum, https://www.itf-oecd.org/sites/default/files/docs/safer-roads-automated-vehicles.pdf.
[30] Jagtiani, J. and C. Lemieux (2019), “The roles of alternative data and machine learning in Fintech lending: Evidence from the LendingClub Consumer Platform”, Working Paper, No. 18-15, Federal Reserve Bank of Philadelphia, http://dx.doi.org/10.21799/frbp.wp.2018.15.
[95] Jenkins, N. (2015), “245 million video surveillance cameras installed globally in 2014”, IHS Markit, Market Insight, 11 June, https://technology.ihs.com/532501/245-million-video-surveillance-cameras-installed-globally-in-2014.
[100] Jesus, A. (2018), “Augmented reality shopping and artificial intelligence – near-term applications”, Emerj, 12 December, https://www.techemergence.com/augmented-reality-shopping-and-artificial-intelligence/.
[77] Joh, E. (2017), “The undue influence of surveillance technology companies on policing”, New York University Law Review, Vol. 91/101, http://dx.doi.org/10.2139/ssrn.2924620.
[24] Jouanjean, M. (2019), “Digital opportunities for trade in the agriculture and food sectors”, OECD Food, Agriculture and Fisheries Papers, No. 122, OECD Publishing, Paris, https://doi.org/10.1787/91c40e07-en.
[80] Kehl, D., P. Guo and S. Kessler (2017), Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessment in Sentencing, Responsive Communities Initiative, Responsive Communities Initiative, Berkman Klein Center for Internet & Society, Harvard Law School.
[103] Kilpatrick, S. (2018), “The rising force of deep learning in VR and AR”, Logik, 28 March, https://www.logikk.com/articles/deep-learning-in-vr-and-ar/.
[56] King, R. et al. (2004), “Functional genomic hypothesis generation and experimentation by a robot scientist”, Nature, Vol. 427/6971, pp. 247-252, http://dx.doi.org/10.1038/nature02236.
[85] Kleinberg, J. et al. (2017), “Human decisions and machine predictions”, NBER Working Paper, No. 23180.
[67] Lee, C., D. Baughman and A. Lee (2017), “Deep learning is effective for classifying normal versus age-related macular degeneration OCT images”, Opthamology Retina, Vol. 1/4, pp. 322-327.
[10] Lee, T. (2018), “Fully driverless Waymo taxis are due out this year, alarming critics”, Ars Technica, 1 October, https://arstechnica.com/cars/2018/10/waymo-wont-have-to-prove-its-driverless-taxis-are-safe-before-2018-launch/.
[101] Lindell, T. (2017), “Augmented reality needs AI in order to be effective”, AI Business, 6 November, https://aibusiness.com/holographic-interfaces-augmented-reality/.
[5] Lippert, J. et al. (2018), “Toyota’s vision of autonomous cars is not exactly driverless”, Bloomberg Business Week, 19 September, https://www.bloomberg.com/news/features/2018-09-19/toyota-s-vision-of-autonomous-cars-is-not-exactly-driverless.
[88] Marr, B. (2018), “How AI and machine learning are transforming law firms and the legal sector”, Forbes, 23 May, https://www.forbes.com/sites/bernardmarr/2018/05/23/how-ai-and-machine-learning-are-transforming-law-firms-and-the-legal-sector/#7587475832c3.
[98] Marr, B. (2018), “How the UK government uses artificial intelligence to identify welfare and state benefits fraud”, Forbes, 29 October, https://www.forbes.com/sites/bernardmarr/2018/10/29/how-the-uk-government-uses-artificial-intelligence-to-identify-welfare-and-state-benefits-fraud/#f5283c940cb9.
[68] Mar, V. and H. Soyer (2018), “Artificial intelligence for melanoma diagnosis: How can we deliver on the promise?”, Annals of Oncology, Vol. 29/8, pp. 1625-1628, http://dx.doi.org/10.1093/annonc/mdy193.
[55] Matchar, E. (2017), “AI plant and animal identification helps us all be citizen scientists”, Smithsonian.com, 7 June, https://www.smithsonianmag.com/innovation/ai-plant-and-animal-identification-helps-us-all-be-citizen-scientists-180963525/.
[78] Mateescu, A. et al. (2015), Social Media Surveillance and Law Enforcement, New Era of Criminal Justice and Policing, Data Civil Rights, http://www.datacivilrights.org/pubs/2015-1027/Social_Media_Surveillance_and_Law_Enforce.
[97] Medhi, S., B. Scholkopf and M. Hirsch (2017), “EnhanceNet: Single image super-resolution through automated texture synthesis”, arXiv 1612.07919, https://arxiv.org/abs/1612.07919.
[92] MIT (2018), “Cybersecurity’s insidious new threat: Workforce stress”, MIT Technology Review, 7 August, https://www.technologyreview.com/s/611727/cybersecuritys-insidious-new-threat-workforce-stress/.
[34] O’Dwyer, R. (2018), Algorithms are making the same mistakes assessing credit scores that humans did a century ago, Quartz, 14 May, https://qz.com/1276781/algorithms-are-making-the-same-mistakes-assessing-credit-scores-that-humans-did-a-century-ago/.
[57] OECD (2018), “Artificial intelligence and machine learning in science”, OECD Science, Technology and Innovation Outlook 2018: Adapting to Technological and Societal Disruption, No. 5, OECD Publishing, Paris.
[53] OECD (2018), OECD Science, Technology and Innovation Outlook 2018: Adapting to Technological and Societal Disruption, OECD Publishing, Paris, https://dx.doi.org/10.1787/sti_in_outlook-2018-en.
[109] OECD (2018), “Personalised pricing in the digital era – Note by the United Kingdom”, Key paper for the joint meeting of the OECD Consumer Protection and Competition committees, OECD, Paris, http://www.oecd.org/daf/competition/personalised-pricing-in-the-digital-era.htm.
[1] OECD (2018), Structural Analysis Database (STAN), Rev. 4, Divisions 49 to 53, http://www.oecd.org/industry/ind/stanstructuralanalysisdatabase.htm (accessed on 31 January 2018).
[69] OECD (2017), New Health Technologies: Managing Access, Value and Sustainability, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264266438-en.
[26] OECD (2017), OECD Digital Economy Outlook 2017, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264276284-en.
[39] OECD (2017), Technology and Innovation in the Insurance Sector, OECD Publishing, Paris, https://www.oecd.org/finance/Technology-and-innovation-in-the-insurance-sector.pdf (accessed on 28 August 2018).
[71] OECD (2016), Recommendation of the Council on Health Data Governance, OECD, Paris, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0433.
[62] OECD (2015), Data-Driven Innovation: Big Data for Growth and Well-Being, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264229358-en.
[19] OECD (2014), “Skills and Jobs in the Internet Economy”, OECD Digital Economy Papers, No. 242, OECD Publishing, Paris, https://dx.doi.org/10.1787/5jxvbrjm9bns-en.
[4] Ohnsman, A. (2018), “Waymo dramatically expanding autonomous taxi fleet, eyes sales to individuals”, Forbes, 31 May, https://www.forbes.com/sites/alanohnsman/2018/05/31/waymo-adding-up-to-62000-minivans-to-robot-fleet-may-supply-tech-for-fca-models.
[9] ORAD (2016), “Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles”, On-Road Automated Driving (ORAD) Committee, SAE International, http://dx.doi.org/10.4271/j3016_201609.
[70] Ornstein, C. and K. Thomas (2018), Sloan Kettering’s cozy deal with start-up ignites a new uproar, 20 September, https://www.nytimes.com/2018/09/20/health/memorial-sloan-kettering-cancer-paige-ai.html.
[65] Patton, E. (2018), Integrating Artificial Intelligence for Scaling Internet of Things in Health Care, OECD-GCOA-Cornell-Tech Expert Consultation on Growing and Shaping the Internet of Things Wellness and Care Ecosystem, 4-5 October, New York.
[47] Plummer, L. (2017), “This is how Netflix’s top-secret recommendation system works”, WIRED, 22 August, https://www.wired.co.uk/article/how-do-netflixs-algorithms-work-machine-learning-helps-to-predict-what-viewers-will-like.
[29] Press, G. (2017), “Equifax and SAS leverage AI and deep learning to improve consumer access to credit”, Forbes, 20 February, https://www.forbes.com/sites/gilpress/2017/02/20/equifax-and-sas-leverage-ai-and-deep-learning-to-improve-consumer-access-to-credit/2/#2ea15ddd7f69.
[25] Rakestraw, R. (2017), “Can artificial intelligence help feed the world?”, Forbes, 6 September, https://www.forbes.com/sites/themixingbowl/2017/09/05/can-artificial-intelligence-help-feed-the-world/#16bb973646db.
[21] Roeland, C. (2017), EC Perspectives on the Earth Observation, presentation at the "AI: Intelligent Machines, Smart Policies" conference, Paris, 26-27 October, http://www.oecd.org/going-digital/ai-intelligent-machines-smart-policies/conference-agenda/ai-intelligent-machines-smart-policies-roeland.pdf.
[35] Rollet, C. (2018), “The odd reality of life under China’s all-seeing credit score system”, WIRED, 5 June, https://www.wired.co.uk/article/china-blacklist.
[59] Segler, M., M. Preuss and M. Waller (2018), “Planning chemical syntheses with deep neural networks and symbolic AI”, Nature, Vol. 555/7698, pp. 604-610, http://dx.doi.org/10.1038/nature25978.
[28] Simon, M. (2017), “Phone-powered AI spots sick plants with remarkable accuracy”, WIRED, 2 February, https://www.wired.com/story/plant-ai/.
[105] Slusallek, P. (2018), Artificial Intelligence and Digital Reality: Do We Need a CERN for AI?, The Forum Network, OECD, Paris, https://www.oecd-forum.org/channels/722-digitalisation/posts/28452-artificial-intelligence-and-digital-reality-do-we-need-a-cern-for-ai.
[40] Sohangir, S. et al. (2018), “Big data: Deep learning for financial sentiment analysis”, Journal of Big Data, Vol. 5/1, http://dx.doi.org/10.1186/s40537-017-0111-6.
[38] Sokolin, L. and M. Low (2018), Machine Intelligence and Augmented Finance: How Artificial Intelligence Creates $1 Trillion Dollar of Change in the Front, Middle and Back Office, Autonomous Research LLP, https://next.autonomous.com/augmented-finance-machine-intelligence.
[107] Solotko, S. (2017), “Virtual reality is the next training ground for artificial intelligence”, Forbes, 11 October, https://www.forbes.com/sites/tiriasresearch/2017/10/11/virtual-reality-is-the-next-training-ground-for-artificial-intelligence/#6e0c59cc57a5.
[42] Song, H. (2017), “JPMorgan software does in seconds what took lawyers 360,000 hours”, Bloomberg, 28 February, https://www.bloomberg.com/news/articles/2017-02-28/jpmorgan-marshals-an-army-of-developers-to-automate-high-finance.
[54] Spangler, S. et al. (2014), Automated Hypothesis Generation based on Mining Scientific Literature, ACM Press, New York, http://dx.doi.org/10.1145/2623330.2623667.
[104] Stanford (2016), Artificial Intelligence and Life in 2030, AI100 Standing Committee and Study Panel, Stanford University, https://ai100.stanford.edu/2016-report.
[16] Surakitbanharn, C. et al. (2018), Preliminary Ethical, Legal and Social Implications of Connected and Autonomous Transportation Vehicles (CATV), Purdue University, https://www.purdue.edu/discoverypark/ppri/docs/Literature%20Review_CATV.pdf.
[43] Voegeli, V. (2016), “Credit Suisse, CIA-funded Palantir to target rogue bankers”, Bloomberg, 22 March, https://www.bloomberg.com/news/articles/2016-03-22/credit-suisse-cia-funded-palantir-build-joint-compliance-firm.
[49] Waid, B. (2018), “AI-enabled personalization: The new frontier in dynamic pricing”, Forbes, 9 July, https://www.forbes.com/sites/forbestechcouncil/2018/07/09/ai-enabled-personalization-the-new-frontier-in-dynamic-pricing/#71e470b86c1b.
[11] Walker-Smith, B. (2013), “Automated vehicles are probably legal in the United States”, Texas A&M Law Review, Vol. 1/3, pp. 411-521.
[27] Webb, L. (2017), Machine Learning in Action, presentation at the "AI: Intelligent Machines, Smart Policies" conference, Paris, 26-27 October, http://www.oecd.org/going-digital/ai-intelligent-machines-smart-policies/conference-agenda/ai-intelligent-machines-smart-policies-webb.pdf.
[13] Welsch, D. and E. Behrmann (2018), “Who’s winning the self-driving car race?”, Bloomberg, 7 May, https://www.bloomberg.com/news/features/2018-05-07/who-s-winning-the-self-driving-car-race.
[58] Williams, K. et al. (2015), “Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases”, Journal of The Royal Society Interface, Vol. 12/104, p. 20141289, http://dx.doi.org/10.1098/rsif.2014.1289.
[72] Wyllie, D. (2013), “How ‘big data’ is helping law enforcement”, PoliceOne.com, 20 August, https://www.policeone.com/police-products/software/Data-Information-Sharing-Software/articles/6396543-How-Big-Data-is-helping-law-enforcement/.
[33] Zeng, M. (2018), “Alibaba and the future of business”, Harvard Business Review, September-October, https://hbr.org/2018/09/alibaba-and-the-future-of-business.
Notes
← 1. STAN Industrial Analysis, 2018, value added of “Transportation and Storage” services, ISIC Rev. 4 Divisions 49 to 53, as a share of total value added, 2016 unweighted OECD average. The 2016 weighted OECD average was 4.3%.
← 3. In 1989, Fair, Isaac and Company (FICO) introduced the FICO credit score. It is still used by the majority of banks and credit grantors.
← 4. The OECD Committee on Consumer Policy adopted the definition of personalised pricing given by the United Kingdom’s Office of Fair Trading: “Personalised pricing can be defined as a form of price discrimination, in which ‘businesses may use information that is observed, volunteered, inferred, or collected about individuals’ conduct or characteristics, to set different prices to different consumers (whether on an individual or group basis), based on what the business thinks they are willing to pay’” (OECD, 2018[109]). If utilised by vendors, personalised pricing could result in some consumers paying less for a given good or service, while others pay more than they would have done if all consumers were offered the same price.
← 5. This section draws on work by the OECD Committee for Scientific and Technological Policy, particularly Chapter 3 – “Artificial Intelligence and Machine Learning in Science” – of OECD (2018[53]). The main authors of that chapter were Professor Stephen Roberts, of Oxford University, and Professor Ross King, of Manchester University.
← 6. See https://iris.ai/.
← 10. State of Wisconsin v. Loomis, 881 N.W.2d 749 (Wis. 2016).
← 11. Loomis v. Wisconsin, 137 S.Ct. 2290 (2017).
← 12. While the importance of public spending on AI technologies for defence purposes is acknowledged, this area of study falls outside the scope of this publication.
← 13. Unless otherwise specified, throughout this work “digital security” refers to the management of economic and social risks resulting from breaches of availability, integrity and confidentiality of information and communication technologies, and data.
← 14. A phishing attack is a fraudulent attempt to obtain sensitive information from a target through electronic communications that masquerade as coming from a trustworthy source. A more labour-intensive variant is spear phishing, which is tailored to the specific individual or organisation being targeted by collecting and using personal information such as name, gender and affiliation (Brundage et al., 2018[108]). Spear phishing is the most common infection vector: in 2017, 71% of cyberattacks began with spear-phishing emails.
← 15. This has been confirmed by the E-Leaders group, which is part of the OECD Public Governance Committee. Its Thematic Group on Emerging Technologies – made up of representatives from 16 countries – focuses mainly on AI and blockchain.
← 16. BlindTool (https://play.google.com/store/apps/details?id=the.blindtool&hl=en) and Seeing AI (https://www.microsoft.com/en-us/seeing-ai) are examples of such applications.