OECD Business and Finance Outlook 2021
3. Human rights due diligence through responsible AI
Abstract
While the benefits and opportunities of AI seem boundless, certain applications of AI risk causing intentional or unintentional harms. It is critical to ground conversations on AI development in international standards on responsible business conduct, a foundation of sustainable economic development. International standards set out recommendations to help companies identify and address the negative impacts their operations and products may have on people and the environment. This chapter focuses on potential human rights impacts of AI and how companies developing and using AI can apply OECD guidance on human rights due diligence. It also examines how existing legislation, both on human rights and on AI, deals with this issue.
3.1. Introduction
The ability of AI to quickly analyse enormous amounts of data, recognise patterns, build upon existing knowledge, and build predictive models makes it an invaluable tool for economic and social development. AI is being applied, for example, in healthcare for drug development, patient monitoring, and epidemiology; in law enforcement to detect financial crime, combat kidnapping and human trafficking, identify situations of bonded or child labour, and analyse crime scenes; and in local government administration to improve welfare distribution, predict infrastructure maintenance requirements and direct traffic flows to reduce road congestion. While the benefits and opportunities of AI seem boundless, it is critical to ground conversations in international standards on responsible business conduct (RBC), a foundation of sustainable economic development, as well as in the OECD AI Principle on “robustness, security and safety”. To maximise the positive impact of AI, companies and governments also need to understand and prevent the risks of harm that the technology can cause.
Key message
To maximise the positive impact of AI, companies and governments also need to understand and prevent risks of harm. The OECD Guidelines for Multinational Enterprises and the OECD Due Diligence Guidance for Responsible Business Conduct provide government-backed frameworks, aligned with international standards on business and human rights, that companies can implement to better identify and address risks of harm.
This chapter broadly lays out how certain applications of AI risk causing intentional or unintentional harms, and how companies developing, selling and using AI can apply OECD recommendations to help prevent and mitigate negative impacts. The chapter also summarises current national, international, business-led, and multi-stakeholder initiatives helping to tackle some of these issues.
Since its inception, the OECD has been committed to utilising the power of international business and new technology as a driving force for sustainable economic, environmental and social development. In parallel to acknowledging and encouraging this, the OECD also recognises that business activities can result in adverse impacts related to workers, human rights, the environment, bribery, consumers and corporate governance. This is why the OECD Guidelines for Multinational Enterprises (the OECD Guidelines) were first adopted in 1976.1
The OECD Guidelines go beyond the traditional, philanthropic Corporate Social Responsibility (CSR) approach by setting out government-backed recommendations for business to proactively address potential harms they may cause, contribute to, or be directly linked to. The OECD Guidelines specifically recommend that companies carry out due diligence to identify and address any adverse impacts associated with their operations, their supply chains or other business relationships.
On technology specifically, the OECD Guidelines call on companies to support science and technological innovation in the countries where they operate.2 Companies are encouraged to do this through establishing partnerships with local research institutions (such as universities), hiring and training local staff to work with new technologies and to sell or license new tech on reasonable terms and with due consideration to the long term development effects on the host country.
The OECD Guidelines are also a commitment by governments to provide an enabling environment for RBC. Governments can enable RBC in several ways, including:
Regulating – establishing and enforcing an adequate legal framework that protects the public interest and underpins RBC, and monitoring business performance and compliance with regulatory frameworks;
Facilitating – clearly communicating expectations on what constitutes RBC, providing guidance with respect to specific practices and enabling companies to meet those expectations; and
Co-operating – working with stakeholders in the business community, worker organisations, civil society and the general public, across internal government structures, as well as with other governments, to create synergies and establish coherence with regard to RBC.
The OECD Guidelines are aligned with the United Nations Guiding Principles on Business and Human Rights and the International Labour Organisation Tripartite Declaration of Principles Concerning Multinational Enterprises. In addition, certain RBC expectations outlined in the OECD Guidelines (e.g. on addressing environmental degradation in business activities) are also referenced in global frameworks such as the G20 agenda, the Sustainable Development Goals, and the Paris Climate Accord.
3.2. Overview of human rights impacts of AI
AI’s impacts on RBC are manifold, given the technology’s positive and negative potential and its far-reaching effects. Given the potential breadth of its application and use, AI promises to advance the protection and fulfilment of human rights, including by allowing people with disabilities to overcome hurdles to living a more independent life. Examples include using AI to provide more personalised education to individuals with learning disabilities, to assist visually impaired people in navigating electronic devices, and to shed light on discrimination (OECD, 2019[1]). Likewise, as described in Section 3.5 below, AI is being used to support supply chain management in ways that make human rights due diligence far more efficient for companies.
Beyond impacts on financial markets and competitive practices (which also concern RBC, but are covered in more detail in other chapters and other OECD publications), AI has had an observed negative impact on human rights and labour rights across a broad scope of applications. This includes the following use cases, which were selected for illustrative purposes and are not an exhaustive list. It should also be noted that the issues themselves are not mutually exclusive (e.g. applications that could affect rights to privacy may also impact freedom of expression and, in some cases, the right to life and freedom from cruel and degrading treatment).
As billions of smartphones, laptops, cameras, and other devices collect data and analyse it using increasingly powerful and sophisticated software, users of that data are able to build more accurate profiles of individuals that can be monetised, used to track and predict movements and purchases, or ultimately used to manipulate the individual. Much of the privacy-sensitive data analysis, such as search algorithms, recommendation engines, and advertising software, is driven by AI. While existing consumer privacy and consent laws restrict access to some information, AI-powered analysis can still create highly accurate behaviour predictions based on existing publicly available data. As AI improves, it magnifies the ability to exploit personal information in ways that can intrude on privacy rights and other human rights by raising the analysis of personal information to new levels (Kerry, 2020[2]).
AI applications for facial recognition provide a salient example. Drawing from the thousands of images of an individual available on social media and government databases (such as driver’s licenses and passports), AI-powered surveillance cameras can recognise individuals and match their images with broader sets of data on the individual. This type of technology is already being deployed for police use in some countries, and risks being used by authoritarian regimes to oppress political dissidents and minority populations.
A 2016 study found that half of US adults are already in police facial recognition databases across the country (Bedoya, 2016[3]). Though advocates point to the positive aspects of the technology to find missing persons and identify victims of crime, there have also been accusations of misuse, such as targeting political activists for arrest during larger protests against police violence (Vincent, 2020[4]). In other contexts, reports have emerged of surveillance and facial recognition being used to track ethnic minorities based on physical appearance, while keeping records of their movements for search and review (Mozur, 2019[5]).
Owing to concerns over privacy and misuse, multiple major cities in the United States have adopted bans on the technology. California, New Hampshire, and Oregon have all enacted legislation banning the use of facial recognition technology with police body cameras (Kerry, 2020[2]). Following the Black Lives Matter protests in the United States in 2020, IBM, Amazon, and Microsoft restricted or suspended sales of their facial recognition products.
In Europe, the General Data Protection Regulation (GDPR) prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, data concerning health, data concerning a natural person’s sex life or sexual orientation, and the processing of data revealing racial or ethnic origins.4 GDPR rules on AI more broadly are discussed in the section on data protection below (under Section 3.4.1). EU officials had originally considered a blanket ban on facial recognition in public spaces, but after strong opposition from some members have instead left it to member states to impose such bans. Additionally, in April 2021, the European Commission proposed new rules and actions on the development of trustworthy AI, under which facial recognition is considered a high-risk application and is only allowed in specific cases (European Commission, 2021[6]).
Human biases are often present in non-automated systems of reviewing data (e.g. reviewing job applications), and automated AI systems can, in theory, help correct or compensate for some of those biases. However, AI systems can also be intentionally or unintentionally biased themselves. Biases present in AI systems include those that relate to model design (e.g. deciding which variables to consider) and pre-existing biases in the data (e.g. only male applicants in a data pool of CVs). The initial design of a programme may omit important aspects of an issue or reflect the biases of its designers, which can mean that decisions are improperly influenced by data on ethnicity, sex or age when hiring or firing, offering loans, or even in criminal proceedings. The decision-making generated by these systems may be incorrectly perceived as inherently fair and neutral. In a phenomenon sometimes referred to as ‘mathwashing’, using AI-generated numbers to represent complex social realities can make findings seem factual and precise when they are not (European Commission, 2021[7]).
Concerns of discrimination arise when individual variables in algorithms indirectly serve as proxies for protected or undisclosed information such as race, sexual orientation, gender or age. An algorithm may lead users to discriminate against a group which correlates with the proxy variable in question.
The direct impacts of this are obvious when applied to something like filtering job applications. For example, Amazon’s failed experiment with a hiring algorithm replicated the company’s existing, disproportionately male workforce (Dastin, 2018[8]). In that case, computer models were trained to vet job applicants by observing patterns in successful applications submitted to the company over a 10-year period. Most came from men. As a result, the algorithm taught itself to penalise women candidates by correlating less preferable applications with the word “women” when it appeared in phrases such as “women’s college”, “women in business club” or “women’s basketball team”.
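The mechanism can be illustrated with a minimal, hypothetical sketch: a toy text classifier trained on historical hiring decisions that skew male will assign negative weight to tokens that correlate with rejected applications, even though gender is never an explicit feature. The CVs and labels below are invented for illustration; this is not Amazon’s actual system.

```python
# Illustrative sketch only: invented data, not Amazon's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "chess club captain, software engineering internship",          # hired
    "rugby team captain, built distributed systems",                # hired
    "women's chess club captain, software engineering internship",  # rejected
    "women's college, machine learning projects",                   # rejected
]
hired = [1, 1, 0, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The token "women" receives a negative coefficient: the model has
# learned a proxy for gender from biased historical outcomes.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```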
Another striking but less obvious impact is observed in search engines discriminating when providing search results. Due to a number of factors taken into account, search engines rank advertisements of smaller companies registered in less affluent neighbourhoods lower than those of large entities, which may put them at a commercial disadvantage and perpetuate economic inequality (Council of Europe Committee of Experts on Internet Intermediaries, 2018[9]). Different users of a search engine are also provided with different results based on their profiles, resulting in differential pricing. In a related example, a Harvard study found that names associated with black Americans were 25% more likely to return results prompting the searcher to click on a link to search criminal record history, which is certain to have detrimental effects when potential employers, loan officers and others use those search engines (Sweeney, 2013[10]).
When applied to contexts of crime prevention and predictive policing, discriminatory AI decision support can result in serious infringements on other rights such as presumption of innocence, the right to be informed promptly of the cause and nature of an accusation, the right to a fair hearing and the right to defend oneself in person. Such examples include the use of AI systems to support identifying potential terrorists based on content they post online, to determine if an individual poses a flight risk, to suggest the length of a prison sentence or whether an individual should be granted parole (Council of Europe Committee of Experts on Internet Intermediaries, 2018[9]). AI learning in such applications is based on current police databases, which often reflect and reinforce existing racial and cultural biases present in communities. Existing databases may be biased or incomplete, or even when complete, AI systems may fail to apply a presumption of innocence when recommending an action to be taken based on probabilities.
3.2.3. Right to fair trial and due process
When investigators enter a crime scene or initiate an investigation, they are often presented with an enormous amount of very detailed information. AI systems are being used to help investigators analyse and process that information to filter out only the most useful, timely evidence (Baraniuk, 2019[11]). Applications range from analysing thousands of photographs on a phone to reconstructing the faces of murder victims based on small fragments of genetic information. The extreme efficiency that comes with higher computing power can have positive impacts on police ability to resolve crimes and catch suspects. AI is also being used to help law enforcement in the financial sector through what is called ‘SupTech’ or supervisory technology (see Chapter 5). This could help regulators analyse large amounts of financial data to spot risks of fraud, market manipulation or anti-competitive practices.
AI is also being applied in the courts to reduce the burdens on judges and magistrates. For example, the Estonian Ministry of Justice is asking AI firms to design a “robot judge” that could adjudicate small claims disputes of less than €7 000 (Niiler, 2018[12]). In some parts of the United States, AI systems are used to help recommend criminal sentences. This trend is increasing across different countries to handle the notoriously overwhelmed legal dockets of judges, prosecutors and public defenders. Likewise, AI systems are being used to help provide limited forms of legal aid to individuals who might not otherwise be able to afford it (Chouhan, 2019[13]). For example, an app developed by a university student in the United States in 2018 uses AI systems to fight parking tickets by automatically filling in appeal forms based on interactions with users (Walter, 2019[14]). The same technology is being deployed more broadly to help users fill out complicated government applications or to act as a screener for potential clients at law firms.
There is some fear, however, that AI-based decision support systems are inappropriately used and that they are perceived as being more “objective”, even when this is not the case. Questions also arise when legal decisions are made based on algorithms that are difficult (or impossible) to explain (e.g. obtaining an arrest or search warrant). Deep learning algorithms are able to rework the rules on the basis of which they were programmed and may make decisions that are incomprehensible to the AI actors designing and developing them (Floridi, Mittelstadt and Wachter, 2017[15]) (Gasparri, 2019[16]). In criminal law, evidence obtained illegally is inadmissible at trial. If the party against whom evidence is introduced during a trial cannot dispute its accuracy and reliability, the question then arises as to whether evidence gathered through a system that cannot be scrutinised, because of the inaccessibility of the source code or other characteristics of the software, is legally permissible. This also raises broader questions about who bears responsibility for AI decision-making, which some legislation is attempting to address (see the discussion on the European General Data Protection Regulation and the European Commission proposal for an AI regulation in Section 3.4.1 of this chapter) and which is discussed in the OECD AI Principle on transparency and explainability.
Content moderation and content curation are often automated procedures, with AI deciding which content is taken down or to whom it is disseminated. This can be very helpful for managing the massive amounts of information uploaded onto a website, particularly for quickly flagging and removing clearly prohibited content (e.g. child pornography, illegal weapons sales, snuff videos). Questions arise, however, when these features are automated for political content, including extremist views. Without a transparent, clearly explainable decision-making process, arbitrary silencing of views can pose risks of state capture of online platforms that violates freedom of expression under the guise of moderating extremist content or fake news. Google, YouTube, and Facebook have developed automated systems to remove ‘extremist content’, but have not publicly disclosed how their filters work (Menn and Volz, 2016[17]). Reddit has publicly disclosed how its automated moderation system works (see Box 3.3).
This type of application can also present a threat to democracy: AI has already been blamed for creating online echo chambers based on a person's previous online behaviour, displaying only content a person would like to see based on previous personal interactions as well as those of similar users, instead of creating an environment for equally accessible and inclusive public debate. AI is also being used to spread misinformation, either through algorithms designed to push addictive content on users of social media or through the creation of fake content that appears legitimate. For example, AI can be used to create extremely realistic video, audio and images that are false or misleading, known as deepfakes. These can cause individual reputational harm, create financial risks, and challenge free and fair decision making. In the aggregate, this could lead to severe political and social polarisation.
Freedom of association explicitly includes the right to form and join trade unions, and it is here in particular that certain trends in the use of AI give reason for concern. The right to associate with others may come under pressure if AI is used to monitor, control and repress worker engagement. The data processing capabilities of AI, used in combination with new productivity and movement tracking tools, make it possible to increase digital monitoring of workers and workplaces in unprecedented ways.
A glimpse of what is technically possible can be seen in the management of workplaces during the COVID-19 pandemic. To enforce social distancing rules, new “biometric solutions for safer places” were introduced, such as ultrasonic bracelets beeping every time workers came within virus-catching distance of a co-worker, or microchips allowing workers to enter the workplace in a contactless fashion (Aloisi and De Stefano, 2021[18]). Crucially, these tools permit private contact tracing. Increased telework during the pandemic was also accompanied by new types of surveillance software measuring time spent online and the number of keystrokes, as well as software reporting to managers when employees are distracted or when and for how long someone is away from their workstation. AI can allow employers to turn extensive data sets of employee information into detailed behavioural profiles and patterns that can then be used to detect and predict the probability of workers organising themselves (Moore, 2020[19]).
In Autumn 2020, Amazon was reported to be looking to hire two intelligence analysts to be charged with tracking “labour organizing threats” against the company (Palmer, 2020[20]). While these job vacancies were quickly withdrawn after a widespread public outcry, the Amazon-owned Whole Foods company is using technology and data to track and score stores it deems at risk of unionising. In Europe, Amazon’s Intelligence Unit is reportedly also closely monitoring the labour and union-organising activity of its workers, as well as environmentalist and social justice groups on Facebook and Instagram. Intelligence analysts keep close tabs on how many warehouse workers attend union meetings; specific worker dissatisfactions with warehouse conditions, such as excessive workloads; and updates on labour organising activities at warehouses, including the exact date, time, location, the source who reported the action and the number of participants at an event (Gurley and Rose, 2020[21]).
By delivering enhanced information on and tracking of worker activity and possible organising efforts, AI can make it possible for businesses to discourage, interfere with or even restrain workers’ efforts to unionise, thus violating a fundamental labour and human right.
3.3. RBC applied to AI supply chain actors
3.3.1. Six Step OECD Due Diligence Framework
Based on the recommendation in the Guidelines that companies conduct due diligence to identify and address adverse impacts in their own operations and their supply chains, the OECD has developed sector-specific guidance for carrying out supply chain due diligence in minerals, garment and footwear, and agriculture, as well as for institutional investors. Most recently, and most relevantly to the discussion on new technology, the OECD has developed a general OECD Due Diligence Guidance for Responsible Business Conduct (the Due Diligence Guidance) that draws from and builds on the sector-specific guidance but can be applied to all sectors of the economy. The due diligence framework in the Due Diligence Guidance consists of six steps (see Figure 3.2) (OECD, 2018[22]).
Due diligence is a tailored process, so when applied to companies in the AI space it could take a variety of forms depending on the size and location of the company, the type of product it is developing, its position in the value chain, the type of harm caused by its product, who its clients are, and a number of other factors.
Due diligence is also risk-based, meaning the measures that a company takes to conduct due diligence should be commensurate with the severity and likelihood of the adverse impact. When the severity and likelihood of the impact are high, as is presumably the case where the product developed has the capacity to be used in harmful ways, due diligence must be more extensive.
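As a purely illustrative sketch of this risk-based logic, a company might score each identified impact on severity and likelihood and allocate due diligence effort accordingly. The impacts, scales and scoring rule below are invented for illustration and are not prescribed by the Due Diligence Guidance.

```python
# Hypothetical severity/likelihood scoring (1-5 scales) used to rank
# impacts so the most serious receive the most extensive due diligence.
impacts = [
    {"impact": "surveillance product sold to repressive end-user",
     "severity": 5, "likelihood": 4},
    {"impact": "biased training data in a hiring tool",
     "severity": 4, "likelihood": 3},
    {"impact": "chatbot occasionally gives outdated product information",
     "severity": 1, "likelihood": 5},
]

for item in impacts:
    item["priority"] = item["severity"] * item["likelihood"]

# Highest-priority impacts come first and warrant the deepest review.
for item in sorted(impacts, key=lambda i: i["priority"], reverse=True):
    print(f'{item["priority"]:>2}  {item["impact"]}')
```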
Other key principles to keep in mind when applying the due diligence framework are that it is flexible, progressive, consultative and transparent. The expectation on companies is that they initiate and continue the due diligence process; no one expects fully mapped out and impact-free operations and supply chains overnight. Businesses need to make difficult choices about the issues they prioritise, and they need to show progressive improvement over time. The process is also consultative and transparent: stakeholders should be consulted at each step of the due diligence process to ensure that efforts are effective, and companies are expected to publicly report on their due diligence efforts. Figure 3.2 shows that these steps are not mutually exclusive and can all be undertaken simultaneously.
It is important for companies in business relationships with high-risk end-users to keep in mind that the goal of due diligence is not to prevent them from doing business, but rather to promote increased responsible investment and trade, utilising the power of global business to leverage positive change. Global companies working with cutting-edge technology have considerable leverage to address risks, for example by enforcing contractual terms, developing standards to ensure implementation of RBC across their value chains, and collaborating with other actors – such as international and regional organisations, national governments and civil society – to influence vendors and third parties.
The overall due diligence framework could be applied by all companies in the AI space. Companies should note that the Due Diligence Guidance goes into more detail on how exactly the steps can be implemented; however, a more detailed assessment of the technology and thorough consultation with all relevant stakeholders (including companies, government, and civil society) is necessary to develop guidance specific to AI. The examples on how RBC could be applied to AI in this chapter are drawn from the Due Diligence Guidance, engagement with experts and stakeholders, and existing best practice.9
It is also important to note that application of the due diligence framework can assist companies in meeting expectations set out by the OECD Principles on AI10, as each step of the Due Diligence Guidance tracks closely with the five principles.
Table 3.1. Linking the OECD AI Principles and the OECD Due Diligence Guidance
| Values-based OECD AI Principles | OECD Due Diligence Guidance |
|---|---|
| 1. Benefits to people and planet: AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being. | Steps 1 & 3: Companies should embed RBC into policies and management systems in order to ensure that commitments to benefit people and the planet are incorporated in the product’s design, sale, and use. Companies can often most effectively prevent and manage risks of harm from their products by considering opportunities to enhance positive impact and benefit. |
| 2. Human-centred values and fairness: AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society. | Steps 2 & 3: Companies should identify adverse impacts and take steps to mitigate and prevent them, including by establishing safeguards such as whistleblower mechanisms, kill switches and human intervention, and by restricting sales/services to certain customers. |
| 3. Transparency and explainability: There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them. | Step 5: Companies should publicly report on due diligence efforts on a periodic basis, including tracking progress and efforts to expand the risk scope. |
| 4. Robustness, security and safety: AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed. | Steps 2 & 3: All companies in the AI lifecycle (and value chain more broadly) have a responsibility to ensure that negative impacts are addressed in the technology’s development, sale, and use. This includes not only technology companies, but also non-technology companies that use AI, governments, and investors. |
| 5. Accountability: Organisations and individuals developing, deploying or operating AI systems should be held accountable for the proper functioning of those systems throughout their lifecycle in line with the above principles. | Step 6: Companies should provide for or cooperate with remediation mechanisms where appropriate. Numerous judicial and non-judicial mechanisms exist to hold companies accountable and allow for impacts to be remediated. |
Box 3.1. RBC in practice: Establishing a public company policy on human rights
(OECD Due Diligence Guidance Step 1)
Technology companies should implement and disseminate policies on the company’s most significant adverse impacts to align their commitments to the Guidelines, including the commitment to refrain from causing harm and to conduct supply chain due diligence to address harms. As part of this step, companies should incorporate RBC expectations into their engagement with suppliers, customers and other business relationships. Companies should communicate clearly to suppliers and customers that certain uses or unintentional effects of their technology are unacceptable and may have consequences for the commercial relationship. Policies should also be updated on an ongoing basis, taking into account stakeholder views and learnings from the company’s efforts to address risk.
In March 2018, Google announced a contract with the US Department of Defence to work on analysing military drone videos using AI, known as Project Maven. In response, over 4000 Google employees signed a letter calling on Google to cancel Project Maven and to draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology. A dozen employees also quit the company in protest. Following this opposition from employees, in early June 2018, Google announced that it would stop working on Project Maven when its current contract expired.
Since then, Google has published its AI Principles prominently on its website. The Principles are that AI should: “(1) Be socially beneficial, (2) Avoid creating or reinforcing unfair bias, (3) Be built and tested safely, (4) Be accountable to people, (5) Incorporate privacy design principles, (6) Uphold high standards of scientific excellence, (7) Be made available for uses that accord with these principles.” And that Google will not pursue: “Technologies that cause or are likely to cause overall harm. (Subject to risk/benefit analysis.) Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people. Technologies that gather or use information for surveillance violating internationally accepted norms. Technologies whose purpose contravenes widely accepted principles of international law and human rights.”
The Guidelines and related due diligence guidance aim to foster responsible business conduct in all sectors, even those which by nature are considered high-risk. The Guidelines do not necessarily suggest that companies disengage from high-risk activities, such as those in the defence sector. Instead, companies should seek to design strategies appropriate to their own risk appetite, with enhanced due diligence to identify and prevent or mitigate human rights risks, prioritising actual or potential harms based on their severity. In this regard, the RBC principles of transparency and stakeholder engagement are particularly important.
Notwithstanding Google’s decision to disengage from this project, this anecdote serves as a strong example of a company applying the RBC approach with regards to stakeholder engagement and setting a public policy. Google responded to stakeholder feedback (in this case, the letter from the 4000 employees), made a decision to alter their approach to certain government contracts, and developed a clear public policy based on that stakeholder engagement, which will provide greater accountability to future undertakings.
Note: The Electronic Frontier Foundation (EFF), a watchdog on civil liberties issues relating to online platforms, publishes a helpful annual summary of public commitments to various human rights issues by the biggest online platforms: https://www.eff.org/wp/who-has-your-back-2019#transparent-about-legal-takedown-requests
Source: Coldewey, Devin (2018), “Google’s new ‘AI principles’ forbid its use in weapons and human rights violations,” TechCrunch, https://techcrunch.com/2018/06/07/googles-new-ai-principles-forbid-its-use-in-weapons-and-human-rights-violations/?_guc_consent_skip=1615197403 ; Google’s AI Principles, https://ai.google/responsibilities/.
3.3.2. Roles/responsibilities of different supply chain actors
Unlike a standard physical product, where the relationship between manufacturers, retailers and consumers is linear, the AI landscape features significant overlap and exchange between developers, vendors and end-users. Indeed, the assignment of responsibility to the different AI actors who develop, sell, or deploy different technologies is still an open question. The OECD defines AI actors as “those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI” (OECD, 2019[1]). Companies in this space should keep in mind that, given the broad definition of a ‘business relationship’ under the OECD Guidelines, these relationships commence prior to the execution of legally binding contracts (OECD, 2021, forthcoming[23]).
All supply chain actors are expected to carry out a broad scoping exercise to identify where RBC risks are most likely to be present and most significant. The scoping exercise should enable the company to carry out an initial prioritisation of the most significant risk areas for further assessment.
For AI, this exercise will consist of two primary elements: (1) a mapping of relevant business relationships (see Figure 3.2); and (2) a risk assessment of the product or service in question to determine the potential for misuse or negative side effects. As with all steps in the due diligence process, stakeholders such as civil society groups, representatives of potentially affected groups and communities, workers’ unions and independent experts should be consulted on both of these elements in order to gain a more complete understanding of the risks.
Developers: Including key actors involved in data collection & processing, planning & design, model building & interpretation
AI products are created by developers through the following process:11
Ideation: Identifying a problem and priorities for the AI
Data Gathering: Selecting the appropriate data set for the AI to learn from
Method Selection: Selecting the method for teaching the AI what to do with the data
Performance Testing: Testing the AI’s ability to perform the task it was taught
While due diligence should cover all stages of the product lifecycle, companies have the greatest opportunity to address risks during the development of AI technologies. By applying a “human rights by design” strategy, developers can prevent or mitigate potential risks at every step of development, so it is critical to map out the relevant actors and get them involved in the process early.
Developer due diligence could include asking questions such as:
Who will likely use the product and for what purpose?
Is the system robust, secure and safe? Is there the potential for misuse, poor handling or lack of enforcement of respective rules and standards?
Is there a chance that vulnerable groups will be especially impacted by the use of the technology?
Are appropriate safeguards in place to prevent negative impacts?
Are processes and decisions made during the AI system lifecycle explainable and transparent?
Vendors
Once a product is developed, it is sold by vendors to end-users, who deploy and operate the technology. It is the responsibility of the vendor to conduct due diligence at the point of sale on the risks associated with the use of the product. Importantly, many AI developers also sell their own products. Other companies develop AI products that are distributed by their partners or by third-party retailers. Developers may also be contracted directly to create specific products and, as such, take on extended due diligence responsibilities. Vendors should review credible reports on the human rights record of the recipient or any history of misuse of their products.
Vendor due diligence could include asking questions such as:
Was the product designed and assembled according to RBC standards?
Is the product being sold directly to the end-user or to another distributor?
Does the product come with an end-user agreement or training on AI limitations?
When end users are government agencies or government contractors, particularly militaries or private military and security companies, they present higher risks of the technology being used for harm and therefore require more stringent due diligence. Due diligence prior to sale is especially important in this circumstance because of the scale of the potential harm and the potential lack of leverage to drive positive change after the sale. The United States Department of State has developed detailed guidance on conducting due diligence when selling technology with surveillance capabilities to foreign governments. Despite the narrow product scope of that guidance, many of its recommendations can be extended to AI vendors.
Additional due diligence questions when selling to / contracting with a government include:
Are there laws and oversight in the recipient country allowing for (or preventing) abuse of the technology (e.g. counter terrorism laws that unduly restrict freedom of expression or allow for arbitrary surveillance)?
Are there data localisation requirements that may result in violation of privacy or other human rights in the country where the data is stored?
Is the government involved in an on-going conflict where the technology can potentially be deployed?
Is the government involved in on-going human rights abuses against protected groups?
If the end-user is not the government, does the government have effective control over the end-user, opening the door for potential misuse of the technology or access to data?
Can licenses be revoked or the product be disabled if misuse occurs?
Does the product have a dual-use that is harmful?
End Users
End users can be anyone: governments, government contractors, other companies, or civil society organisations. For many AI technologies that are licensed to end users, developers have the ability to monitor the product, creating opportunities for human rights due diligence directly between the developer and the end user. For example, developers and vendors can limit licensing renewals with end users.
End user due diligence could include asking questions such as:
Was the product accompanied by guidance or training on its limitations?
Has the product been altered in any way that may increase its potential RBC risks through resale channels?
Companies in the AI lifecycle should also assess whether the capabilities of their products or services may cause, contribute to or be directly linked to an adverse impact. This assessment should take into account the design, development, marketing, sale, licensing and deployment of their products and services. AI-based products and services could potentially be linked to adverse impacts in a variety of ways.
Box 3.2. Investor due diligence and leverage to drive responsible AI development
In recent years, investors and financial institutions have become a major driving force for the uptake of due diligence expectations in the companies they lend to. The volume of “responsible” or “sustainable” financial products and strategies has grown exponentially in the past 10 years, driven largely by increased demand from beneficiaries and policy signals that the financial sector should be a driver in achieving global sustainability agendas.
Despite widespread funding and massive amounts of money raised for AI start-ups, the sector seems to be dominated by a relatively small number of venture capital funds. In 2019, the International Finance Corporation noted that AI start-ups in the United States raised $4.4 billion from 155 investments, while Chinese start-ups raised $4.9 billion from 19 investments (Xiaomin, 2019[24]). Together, US-based and Chinese start-ups represented over 80% of the monetary value of VC investments in AI start-ups in 2020. This compares to 72% of VC investments the two countries represented across all sectors (OECD, 2021, forthcoming[25]). If these funds were to incorporate human rights due diligence requirements as a condition for financing, it could have a significant impact on future AI development. To that end, the OECD developed a framework for financial institutions to identify, respond to and publicly communicate on environmental and social risks associated with their clients (OECD, 2019[26]).
There have already been high-profile cases and backlash against large banks for their role in financing certain technology used in human rights abuses. For example, a Swiss bank is the subject of a complaint before an OECD National Contact Point (NCP) over its alleged failure to observe the Guidelines’ human rights due diligence recommendations in its relationship with a Chinese surveillance technology firm (OECD, 2021[27]). Similarly, a large financial institution was reported to have sold its loan to an Israeli surveillance firm at a loss following reports of the firm’s development and sale of technology allegedly used to spy on journalists and human rights defenders (Smith, 2019[28]).
3.3.3. Risk prevention/mitigation at different stages of the AI lifecycle
Based on the initial scoping and risk assessment, companies should act to stop, prevent or mitigate the impact(s) identified. This involves developing and implementing plans that are fit for purpose. All impacts are expected to be addressed, with the most severe impacts taking priority. Stakeholders should be meaningfully involved in planning, enacting and monitoring impact prevention and mitigation efforts. Prevention and mitigation can take place at the design phase, as the product is being developed; at the procurement or sale phase; and after the product has already been sold. With customers, companies can mitigate potential impacts through contractual and procedural safeguards and strong grievance mechanisms.
At the design phase:
A growing community of researchers has called on developers to consider explainability (XAI) and fairness, accountability, and transparency in machine learning (FATML) when developing products. Developers should strive to develop technology whose decision-making outcomes are readily interpretable (Council of Europe Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence, 2019[29]). Similarly, techniques should be developed to identify and overcome problems of bias and discrimination arising from the use of data mining and other machine learning techniques (known as ‘discrimination-aware’ or ‘fairness-aware’ machine learning techniques). This is also reflected in the OECD AI Principles, which state that “AI actors should…provide meaningful information…to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.”12
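One simple fairness-aware check, shown here as a minimal sketch rather than a complete audit, is to compare a model’s positive-outcome rates across demographic groups (often called the demographic parity difference); large gaps flag the need for deeper review of the data and model design.

```python
import numpy as np

def demographic_parity_difference(predictions, group):
    """Absolute gap in positive-outcome rates between two groups.

    predictions: 0/1 model decisions (e.g. 1 = loan approved)
    group: 0/1 labels for a protected attribute
    """
    preds, grp = np.asarray(predictions), np.asarray(group)
    return abs(preds[grp == 0].mean() - preds[grp == 1].mean())

# Invented toy data: approval rates of 75% vs 25% yield a gap of 0.5,
# a clear red flag for the product team to investigate.
print(demographic_parity_difference([1, 1, 1, 0, 1, 0, 0, 0],
                                    [0, 0, 0, 0, 1, 1, 1, 1]))
```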
Companies should also develop secure, accessible, and responsive communications channels and grievance mechanisms for both internal and external actors to report possible misuse of products or services.
At the contracting phase:
Contractual safeguards include, for example, end-use and end-user limitations, reserving the seller’s right to terminate access to technology, and denying software updates, training, and other services.
After sale:
Technology companies tend to have a unique, on-going relationship with their customers that companies in other sectors lack (e.g. customer support, software updates, maintaining networks, etc.). This provides them with very strong leverage should they identify misuse of their products or unintended side effects that should be stopped or mitigated. Actions include the following (illustrated in the sketch after the list):
Tracking/Monitoring of product use
Alerts of misuse
Kill switches on certain features of the product (i.e. a way to rapidly shut down or disable those features or the entire product)
Limiting customer support / updates
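A minimal sketch of one such measure, assuming a hypothetical vendor-controlled licensing service: a sensitive feature checks the customer’s licence status before activating, so the vendor can suspend access if misuse is identified after sale.

```python
# Hypothetical remotely revocable feature gate (a simple "kill switch").
# In practice the status would come from a vendor-controlled licensing
# service rather than a hard-coded dictionary.
LICENSE_STATUS = {
    "customer-123": "active",
    "customer-456": "suspended",  # e.g. misuse identified after sale
}

def feature_enabled(customer_id: str) -> bool:
    """Enable a sensitive feature only while the customer's licence is active."""
    return LICENSE_STATUS.get(customer_id) == "active"

print(feature_enabled("customer-123"))  # True: licence is active
print(feature_enabled("customer-456"))  # False: vendor suspended the licence
```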
AI research and the use of AI technology are currently under-regulated, and little is disclosed about internal whistleblowing mechanisms available to hold AI developers and users accountable, though this is increasingly changing (Katyal, 2018[30]). One of the central obstacles is the clash between human rights and the right of a business not to disclose proprietary information about an algorithm. This will increasingly become a problem as government agencies procure data or AI services from companies.
Again, the practical implications of RBC considerations in the context of the use of a technology product or service would be useful to explore and elaborate through multi-stakeholder processes. At a minimum, companies should explore all possible avenues to mitigate any new, unintended human rights risk and engage human rights experts and affected stakeholders when deliberating on dilemmas. Table 3.2 provides examples of specific mitigation actions depending on the type of harm.
Table 3.2. Risk mitigation based on the type of harm caused by AI
| Type of harm | Examples | Illustrative risk mitigation options |
|---|---|---|
| Purposeful “harm by design” | Deepfake video designed to harm an individual’s reputation and right to privacy | Update company policies against developing this type of technology; invest in detection technology |
| Harm caused by inherent “side effects” | Biased police data leading to discrimination | Engagement with civil rights groups during product development and roll-out; transparent grievance mechanism; public oversight |
| | Social media algorithm promoting hate speech or false information | Policies that balance supporting freedom of expression with responsible content moderation; transparent grievance mechanisms to inform users that content they uploaded has been filtered out or blocked and to lodge complaints if they do not agree with the assessment of the filtering system; public reporting on the removal process and aggregate data |
| Harm caused by failure rates | Biased inputs leading to biased outcomes in hiring processes | Ensuring a balanced dataset; improving product validation and verification processes; engagement with civil rights groups during product development and roll-out |
| Harm caused by intentional misuse | AI-powered surveillance technology | Restrict sale and product support to certain governments; public oversight over how the technology is deployed |
Source: Adapted from OECD (2019) briefing paper on RBC & AI, https://mneguidelines.oecd.org/RBC-and-artificial-intelligence.pdf.
Box 3.3. RBC in practice - transparency and accountability
(OECD Due Diligence Guidance Steps 4 & 5)
The Guidance recommends that companies publicly report on the outcomes of their due diligence and on why the company is making certain decisions. Public reporting should include public policies, risk identification results, and a description of risk mitigation and tracking efforts. The flexible, transparent approach of this framework can help MNEs in particular overcome the lack of an internationally agreed view on many of the human rights concerns facing the technology sector (e.g. privacy, data ownership, and free speech).
In 2020, twelve of the biggest online platforms endorsed the Santa Clara Principles, which call for transparency from social media companies: publishing the number of removed posts, notifying users of content removal, and providing opportunities for meaningful and timely appeals. According to the Electronic Frontier Foundation, only one site, Reddit, has fully implemented the principles on its platform. Reddit annually publishes data on content that was removed, accounts that were suspended, and legal requests received from third parties to remove content or disclose private user data. This includes information on which reports were removed by human users and which were filtered by AI-based “AutoMods”, or automatic moderators. Reddit also publicly discloses how the AutoMods work and the reasoning behind their programming.
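As a loose illustration of how rule-based automated moderation of this kind works, the sketch below applies invented rules to a post and records which rule fired, so the decision can be disclosed and explained to the affected user. It is a generic example, not Reddit’s actual AutoMod configuration.

```python
import re

# Invented moderation rules, illustrating the approach only.
RULES = [
    {"name": "spam", "pattern": r"(buy now|free crypto)", "action": "remove"},
    {"name": "doxxing", "pattern": r"\b\d{3}-\d{2}-\d{4}\b", "action": "remove_and_report"},
]

def moderate(post_text: str):
    """Return the actions triggered by a post, with the matching rule name
    attached so the removal can be explained in transparency reporting."""
    return [(r["name"], r["action"]) for r in RULES
            if re.search(r["pattern"], post_text, flags=re.IGNORECASE)]

print(moderate("FREE CRYPTO here, buy now!"))  # [('spam', 'remove')]
```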
According to Reddit’s Transparency Report, 99.76% of AutoMod removals are spam. Of the remainder, roughly one quarter are posts containing minor sexualisation, hateful content and harassment; 13.31% are violent content; 13% are involuntary pornography; and the rest involve the sale or promotion of prohibited goods or personally identifiable information.
Key to this process, and in line with the expectations of the Guidance, is that Reddit also tracks progress made to reduce harmful content and updates its rules in order to improve. Data from its report show a decrease in toxic comments per day from 11% to roughly 8% following the implementation of a ban wave.
Monitoring can be done by carrying out internal or third-party reviews or audits, as well as periodic assessments, to ensure that risk mitigation measures are being pursued or to assess the effectiveness of those measures. Many legislative proposals contain some general accountability requirements to ensure companies comply with their privacy programs, and some include self-audits or third-party audits. Paired with risk assessments and mitigation, auditing outcomes of algorithmic decision-making can help match foresight with hindsight. Auditing machine-learning routines remains a difficult and still developing field of research.
Source: Reddit Transparency Report 2020, https://www.redditinc.com/policies/transparency-report-2020-1.
3.4. National / International / Industry-led efforts to address AI risks
A wide range of tools are available to promote the implementation of human rights due diligence by companies. Government policy to promote respect for human rights should involve a smart mix of voluntary, mandatory, national and international measures. Likewise, companies are encouraged to co-operate with each other, with governments, and with other stakeholders to jointly address sector-wide issues. This section provides a broad scoping of existing and forthcoming legislation, initiatives, and standards at the international, national and industry level to address AI human rights risks.
3.4.1. Leveraging existing legislation
Although the Guidelines are directed primarily towards company behaviour, they acknowledge the role of governments as a key driver of RBC, and there is widespread recognition that RBC cannot be achieved without governments taking part in these efforts. Experience from the minerals sector has shown that regulatory measures requiring human rights due diligence have had the largest impact in terms of driving business uptake of due diligence standards (OECD, 2016[31]). While voluntary standards have a role to play in promoting uptake, especially among the more progressive businesses, well-designed regulatory approaches have provided the strongest impetus for companies to change how they operate. Ultimately, a smart mix of market-based mechanisms driven by regulations will play an important role in scaling due diligence efforts and enforcement.
Dual-use export controls
Dual-use export controls can also play a significant role in addressing AI human rights risks, with many AI applications falling under the scope of these types of controls. Dual-use items are goods and technologies that may be used for both civilian and military purposes. Dual-use export controls cover not only manufacturers but also transport providers, academia and research institutions. In recent years, there has been an increased focus on the role such controls can play in areas beyond military applications, including preventing human rights abuses and controlling the trade in cyber-surveillance systems.
In March 2021, the European Parliament formally adopted the new EU regulation on its regime for export controls of dual-use goods, amending the previous rules set in 2009.13 The new EU dual-use goods regime will introduce new human rights-based catch-all controls over cyber-surveillance items. Specifically, it requires companies to produce due diligence findings about potential risks that the export of a non-listed cyber-surveillance item may be intended “for use in connection with internal repression and/or the commission of serious violations of international human rights and international humanitarian law.”
In the United States, the International Traffic in Arms Regulations and the Export Administration Regulations (EAR) both govern the export and import of items and technology relevant to national security. On AI, the EAR has taken a very narrow approach. To date, the only restrictions have been on the export and re-export of AI software designed to analyse satellite images.14 However, the US Department of State has released due diligence guidance to assist US companies seeking to prevent their products or services with surveillance capabilities from being misused by foreign government end-users to commit human rights abuses, in line with the OECD Guidelines for Multinational Enterprises and the United Nations Guiding Principles on Business and Human Rights (United States Department of State, 2020[32]).
Data protection
Data is the key ingredient for AI applications, so data protection laws could have a significant impact on how AI technologies develop. In May 2018, the European General Data Protection Regulation (GDPR)15 came into force. The regulation contains provisions and requirements related to the processing of the personal data of individuals and applies to any company – regardless of its location and the data subjects' citizenship or residence – that processes the personal information of data subjects inside the European Economic Area. The GDPR has unified data protection rules within the EU, making compliance easier for companies operating there, and it also applies to the transfer of personal data outside the EU.
Key to this discussion is Article 22 of the GDPR, which sets a general restriction on automated decision-making and profiling. It applies only when a decision is based solely on automated processing – including profiling – that produces legal effects or similarly significantly affects the data subject. Essentially, under the GDPR, whenever companies use AI to make a significant decision about an individual, such as whether to offer a loan, the data subject has the right to have a human review that decision, including “meaningful information about the logic involved.” However, it appears that explainability under the GDPR extends only to what data was collected to arrive at the decision, rather than the logic behind the decision, which in some cases is impossible for the developer to explain.
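What building this requirement into a product might look like can be sketched schematically: a decision that is solely automated and has significant effects is routed to a human reviewer together with the data used to reach it. The fields and routing logic below are illustrative assumptions, not a statement of what the GDPR requires in code.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    automated: bool            # decision made solely by automated processing
    significant_effect: bool   # e.g. a loan refusal
    inputs_used: dict          # data relied on, for the explanation

def finalise(decision: Decision) -> dict:
    """Illustrative Article 22-style routing: solely automated, significant
    decisions go to a human reviewer along with the inputs used."""
    if decision.automated and decision.significant_effect:
        return {"status": "pending_human_review",
                "explanation": decision.inputs_used}
    return {"status": "final", "approved": decision.approved}

loan = Decision(approved=False, automated=True, significant_effect=True,
                inputs_used={"income": 32000, "credit_history_years": 2})
print(finalise(loan))  # routed to human review with the data used
```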
RBC legislation
RBC expectations are already integrated into a number of existing regulations in OECD countries. However, while the uptake of RBC expectations by companies within the scope of the regulations has increased, overall uptake remains low and enforcement efforts are lacking. Regulators should consider the human rights impacts of AI when prioritising regulatory oversight efforts.
Table 3.3 provides a brief overview of existing RBC legislation that may be relevant to AI. Less directly relevant legislation (e.g. the UK Modern Slavery Act) is not included here, though it may also have an AI nexus not covered in this chapter.
Broadly, due diligence legislation falls into two categories: legislation mandating disclosure and transparency of information, and legislation mandating due diligence and other conduct requirements. The main distinction is that disclosure laws do not require companies to take affirmative steps to address RBC impacts.
Transparency and disclosure legislation requires companies to disclose the risks they identify and whether they are taking or have taken any action to address those risks. To comply with this type of legislation, companies may have to follow certain standards and good practice when disclosing risks, but they are not necessarily required to change their conduct, for example by addressing those risks. The idea behind such legislation is that it allows the market, including investors, consumers and civil society, to better assess companies. Examples include the EU Non-Financial Reporting Directive, the California Transparency in Supply Chains Act, and the UK and Australian Modern Slavery Acts.
Table 3.3. RBC Due Diligence Legislation in OECD Countries and the EU
FL/MS = Forced labour or modern slavery; CL = Child labour
| Country | Legislation or legislative proposals | Year enacted | Issue focus | Reporting expectation | Publication of reporting (i) | Due diligence expectation |
|---|---|---|---|---|---|---|
| Netherlands | Proposal for mandatory due diligence | Under discussion |  |  |  |  |
| France | Duty of vigilance law | 2017 |  |  |  |  |
| Denmark | Proposal for mandatory human rights due diligence | Under discussion |  |  |  |  |
| Finland | Proposal for Corporate Responsibility Act | Under discussion |  |  |  |  |
| Switzerland | Parliamentary initiative for MHRDD (ii) | Under discussion | FL/MS; CL |  |  |  |
| European Union | Non-financial reporting directive (iii) | 2014 |  |  |  |  |
| European Union | Corporate Sustainability Reporting Directive | Under discussion |  |  |  |  |
| European Union | Sustainable Finance Disclosure Regulation (iv, v) | 2019 |  |  |  |  |
| European Union | Proposal on directors' duties under Sustainable Corporate Governance initiative | Under discussion |  |  |  |  |
| European Union | Proposal on mandatory due diligence under Sustainable Corporate Governance initiative | Under discussion |  |  |  |  |
| Austria | Proposal for Social Responsibility Act (viii) | Under discussion | FL/MS; CL |  |  |  |
| Germany | Due Diligence Act | 2021 |  |  |  |  |
| Norway | Transparency Act | 2021 |  |  |  |  |
Notes: (i) Companies covered by the law are mandated to make their report publicly available. (ii) Counter-proposal to the Responsible Business Initiative. (iii) 2014/95/EU. (iv) 2018/0179(COD) – 24/05/2018. (v) Requires financial market participants to publish written policies on the integration of sustainability risks into the investment decision-making process and, where products or services are claimed to pursue sustainable investment objectives, obliges them to disclose information on the contribution of investment decisions to those objectives. (vi) This regulation does not affect the garment sector but can set a precedent as a successful conversion of voluntary self-certification into mandatory requirements stemming from the OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas. (vii) Update of the Tariff Act of 1930. (viii) Draft bill on social responsibility in the garment sector.
Mandatory due diligence legislation and other conduct requirements oblige companies to adhere to new forms of conduct and market practices, normally to prevent or mitigate RBC impacts and to report on them. An example is the 2017 French Duty of Vigilance Law, which requires very large French companies, and other companies with a substantial presence in France, to publish and implement a “vigilance plan” and account for how they address human rights impacts in their global operations.
Most governments currently considering due diligence legislation have conducted independent studies confirming that voluntary standards do not lead to sufficient uptake (European Commission, 2020[33]; Netherlands Ministry of Foreign Affairs, 2020[34]; Norwegian Ethics Information Committee, 2019[35]; Business and Human Rights Resource Centre, 2020[36]; German Federal Ministry of Labour and Social Affairs (BMAS), 2020[37]). All the studies pointed out that a smart mix of voluntary and mandatory rules is needed to increase uptake of due diligence implementation.
All proposed legislation currently under discussion plans to build on international standards (UN Guiding Principles on Business and Human Rights, the OECD Guidelines, and the OECD Due Diligence Guidance) in order to promote coherence and also to reduce legal uncertainty for multinational enterprises. The flexible, transparent approach of the Guidance framework can help MNEs in particular overcome the lack of an internationally agreed view on many of the human rights concerns facing the technology sector (e.g. privacy, data ownership, and free speech).
3.4.2. AI-specific initiatives
In November 2020, the OECD Centre for Responsible Business Conduct published a stocktaking of relevant national, international, and business-led initiatives, standards, and regulation on digitalisation and RBC, with a specific focus on social media platforms and artificial intelligence (OECD, 2020[38]). The paper found that governments are largely focused on developing AI strategies rather than regulation. Since 2015, governments have increasingly included AI strategies in their national policies, particularly in OECD countries and key partner economies. Regulation of AI remains minimal, with governments clearly concerned that regulation could limit innovation and place their country at a global disadvantage. More recently, the OECD Directorate for Science, Technology and Innovation developed a report on the state of implementation of the policy recommendations to governments contained in the OECD AI Principles (OECD, 2021[39]). This report presents a conceptual framework, provides findings, identifies good practices, and examines emerging trends in AI policy, particularly on how countries are implementing the five recommendations to policy makers contained in the OECD AI Principles.
Governments are increasingly developing strategies to advance their own efforts to create an environment conducive to innovation and digital transformation. Strategies commonly focus on the future of work, research, and incentivising innovation and leadership. Economic opportunities are driving state AI policies and research investments. Several states specify how AI will help particular sectors of their economies, often including agriculture, industry, healthcare and smart cities. Most national strategies or policies on AI address, in some form, the actual or potential impacts that artificial intelligence may have on people, planet and society.
The dominant focus areas in strategies dealing with AI in relation to RBC are competition issues; human rights, including privacy and discrimination in the workplace; labour market impacts, specifically the future of work; and consumer protection. About 40% of the strategies reviewed mention one or several of these elements. In addition, approximately 35% of the strategies reviewed foresee some action on disclosure of AI systems by developers or users. The OECD AI Policy Observatory (OECD.AI, www.oecd.ai) contains a database of national AI policies from OECD countries, partner economies and the EU. These resources help policy makers keep track of national initiatives to implement the recommendations to governments contained in the OECD AI Principles.
The OECD started work on AI in 2016. The resulting Recommendation of the Council on Artificial Intelligence, adopted in 2019, represents the first international, intergovernmental standard for AI and identifies AI Principles and a set of policy recommendations for responsible stewardship of trustworthy AI. Subsequently, the G20 Leaders welcomed the G20 AI Principles, drawn from the AI Principles contained in the OECD Recommendation. The AI Principles focus on responsible stewardship of trustworthy AI and include respect for human rights, fairness, transparency and explainability, robustness and safety, and accountability. The OECD AI Principles aim to complement existing OECD standards that are already relevant to AI: the Recommendation refers to OECD standards in the fields of privacy and data protection and digital security risk management, as well as to the Guidelines. There could also be a role for the Guidelines with respect to the implementation of the Recommendation.
In early 2020, the OECD launched OECD.AI, a platform to share and shape AI policies that provides data and multidisciplinary analysis on artificial intelligence. Also in early 2020, the OECD's Committee on Digital Economy Policy tasked the OECD.AI Network of Experts (ONE AI) with proposing practical guidance for implementing the OECD AI Principles for trustworthy AI through the activities of three expert groups and one task force. The OECD.AI expert group on implementing trustworthy AI developed a report on tools for trustworthy AI, to help AI actors and decision makers implement effective, efficient and fair policies for trustworthy AI (OECD, 2021[27]). The OECD.AI expert group on the classification of AI systems is also developing a user-friendly framework to classify AI systems and help policy makers understand the different policy considerations associated with different types of AI systems.
In 2018, the EU presented a Strategy for AI. It includes the elaboration of recommendations on future policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges. The Strategy has resulted, among other things, in the Policy and Investment Recommendations, which address accountability and reporting on negative impacts, and the Ethics Guidelines for Trustworthy Artificial Intelligence (revised in April 2019). These non-binding guidelines address, among other things, accountability and risk assessment, privacy, transparency, and societal and environmental well-being. In February 2020, the European Commission issued a White Paper (European Commission, 2020[40]) and an accompanying report on the safety and liability framework, which set out policy objectives for a regulatory and investment-oriented approach that both promotes the uptake of AI and addresses the risks associated with certain uses of AI. In April 2021, the Commission published its AI package proposing new rules and actions aiming to turn Europe into the global hub for trustworthy AI (European Commission, 2021[6]).
In September 2019, the Committee of Ministers of the Council of Europe set up an intergovernmental Ad hoc Committee on Artificial Intelligence (CAHAI) to examine the feasibility of a legal framework for the development, design and application of artificial intelligence. Important issues to be addressed include the need for a common definition of AI; the mapping of the risks and opportunities arising from AI, notably its impact on human rights, the rule of law and democracy; and the opportunity to move towards a binding legal framework. In performing its tasks, the CAHAI takes due account of a gender perspective and of building cohesive societies, and promotes and protects the rights of persons with disabilities. The CAHAI adopted a Feasibility Study on a legal framework for AI in December 2020 (Council of Europe Ad Hoc Committee on Artificial Intelligence, 2020[41]). The study examines the viability and potential elements of such a framework for the development and deployment of AI, based on the Council of Europe's standards on human rights, democracy and the rule of law.
In Latin America, Argentina, Mexico and Brazil have also developed national initiatives to support the development of AI aimed at addressing the Sustainable Development Goals. These initiatives involve financial support for research on AI, grants for research and development of specific technologies, and support for development and awareness raising on AI ethics. In Mexico, national strategy and research focus on mitigating the impacts of AI on the job market; in Argentina, the focus is on ethics; and in Brazil, on data protection and urban development.
The UN Human Rights Business and Human Rights in Technology (B-Tech) Project seeks to provide authoritative guidance and resources to enhance the quality of implementation of the United Nations' Guiding Principles on Business and Human Rights (UNGPs) with respect to a selected number of strategic focus areas in the technology space.16 It aims to offer practical guidance and public policy recommendations to realise a rights-based approach to the development, application and governance of digital technologies. Its approach includes attention to human rights risks, corporate responsibility and accountability, drawing on the three pillars of the UNGPs: Protect, Respect, and Remedy. For example, it looks at the role of states and private actors in enhancing human rights in business models, human rights due diligence, and accountability and remedy. The project offers a framework for what responsible business conduct looks like in practice regarding the development, application, sale and use of digital technologies. It also suggests a smart mix of regulation, incentives and public policy tools for policy makers that provides human rights safeguards and accountability without hampering the potential of digital technologies to address social, ecological and other challenges.
Multi-stakeholder initiatives are also playing a critical role in helping clarify specific RBC issues in relation to digital technologies and in supporting common action. For example, though not AI-specific, the Global Network Initiative provides a framework of principles and oversight for the ICT industry to respect, protect, and advance user rights to freedom of expression and privacy, in particular as regards government requests for information. The Partnership on AI primarily focuses on stakeholder engagement and dialogue, seeking to maximise the potential benefits of AI for as many people as possible.
Civil society is actively involved in defining and promoting ethical principles for responsible development and use of digital technologies. Many of the emerging principles, though not consistently, reference international RBC instruments (mostly from the United Nations). Leading efforts include the Santa Clara Principles; the Toronto Declaration, a human rights-based framework that delineates the responsibilities of states and private actors to prevent discrimination as AI advances; and Ranking Digital Rights, the first public tool to assess company performance on digital rights, which seeks to trigger a 'race to the top'.
Companies have developed detailed policies dealing with a wide range of RBC issues. For AI, company policies tend to focus on transparency of AI systems, promotion of human values, human control of technology, fairness and non-discrimination, safety and security, accountability, and privacy. For online platforms, company policies tend to focus on mitigating violence and criminal behaviour, safety, mitigating objectionable content, integrity and authenticity, data collection, use, and security, sharing of data with third parties, user control, accountability, and promotion of social welfare. Broad commitments to human rights are included in most company policies reviewed. A brief analysis of company efforts shows that while many companies have publicly committed to human rights, their due diligence commitments largely focus on identifying and managing risk related to the above-mentioned policy issues, rather than tracking effectiveness, public reporting, or supporting remediation.
3.5. AI uses to support RBC
A detailed treatment of the specific applications of AI to support due diligence could stand alone as its own report. AI's ability to analyse and interpret huge datasets quickly makes it an excellent tool to support supply chain due diligence. This section offers a brief look at AI applications for supply chain traceability and risk identification.
Physical supply chains (e.g. minerals and metals, garments, and agriculture) are extremely complex and fragmented. Many multinationals, particularly those involved in manufacturing, have thousands of suppliers and sometimes 10-15 tiers in their supply chains, with the exact relationships between those suppliers constantly changing. These supply chains include both informal and formal actors in developed and developing parts of the world, which makes it particularly difficult to track where goods come from and who handles them, both key sets of information for conducting supply chain due diligence.
Fraudulent misrepresentation of the origin of goods, money laundering, customs violations, bribery and tax evasion are common risks in physical supply chains, and are also often associated with, or enablers of, human rights abuses. Anomalous trade and production data can often be connected to these risks (e.g. an unusually high shipment of goods from a supplier, an unknown intermediary in a supply chain, or a shipment of raw material from a country known not to produce that material).
Currently, many of these anomalies likely go unchecked given the overwhelming amount of data to sift through. AI can continuously analyse large amounts of rapidly changing data points along the supply chain (e.g. weather reports, shipping delays, payments made to customs agents, inventory, social media trends, and financial and political news) not only to make supply chains more efficient, but also to quickly find hidden correlations between these variables that potentially point towards illicit behaviour. A combination of AI-based analysis and human decision making could allow for a less costly, more efficient due diligence process.
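As a concrete illustration, the sketch below applies a standard unsupervised anomaly detector (scikit-learn's IsolationForest) to a toy shipment log. The features, data and contamination threshold are invented for illustration only; a production system would ingest far richer signals and route every flag to a human analyst.

```python
# Toy illustration: flag anomalous shipments in a supply chain log with an
# off-the-shelf unsupervised detector. All feature choices and numbers are
# illustrative assumptions, not a real due diligence system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Shipment log: [declared_weight_tonnes, unit_price_usd, n_intermediaries]
normal = rng.normal(loc=[20.0, 1500.0, 2.0],
                    scale=[3.0, 100.0, 0.5], size=(500, 3))
suspicious = np.array([
    [95.0, 1500.0, 2.0],   # implausibly large shipment from a small supplier
    [20.0, 300.0, 6.0],    # far-below-market price routed through many brokers
])
shipments = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(shipments)
flags = model.predict(shipments)  # -1 marks likely anomalies

for row in shipments[flags == -1]:
    print("flag for human review:", np.round(row, 1))
```

The detector does not establish wrongdoing; it only prioritises which records a human due diligence team should examine first.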
AI is also being used to evaluate company risk profiles for investors, linking information on RBC issues (human rights abuses, financial crime, and environmental degradation) with financial performance to support 'ESG investing'. One such use case is sentiment analysis algorithms (Barrachin and Shoaraee, 2019[42]). These algorithms allow computers to analyse news and companies' sustainability reporting in order to determine how seriously a company takes an issue. For example, sentiment analysis programs might be trained to read the transcripts of a company's quarterly earnings calls, identify the parts of the conversation where the CEO talks about environmental degradation, and infer from the words used how committed the company appears to be to mitigating risks. Once more data is gathered on the sustainability impacts of due diligence efforts (see, for example, OECD (2021[43])), AI systems could potentially be used to further link company rhetoric and efforts with change on the ground. However, this technology still has limits that human due diligence will be required to overcome. For example, companies aware of AI-based ESG rating systems may over-represent ESG keywords in disclosures in an effort to game the system.
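The toy scorer below conveys the basic idea in a few lines of Python. The keyword lists and scoring rule are illustrative assumptions only; real ESG sentiment systems rely on trained language models rather than hand-picked word lists, and are themselves vulnerable to the keyword-stuffing problem just described.

```python
# Toy stand-in for ESG sentiment analysis: scan an earnings call transcript
# for environment-related passages and score how committed the language
# sounds. The word lists below are illustrative assumptions only.
ESG_TERMS = {"emissions", "environmental", "climate", "sustainability"}
COMMITTED = {"target", "invest", "reduce", "commit", "audit"}
HEDGED = {"may", "explore", "consider", "aspire"}

def commitment_score(transcript: str) -> int:
    """Score ESG passages: >0 leans concrete commitment, <0 leans vague talk."""
    score = 0
    for sentence in transcript.lower().split("."):
        words = set(sentence.split())
        if words & ESG_TERMS:                 # sentence touches an ESG topic
            score += len(words & COMMITTED)   # concrete action verbs
            score -= len(words & HEDGED)      # vague, non-binding language
    return score

print(commitment_score(
    "We commit to reduce emissions by 30 percent. "
    "We may explore broader sustainability options."
))  # 2 - 2 = 0
```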
3.6. Looking forward
Given the wide-ranging RBC issues addressed in the stocktaking review described above, OECD RBC instruments continue to be relevant. They can provide cross-sectoral frameworks for looking at these issues holistically and can help connect the dots between the different RBC issues. The broad scope of the Guidelines, covering all areas where business interacts with society, allows for addressing the manifold impacts of digitalisation on society and for harnessing new technologies to actually improve RBC and supply chain due diligence.
Specifically, the Guidelines and Due Diligence Guidance enable businesses to systematically address the impacts of their activities in all of their interactions with society. At the same time, it is clear from the review that policy makers, industry, workers, trade unions, business and employers' organisations, and other stakeholders could benefit from further work to integrate RBC standards and approaches into ongoing digitalisation efforts and to clarify the applicability of RBC instruments to specific digital issues. Companies can be supported through more specific research and targeted guidance on how to apply RBC standards to the development and application of AI.
Technology companies should be aware that these expectations and voluntary recommendations may soon become legal requirements. Political and legislative efforts to make OECD-recommended due diligence mandatory are multiplying across the globe, including in France, Germany, Finland, the US, the UK and Switzerland, as well as in the European Union, for which there is a legislative proposal to implement mandatory due diligence in 2021 (European Parliament, 2020[44]). Given the growing momentum, it is sensible for companies operating in this space to stay ahead of the curve. Not only will early implementation and active engagement with stakeholders help reduce future legal and reputational risks, it could also give companies a seat at the table in shaping future rules on this topic so that they are as practically implementable as possible.
References
[18] Aloisi, A. and V. De Stefano (2021), “Essential jobs, remote work and digital surveillance: addressing the COVID-19 pandemic panopticon”, International Labour Review, https://doi.org/10.1111/ilr.12219.
[11] Baraniuk, C. (2019), “The new weapon in the fight against crime”, BBC, https://www.bbc.com/future/article/20190228-how-ai-is-helping-to-fight-crime.
[42] Barrachin, M. and S. Shoaraee (2019), “Sentiment Analysis: Is It All The Same?”, S&P Global Market Intelligence, https://www.spglobal.com/marketintelligence/en/news-insights/research/sentiment-analysis-is-it-all-the-same.
[3] Bedoya, A. (2016), The Perpetual Line-Up: Unregulated Police Face Recognition in America, Georgetown Law Center on Privacy and Technology, https://www.perpetuallineup.org/.
[36] Business and Human Rights Resource Centre (2020), Germany: Cabinet passes mandatory due diligence proposal; Parliament now to consider & strengthen, https://www.business-humanrights.org/en/latest-news/german-due-diligence-law/.
[13] Chouhan, K. (2019), “Role of an AI in Legal Aid and Access to Criminal Justice”, International Journal of Legal Research, Vol. 6/2 (1), https://ssrn.com/abstract=3536194.
[41] Council of Europe Ad Hoc Committee on Artificial Intelligence (2020), Feasibility study on AI legal framework, https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da.
[29] Council of Europe Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (2019), Responsibility and AI, Council of Europe, https://rm.coe.int/responsability-and-ai-en/168097d9c5.
[9] Council of Europe Committee of Experts on Internet Intermediaries (2018), Algorithms and human rights - Study on the human rights dimensions of automated data processing techniques and possible regulatory implications, Council of Europe, https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html.
[8] Dastin, J. (2018), “Amazon scraps secret AI recruiting tool that showed bias against women”, Reuters, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
[7] European Commission (2021), Artificial intelligence: threats and opportunities, https://www.europarl.europa.eu/news/en/headlines/society/20200918STO87404/artificial-intelligence-threats-and-opportunities.
[6] European Commission (2021), Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence, https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682.
[33] European Commission (2020), Study on due diligence requirements through the supply chain, https://op.europa.eu/en/publication-detail/-/publication/8ba0a8fd-4c83-11ea-b8b7-01aa75ed71a1/language-en.
[40] European Commission (2020), White Paper on Artificial Intelligence - A European approach to excellence and trust, https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en.
[44] European Parliament (2020), Towards a Mandatory EU system of due diligence for supply chains, EPRS | European Parliamentary Research Service, https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_BRI(2020)659299.
[15] Floridi, L., B. Mittelstadt and S. Wachter (2017), “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law, Vol. 7/2, https://academic.oup.com/idpl/article/7/2/76/3860948.
[16] Gasparri, G. (2019), “Risks and Opportunities of RegTech and SupTech Developments”, Frontiers in Artificial Intelligence, https://doi.org/10.3389/frai.2019.00014.
[37] German Federal Ministry of Labour and Social Affairs (BMAS) (2020), Respect for human rights along global value chains - risks and opportunities for sectors of the German economy, https://www.bmas.de/DE/Service/Publikationen/Forschungsberichte/fb-543-achtung-von-menschenrechten-entlang-globaler-wertschoepfungsketten.html.
[21] Gurley, L. and J. Rose (2020), Amazon Employee Warns Internal Groups They’re Being Monitored For Labor Organizing, Vice News, https://www.vice.com/en/article/m7jz7b/amazon-employee-warns-internal-groups-theyre-being-monitored-for-labor-organizing.
[30] Katyal, S. (2018), “Private Accountability in the Age of Artificial Intelligence”, UCLA Law Review, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3309397.
[2] Kerry, C. (2020), Protecting privacy in an AI-driven world, Brookings Institution, https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/.
[17] Menn, J. and D. Volz (2016), “Google, Facebook quietly move toward automatic blocking of extremist videos”, Reuters, https://www.reuters.com/article/us-internet-extremism-video-exclusive-idUSKCN0ZB00M.
[19] Moore, P. (2020), Data subjects, digital surveillance, AI and the future of work, Panel for the Future of Science and Technology for the Directorate-General for Parliamentary Research Services of the Secretariat of the European Parliament, https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_STU(2020)656305.
[5] Mozur, P. (2019), “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority”, The New York Times, https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html.
[34] Netherlands Ministry of Foreign Affairs (2020), Evaluation and revision of policy on Responsible Business Conduct (RBC), https://www.government.nl/topics/responsible-business-conduct-rbc/evaluation-and-renewal-of-rbc-policy.
[12] Niiler, E. (2018), “Can AI Be a Fair Judge in Court? Estonia Thinks So”, WIRED, https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/.
[35] Norwegian Ethics Information Committee (2019), Report on Supply Chain Transparency - Proposal for an Act regulating Enterprises’ transparency about supply chains, duty to know and due diligence, https://www.regjeringen.no/contentassets/6b4a42400f3341958e0b62d40f484371/ethics-information-committee---part-i.pdf.
[43] OECD (2021), Monitoring and Evaluation Framework: OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas, OECD, https://mneguidelines.oecd.org/monitoring-and-evaluation-framework.pdf.
[39] OECD (2021), “State of implementation of the OECD AI Principles: Insights from national AI policies”, OECD Digital Economy Papers, No. 311, OECD Publishing, Paris, https://dx.doi.org/10.1787/1cd40c44-en.
[27] OECD (2021), “Tools for trustworthy AI: A framework to compare implementation tools for trustworthy AI systems”, OECD Digital Economy Papers, No. 312, OECD Publishing, Paris, https://dx.doi.org/10.1787/008232ec-en.
[38] OECD (2020), Digitalisation and Responsible Business Conduct Stocktaking of Policies and Initiatives, https://mneguidelines.oecd.org/Digitalisation-and-responsible-business-conduct.pdf.
[1] OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, https://dx.doi.org/10.1787/eedfee77-en.
[26] OECD (2019), Due Diligence for Responsible Corporate Lending and Securities Underwriting, OECD Publishing, https://mneguidelines.oecd.org/due-diligence-for-responsible-corporate-lending-and-securities-underwriting.htm.
[47] OECD (2019), “Scoping the OECD AI principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD (AIGO)”, OECD Digital Economy Papers, https://doi.org/10.1787/d62f618a-en.
[22] OECD (2018), Due Diligence Guidance for Responsible Business Conduct, OECD Publishing, http://mneguidelines.oecd.org/due-diligence-guidance-for-responsible-business-conduct.htm.
[31] OECD (2016), Report on the Implementation of the Recommendation on Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas, https://one.oecd.org/official-document/COM/DAF/INV/DCD/DAC(2015)3/FINAL/en.
[48] OECD (2011), OECD Guidelines for Multinational Enterprises, 2011 Edition, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264115415-en.
[23] OECD (2021, forthcoming), Considering the purposes of the Guidelines and the notion of the “multinational enterprise” in the context of initial assessments.
[25] OECD (2021, forthcoming), Venture Capital Investments in Artificial Intelligence.
[45] OECD Watch (2020), Society for Threatened Peoples Switzerland vs UBS Group, https://www.oecdwatch.org/complaint/society-for-threatened-peoples-switzerland-vs-ubs-group/.
[20] Palmer, A. (2020), How Amazon keeps a close eye on employee activism to head off unions, CNBC, https://www.cnbc.com/2020/10/24/how-amazon-prevents-unions-by-surveilling-employee-activism.html.
[46] Russell, S. (2019), Human-compatible, Penguin Books, ISBN 9780525558637.
[28] Smith, R. (2019), “Jefferies and Credit Suisse set to lose on Israeli cyber security deal”, https://www.ft.com/content/e390685a-5a10-11e9-939a-341f5ada9d40.
[10] Sweeney, L. (2013), “Discrimination in Online Ad Delivery: Google ads, black names and white names, racial discrimination, and click advertising”, ACM Queue, Vol. 11/3, https://dl.acm.org/doi/10.1145/2460276.2460278.
[32] United States Department of State (2020), Guidance on Implementing the UN Guiding Principles for Transactions Linked to Foreign Government End-Users for Products or Services with Surveillance Capabilities, https://www.state.gov/key-topics-bureau-of-democracy-human-rights-and-labor/due-diligence-guidance/.
[4] Vincent, J. (2020), “NYPD used facial recognition to track down Black Lives Matter activist”, The Verge, https://www.theverge.com/2020/8/18/21373316/nypd-facial-recognition-black-lives-matter-activist-derrick-ingram.
[14] Walter, J. (2019), AI Could Give Millions Online Legal Help. But What Will the Law Allow?, https://www.discovermagazine.com/technology/ai-could-give-millions-online-legal-help-but-what-will-the-law-allow.
[24] Xiaomin, M. (2019), “Artificial Intelligence: Investment Trends and Selected Industry Uses”, EM Compass Note 71, https://www.ifc.org/wps/wcm/connect/7898d957-69b5-4727-9226-277e8ae28711/EMCompass-Note-71-AI-Investment-Trends.pdf?MOD=AJPERES&CVID=mR5Jv.
Notes
← 1. The Guidelines were adopted as part of the broader 1976 OECD Declaration on International Investment and Multinational Enterprises. The Guidelines are regularly reviewed and revised and have been updated five times since 1976, most recently in 2011 (OECD, 2011[48]).
← 2. See Chapter IX, “Science and Technology,” (OECD, 2011[48]).
← 3. Universal Declaration of Human Rights, Article 12.
← 4. Article 9, Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&rid=3#d1e2051-1-1.
← 5. Universal Declaration of Human Rights, Article 7.
← 6. Universal Declaration of Human Rights, Articles 10 & 11.
← 7. Universal Declaration of Human Rights, Article 19.
← 8. Universal Declaration of Human Rights, Article 20.
← 9. See for example, United States Department of State (2020), Guidance on Implementing the "UN Guiding Principles" for Transactions Linked to Foreign Government End-Users for Products or Services with Surveillance Capabilities, https://www.state.gov/key-topics-bureau-of-democracy-human-rights-and-labor/due-diligence-guidance/; Danish Institute for Human Rights (2020), Human rights impact assessment of digital activities, https://www.humanrights.dk/publications/human-rights-impact-assessment-digital-activities; United Nations Office of the High Commissioner on Human Rights B-Tech Project, Foundational Papers, https://www.ohchr.org/EN/Issues/Business/Pages/B-TechProject.aspx.
← 10. OECD (2019), Recommendation of the Council on Artificial Intelligence, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
← 11. Adapted from Data Robot (2019), “Machine Learning Life Cycle,” https://datarobot.com/wiki/machine-learning-life-cycle/; Brook, Adrien (2019), “10 Steps to Create Your Very Own Corporate AI Project,” Towards Data Science, https://towardsdatascience.com/10-steps-to-your-very-own-corporate-a-i-project-ced3949faf7f.
← 12. OECD (2019), Recommendation of the Council on Artificial Intelligence, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
← 13. European Parliament legislative resolution of 25 March 2021 on the proposal for a regulation of the European Parliament and of the Council setting up a Union regime for the control of exports, transfer, brokering, technical assistance and transit of dual-use items (recast) (COM(2016)0616 – C8-0393/2016 – 2016/0295(COD)), https://www.europarl.europa.eu/doceo/document/TA-9-2021-0101_EN.pdf.
← 14. United States Federal Register, Addition of Software Specially Designed To Automate the Analysis of Geospatial Imagery to the Export Control Classification Number 0Y521 Series, https://www.federalregister.gov/documents/2020/01/06/2019-27649/addition-of-software-specially-designed-to-automate-the-analysis-of-geospatial-imagery-to-the-export. https://www.govinfo.gov/content/pkg/FR-2020-01-06/pdf/2019-27649.pdf
← 15. EU Regulation 2016/679 (2016), on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), https://eur-lex.europa.eu/eli/reg/2016/679/2016-05-04.
← 16. See OHCHR webpage on the B-Tech Project for full list of materials, https://www.ohchr.org/EN/Issues/Business/Pages/B-TechProject.aspx.