This chapter discusses initiatives to promote the use of artificial intelligence (AI) for social good, as well as public perceptions of AI in Germany. Germany has a strong commitment to fostering an AI ecosystem dedicated to societal well-being. The Civic Coding – Innovation Network AI for the Common Good serves as a central hub for promoting AI projects for the common good. Public perceptions of AI in Germany are generally positive but differ by age group and application sector. Trust in AI applications within healthcare is notably high, while concerns over data privacy and disinformation are widespread. Recommendations include broadening stakeholder engagement in AI policy design and regularly monitoring public perceptions of AI.
OECD Artificial Intelligence Review of Germany
7. Society
Abstract
The use of AI should not be guided by economic interests alone but also be oriented towards the common good of German society. Both the 2018 national AI strategy and its 2020 update set out these goals. More specifically, the 2020 strategy aimed to establish an ecosystem for AI dedicated to the common good, promote AI applications supporting everyday consumer life (referred to as consumer-enabling technologies), and support AI projects for the preservation, exploration, accessibility, networking, and dissemination of cultural offerings. Furthermore, it outlined plans for building AI competence in exploring and verifying media content to safeguard the diversity of opinions.
Box 7.1. Society: Findings and recommendations
Findings
The federal government launched several initiatives to promote the use of AI for social good.
Overall public perception of AI in Germany is positive. Trust in AI applications is highest in healthcare and lowest in human resources.
Potential threats to the positive public perception of AI come from mis- and disinformation. The federal government initiated various activities to address this.
While social partners are involved in AI policy design, stakeholder participation could be broadened.
Citizens are consulted in the AI debate, but their involvement in AI policy design could be deepened.
Recommendations
Involve a broader range of stakeholders in AI policy design.
Launch an AI citizens’ assembly.
Regularly monitor public perceptions on AI.
Programmes supporting AI for the common good
Germany's Civic Coding Initiative fosters AI for the common good by bundling initiatives and projects across ministries. Through platforms and spaces for encounters such as the Civic Innovation Platform and the AI Ideas Workshop for Environmental Protection, the federal government supports the networking of civil society actors and enables the testing of digital technologies. Together with civil society organisations, it is committed to creating data spaces for the common good as part of the Civic Data Lab.
The federal government supports initiatives to promote the use of AI for social good
The Civic Coding – Innovation Network AI for the Common Good, founded by three ministries, stands out as the central hub for promoting AI projects for the common good (Box 7.2). The independent think tank iRights.Lab established the Centre for Trustworthy Artificial Intelligence (Zentrum für vertrauenswürdige Künstliche Intelligenz, ZVKI) with the support of the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (Bundesministerium für Umwelt, Naturschutz, nukleare Sicherheit und Verbraucherschutz, BMUV), in co-operation with the Fraunhofer Institute for Applied and Integrated Security (Fraunhofer-Institut für Angewandte und Integrierte Sicherheit, AISEC), the Fraunhofer Institute for Intelligent Analysis and Information Systems (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme, IAIS) and the Freie Universität Berlin. As a national and neutral interface between science, business, and civil engagement, it provides information on many aspects relevant to consumers, facilitates public discussions and develops tools for the evaluation and certification of trustworthy AI (ZVKI, 2023[1]).
Box 7.2. Civic Coding - Innovation Network AI for the Common Good
Civic Coding - Innovation Network AI for the Common Good is an interdepartmental initiative of the Federal Ministry of Labour and Social Affairs (Bundesministerium für Arbeit und Soziales, BMAS), the BMUV and the Federal Ministry for Family Affairs, Senior Citizens, Women and Youth (Bundesministerium für Familie, Senioren, Frauen und Jugend, BMFSFJ). The initiative aims to create and leverage synergies by bundling and networking AI-related projects, programmes, structures, and communities of the three participating ministries. The goal is to develop a visible and effective innovation network that supports and secures the long-term use of AI that works for the public good and sustainable development (Civic Coding, 2023[2]). The following projects are part of the innovation network.
Civic Innovation Platform
The Civic Innovation Platform (CIP) is supported by the BMAS. The core of the project – a multifunctional internet platform for connecting cross-sectoral and/or interdisciplinary project teams – was integrated into the Civic Coding Initiative in 2023. The human-centred development and use of AI applications benefiting the common good and society is supported in two funding stages. The first provides financial and non-material support through a prize of up to EUR 20 000 and in-kind offerings such as consulting and workshops, as part of the idea contest “AI is what we make it!”, in which project teams submit their ideas for AI applications that benefit the common good. The second, a long-term project funding scheme, started in 2023 (Civic Innovation Platform, 2023[3]).
AI Ideas Workshop for Environmental Protection
The AI Ideas Workshop for Environmental Protection is supported by the BMUV. It serves as both a physical and virtual hub for civil society actors to support them in developing data-driven and AI-based solutions to address environmental challenges. An AI Ideation Workshop offers educational formats to impart skills in handling environmental data and, by using best practices, demonstrates how AI is already contributing to environmental protection today (KI-Ideenwerkstatt, 2023[4]).
Civic Data Lab
The Civic Data Lab is funded by the BMFSFJ. It helps organised and non-organised civil society actors to better achieve common-good goals with data by collecting, organising and structuring their data; evaluating, linking and reusing it for their target groups; making it available to others; and supplementing it with other available data (Caritas digital, 2023[5]).
Source: Civic Coding (2023[2]), Die Initiative, https://www.civic-coding.de/ueber-civic-coding/die-initiative (accessed on 2 November 2023); Civic Innovation Platform (2023[3]), Projekt und Leitbild, https://www.civic-innovation.de/ueber-uns/projekt-und-leitbild (accessed on 2 November 2023); KI-Ideenwerkstatt (2023[4]), Über uns, https://www.ki-ideenwerkstatt.de/ (accessed on 2 November 2023); Caritas digital (2023[5]), Projekt Civic Data Lab, https://www.caritas-digital.de/projekte/civic-data/ (accessed on 2 November 2023).
Public perception of AI in Germany
German citizens generally perceive AI positively, with high levels of awareness and trust. However, concerns about data privacy and disinformation persist. Germany could regularly monitor public perceptions of AI and step up the participation of citizens and diverse civil society actors in policy making to reflect civic values in AI policies.
People in Germany have a positive perception of AI
When promoting a common good-oriented use of AI in German society, it is also important to be familiar with the attitudes that people hold regarding AI. In general, people in Germany had a comparatively positive perception of AI in 2021, one of the highest in a sample of comparable countries (Lloyd's Register Foundation, 2021[6]). They also show a comparatively high willingness to trust and accept AI systems, ranking seventh, ahead of people in countries such as Australia, Canada, Estonia, Finland, France, Israel and the United Kingdom (UK). However, people in Germany are significantly less willing to trust and accept AI systems than those living in countries such as Brazil, the People’s Republic of China, India or South Africa (KPMG, 2023[7]).
In a recent survey gauging Europeans’ perspectives on the impact of AI, Germany stood out: more than 80% of respondents indicated that they knew a little or a lot about AI – the highest share among the countries surveyed. Opinions on AI applications overall are divided, with roughly equal proportions believing AI to be beneficial or harmful (21% and 22% respectively). More than half of respondents were neutral or indicated that they did not have enough information. Views also differed markedly by age: 36% of younger respondents (19-25) believed AI would be beneficial, compared with only 13% of older respondents (65+) (bidt, 2023[8]).
Regarding the sector of AI applications, people living in Germany placed the highest level of trust in AI applications within the healthcare sector in 2023 (see Chapter 10). Conversely, AI applications in the human resources sector received the least trust. However, this difference is not specific to Germany but is also evident in other countries. It is likely a result of the significant and immediate benefits that improved precision in medical diagnoses and treatments provides to individuals, coupled with the generally high levels of trust in healthcare professionals in most countries (KPMG, 2023[7]).
In addition, most German X users seem to hold a rather neutral or positive sentiment regarding AI, displaying a lower baseline negative sentiment than users in similar countries (Figure 7.1). However, it is essential to note that the data presented here extend only until 2022. Given that ChatGPT was launched in November of that year, it is plausible that sentiments have evolved since then. Considering the rapid development of AI technologies, it is crucial for countries to keep abreast of their citizens’ perceptions of AI. This is particularly relevant for Germany, given that its AI strategy explicitly emphasises a human-centred approach.
However, this positive attitude might be jeopardised if new AI applications do not reflect German values and are not trustworthy. Concerns over data privacy, social scoring and disinformation are already prevalent among X users in Germany. The targeted use of AI applications to disseminate mis- and disinformation, for instance, could deepen these concerns and have negative consequences for German democracy as a whole. It is therefore imperative for the federal government to continue and expand its efforts to combat AI-generated mis- and disinformation. Two promising examples of the many projects addressing these issues are DeFaktS and noFake (BMBF, 2022[9]).
The DeFaktS project, funded by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), takes a comprehensive approach to researching and combating disinformation. To this end, an AI model is trained on data extracted from suspicious social media and messenger groups, enabling it to recognise factors and stylistic elements characteristic of disinformation. The trained model then becomes a component of an explainable artificial intelligence (XAI) pipeline, which is used to develop an app that transparently informs and warns users of online offerings about the possible occurrence of disinformation (BMBF, 2023[10]).
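The stylistic-cue idea behind DeFaktS can be illustrated with a deliberately simple sketch. The real project trains AI models on extracted social-media data and uses dedicated XAI components whose internals are not described here; the toy scorer below merely flags two invented surface cues (all-caps words and exclamation runs) and reports which cues fired, standing in for the transparent warning the app aims to give users.

```python
# Illustrative sketch only: these cues and thresholds are invented for the
# example and are not DeFaktS's actual features or methods.
import re

def style_cues(text: str) -> dict:
    """Count simple stylistic markers often associated with disinformation."""
    return {
        "all_caps_words": len(re.findall(r"\b[A-Z]{3,}\b", text)),
        "exclamation_runs": len(re.findall(r"!{2,}", text)),
    }

def warn(text: str, threshold: int = 2):
    """Return a warning flag plus the cues that triggered it.
    Reporting the cues stands in for the XAI 'explanation'."""
    cues = style_cues(text)
    return sum(cues.values()) >= threshold, cues

flagged, cues = warn("SHOCKING truth THEY hide from you!!!")
print(flagged, cues)  # → True {'all_caps_words': 2, 'exclamation_runs': 1}
```

A production system would of course learn such cues from labelled data rather than hard-code them; the point is only that surfacing *which* signals fired is what makes the warning explainable to users.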
Funded with EUR 1.33 million by the BMBF, the AI-supported Assistance System for Crowdsourced Detection of Disinformation on Digital Platforms (noFake) project aims to develop an assistance system that helps crowd workers quickly identify disinformation. The system automates the analysis of large datasets by pre-sorting suspicious text and image materials, associating them with similar content, and revealing the dissemination channels of the examined materials. The research findings will be combined with journalistic expertise in the development of the assistance system. Recognising that the system’s success depends not only on its functionality but also on how crowd workers use it, training materials and learning curricula are being developed to empower crowd workers to make informed assessments of information materials (BMBF, 2023[11]).
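The pre-sorting step described above – associating suspicious materials with similar content so crowd workers can assess them together – can likewise be sketched in a few lines. The real noFake system handles large datasets, images and dissemination channels with methods not detailed in the source; this toy merely groups invented example texts by word-overlap (Jaccard) similarity.

```python
# Illustrative sketch only: the similarity measure, threshold and greedy
# grouping below are invented stand-ins, not noFake's actual pipeline.
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def pre_sort(texts: list, threshold: float = 0.5) -> list:
    """Greedily group texts whose similarity to a group's first member
    exceeds the threshold; each group can then be reviewed as one unit."""
    groups = []
    for t in texts:
        for g in groups:
            if jaccard(t, g[0]) >= threshold:
                g.append(t)
                break
        else:
            groups.append([t])  # no similar group found: start a new one
    return groups

claims = [
    "miracle cure suppressed by doctors",
    "miracle cure suppressed by big doctors",
    "election results were secretly changed",
]
print(len(pre_sort(claims)))  # → 2
```

Grouping near-duplicates this way is what lets a crowd worker dismiss or confirm a whole cluster of variants of the same claim at once, rather than item by item.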
Simply measuring the attitudes of the German population, though, is not enough. To fully take citizens’ concerns into account when formulating AI policies, the involvement of civil society organisations and citizens in these processes should be encouraged. The federal government does engage with stakeholders in the design of AI policies, for instance through an online consultation during the formulation of the 2018 national AI strategy. However, the organisations most strongly and frequently involved are primarily the social partners (Sozialpartner:innen). In contrast, civil society organisations representing minorities or environmental protection are relatively rarely invited to the table. For German AI policies to better reflect the diverse interests of civil society, a more balanced participation of organisations would hence be required.
There is equally untapped potential with regard to engaging citizens in AI policy design. In Germany, this has so far been limited to rare consultations. Countries such as Canada, Norway and the UK could serve as role models here: their AI policy design is much more strongly informed by citizens through public deliberations or co-construction workshops.
Recommendations
Involve a broader range of stakeholders in AI policy design
Social partners have been involved in the AI policy development process, primarily representing the interests of employees and employers. However, given AI’s implications for many areas of society, focusing on the economy alone is not enough. It is essential to strengthen the involvement of organisations from other areas affected by AI, including environmental and minority rights organisations, among others.
Launch an AI citizens’ assembly
In 2022, the Citizens’ Assembly (Bürgerrat) project, led by the organisation More Democracy (Mehr Demokratie), conducted a citizens’ assembly on AI in Germany (Bürgerrat, 2022[12]). To ensure that the topics and recommendations discussed in such assemblies reach policy makers, however, a federal ministry such as the BMAS could organise a citizens’ assembly on AI.
While randomly selected citizens’ assemblies are currently employed as ad hoc, irregular democratic instruments, they have the potential to serve as a permanent tool supporting political decision making. Establishing ongoing citizens’ assemblies would bolster public trust in political decision makers and foster continuous dialogue. As the OECD (2021[13]) elucidates, regularly practising deliberative democracy allows people and decision makers to build mutual trust.
In this context, Germany could consider creating a permanent citizens’ assembly dedicated to emerging technologies, specifically focusing on AI. Such an approach would contribute to sustained public engagement and enhance trust in the decision-making process. Furthermore, it would allow the federal government to expand its “common-good approach” from the development of applied AI projects to the formulation of AI policies.
Monitor public perceptions on AI regularly
It is crucial to be aware of the public’s perceptions of AI and its implementation in various societal domains so that they can be taken into account when developing AI policies. Notably, the AI Observatory (KI Observatorium) of the BMAS is already making efforts in this regard, publicly sharing the results of a 2021 monitoring initiative on its website (KI Observatorium, 2021[14]). However, given AI’s rapid advances in recent years, such monitoring should be conducted regularly, possibly under the AI Observatory’s leadership.
References
[8] bidt (2023), Autorinnen und Autoren: Das bidt-Digitalbarometer. international, Bayerisches Forschungsinstitut für Digitale Transformation, https://doi.org/10.35067/xypq-kn68.
[10] BMBF (2023), DeFaktS, Bundesministerium für Bildung und Forschung, https://www.forschung-it-sicherheit-kommunikationssysteme.de/projekte/defakts (accessed on 3 November 2023).
[11] BMBF (2023), noFake, Bundesministerium für Bildung und Forschung, https://www.forschung-it-sicherheit-kommunikationssysteme.de/projekte/nofake (accessed on 3 November 2023).
[9] BMBF (2022), “Fake News erkennen, verstehen, bekämpfen [Recognising, understanding and combating fake news]”, Bundesministerium für Bildung und Forschung, https://www.bmbf.de/bmbf/shareddocs/kurzmeldungen/de/2022/02/fake-news-bekaempfen.html (accessed on 11 December 2023).
[12] Bürgerrat (2022), “Citizens’ assembly discussed artificial intelligence”, https://www.buergerrat.de/en/news/citizens-assembly-discussed-artificial-intelligence/ (accessed on 25 January 2024).
[5] Caritas digital (2023), Projekt Civic Data Lab, https://www.caritas-digital.de/projekte/civic-data/ (accessed on 2 November 2023).
[2] Civic Coding (2023), Die Initiative, https://www.civic-coding.de/ueber-civic-coding/die-initiative (accessed on 2 November 2023).
[3] Civic Innovation Platform (2023), Projekt und Leitbild, https://www.civic-innovation.de/ueber-uns/projekt-und-leitbild (accessed on 2 November 2023).
[14] KI Observatorium (2021), KI Indikatoren - KI in Arbeit und Gesellschaft, Bundesministerium für Arbeit und Soziales, https://www.ki-observatorium.de/ki-indikatoren (accessed on 24 January 2024).
[4] KI-Ideenwerkstatt (2023), Über uns, KI-Ideenwerkstatt für Umweltschutz des Bundesministeriums für Umwelt, Naturschutz, nukleare Sicherheit und Verbraucherschutz, https://www.ki-ideenwerkstatt.de/ (accessed on 2 November 2023).
[7] KPMG (2023), Trust in Artificial Intelligence: A Global Study, https://assets.kpmg.com/content/dam/kpmg/au/pdf/2023/trust-in-ai-global-insights-2023.pdf (accessed on 31 October 2023).
[6] Lloyd’s Register Foundation (2021), World Risk Poll 2021: A Digital World, https://wrp.lrfoundation.org.uk/LRF_2021_report_a-digtial-world-ai-and-personal-data_online_version.pdf (accessed on 31 October 2023).
[13] OECD (2021), “Eight ways to institutionalise deliberative democracy”, OECD Public Governance Policy Papers, No. 12, OECD Publishing, Paris, https://doi.org/10.1787/4fcf1da5-en.
[1] ZVKI (2023), Über uns, Zentrum für vertrauenswürdige Künstliche Intelligenz, https://www.zvki.de/ueber-uns (accessed on 2 November 2023).