This chapter examines Germany’s policy and regulatory frameworks for the responsible development and deployment of artificial intelligence (AI) technologies. Several initiatives, such as sustainability and environmental programmes, transparent AI use in the workplace, and regulatory frameworks addressing security matters, show the country’s commitment to responsible and human-centred AI. Germany also has a solid legal foundation for regulatory experimentation and a proactive stance on standardisation for trustworthy AI, both domestically and internationally. Recommendations emphasise the need for a clear and integrated vision at the highest political level, increased funding for AI ambitions, and the establishment of a central team to support regulatory experimentation at the regional level.
OECD Artificial Intelligence Review of Germany
6. Policy and regulatory frameworks
Abstract
As an OECD member, Germany adheres to the OECD AI Principles included in the OECD Recommendation on AI [OECD/LEGAL/0449]. Its national AI strategy and policies demonstrate a commitment to the responsible and human-centric development and deployment of AI technologies. The country emphasises promoting inclusive growth, transparency, and accountability in AI systems. Its AI strategy supports democratic principles and trustworthy AI in the public sector, acknowledging the importance of high standards regarding non-discrimination, transparency, traceability, verifiability, fairness, participation, and data protection to uphold the public’s trust in public-sector AI use. Furthermore, the Federal Government recognises that it needs a policy and regulatory framework for the responsible and public-good-oriented development and use of AI to address risks.
Box 6.1. Policy and regulatory framework: Findings and recommendations
Findings
Germany was among the first countries to issue a national AI strategy, signalling an early commitment to human-centred AI.
Germany is implementing the OECD AI Principles.
Besides the 2018 national AI strategy and its 2020 update, Germany developed policies and initiatives across adjacent fields of work.
Germany’s budget allocated to the national AI strategy is low compared to peer countries.
Germany has clear plans and legal provisions for regulatory sandboxes. However, limited expertise at the Länder level often hinders their application.
Germany can transfer its reputation for high-quality industrial products to trustworthy AI products and services.
Recommendations
Develop a clear, agile, long-term, integrated vision of how Germany wants AI to serve societal progress and well-being, and detailed roadmaps for implementation.
Create an oversight and co-ordination body for the national AI strategy at the top political level (i.e. the Federal Chancellery) to secure coherent implementation.
Increase funding for national AI ambitions.
Establish a central team/hub to help authorities implement regulatory experimentation.
Better leverage influence on European Union (EU) regulation and standards development.
Implement specific policies to promote the trustworthy use of AI in the workplace.
The German National AI Strategy
Germany's AI strategy, launched in 2018 and backed by a EUR 3 billion budget for implementation by 2025, aims for growth and trustworthy AI development, focusing on human-centred applications supported by the Federal Ministries of Labour and Social Affairs (Bundesministerium für Arbeit und Soziales, BMAS), of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) and of Economic Affairs and Climate Action (Bundesministerium für Wirtschaft und Klimaschutz, BMWK). It targets competitiveness and integrating AI into society through societal dialogue and political action.
In 2018, Germany was among the first countries to issue a national AI strategy. The AI strategy’s overall objective is to foster growth and competitiveness, and to ensure AI’s responsible and trustworthy development. A key characteristic of the strategy is its emphasis on human-centred AI and understanding and shaping it to benefit workers and society. Given this focus, the BMAS is one of the three leading Ministries of the national AI strategy, alongside the BMBF and the BMWK.
The three main goals of the AI strategy are to: i) secure Germany’s future competitiveness while making Germany and Europe leading locations for the development and application of AI technologies; ii) ensure that AI use and development are responsible and focused on the common good; and iii) embed AI in society ethically, legally, culturally, and institutionally through broad societal dialogue and active political efforts.
One year after the strategy’s launch, in November 2019, the German Federal Government published an interim report (BMBF, 2019[1]) presenting the main measures implemented and perspectives for the following years. In December 2020, Germany updated the national AI strategy in response to recent developments (German Federal Government, 2020[2]). In particular, the COVID-19 pandemic and questions of environmental sustainability and climate protection were brought to the fore, alongside the importance of European and international collaboration.
The national AI strategy positions the German Federal Government’s intervention around five key areas: i) minds; ii) research; iii) transfer and applications; iv) regulatory framework; and v) society. The strategy also outlines new initiatives focusing on sustainability, environment/climate protection, health and pandemic control, and international/European co-operation.
The national AI strategy had an initial allocated budget of EUR 3 billion until 2025, a funding commitment increased in June 2020 by an additional EUR 2 billion through the Economic stimulus and future package (German Federal Government, 2020[3]). However, it is not clear whether the overall commitment will be enacted. To date, EUR 3.5 billion have been distributed among the federal ministries. The strategy provides neither a breakdown of allocated funds for each of the areas nor an apportionment of funds among the federal ministries. By October 2023, a total of EUR 2.8 billion had been allocated to concrete projects to support implementation.
Considering the allocated budget of EUR 3.5 billion, Germany’s funding for its AI ambitions appears to be lower than other European countries and the United States (US). On a per capita basis, German public investment in AI per year stands at EUR 6. In comparison to other European countries, Germany lags behind, as seen in France's commitment of EUR 6.6 per capita annually for its national AI strategy (Ministère de l'Économie, des Finances et de la Souveraineté industrielle et numérique, 2023[4]), and the United Kingdom (UK)’s allocation of EUR 6.5 per capita (HM Government, 2021[5]). Moreover, the US outpaces these figures, dedicating as much as EUR 7.5 per capita annually exclusively for AI research and development (Executive Office of the President, 2022[6]).
The strategy lists several measures and initiatives under each pillar but lacks an implementation roadmap detailing concrete steps, a timeframe and, with limited exceptions, targets and indicators to measure progress. However, the 2018 National Strategy clarifies that ministries are responsible for monitoring the progress of actions under their purview.
Along with the 2018 strategy and its 2020 update, Germany has developed policies and initiatives in adjacent fields. These include the National Digitalisation Strategy 2025 (BMWi, 2016[7]), the Data Strategy of the German Federal Government, updated in 2023 (Datenstrategie der Bundesregierung) (German Federal Government, 2023[8]), as well as sector legislation, such as the Mobility Data Act (Mobilitätsdatengesetz) (BMDV, 2023[9]), and legislation related to AI in the health sector (see Chapter 10). However, formal mechanisms for co-ordination are not in place for these initiatives which are all managed by different ministries.
German Länder and municipal AI strategies
Accompanying the 2018 national AI strategy and its 2020 update, Germany’s federal states (Länder) have developed measures and strategic goals in the field of AI. Five out of the 16 Länder published AI strategies, whereas the remaining 11 have defined goals and measures regarding AI within their innovation or digital strategies. Looking at specific areas of focus, 15 out of 16 Länder prioritise the topic of “transfer and applications”, displaying a significant overlap with the federal government’s ambitions. The other fields with the most overlaps are health (13/16), research (12/16) and infrastructure (10/16).
Despite these overlaps, co-operation and co-ordination between the federal government and the Länder remain limited. Several federal states have initiated dedicated state-level AI agencies or platforms, such as the Bavarian AI Agency and network, the Hessian Centre for Artificial Intelligence (hessian.AI), or the KI.NRW platform in North Rhine-Westphalia. Co-operation between these state institutions and federal ministries should be promoted and encouraged, for instance through the permanent conference of digital ministers announced in November 2023, which could be expanded to include relevant AI stakeholders from the federal level.
AI is influencing municipalities and changing the working environment of local authorities. Germany’s ten most populated cities (Berlin, Hamburg, Munich, Cologne, Frankfurt am Main, Stuttgart, Düsseldorf, Leipzig, Dortmund, Essen) have a digital strategy, and those of cities like Berlin, Cologne and Hamburg refer to the relevance of AI. None of these ten cities has developed an AI strategy, a finding that is not uncommon among European municipalities. For example, only 5 of the 26 European capitals – Amsterdam (Netherlands), Brussels (Belgium), Luxembourg (Luxembourg), Madrid (Spain), and Vienna (Austria) – have an AI strategy. Given the critical role of municipalities in citizens’ daily lives, it will be crucial for municipal authorities to be aware of the potential challenges and opportunities presented by AI, and of how they might respond. Co-ordination with state and federal authorities and responsible AI strategies can also help to ensure a cohesive national response across all levels of government.
Developing a responsible, trustworthy, and human-centric approach to AI
Germany aligns with the OECD AI Principles, emphasising human-centred AI and societal dialogue in its national strategy. Its initiatives promote AI for social good and for environmental and climate protection, seek to ensure that public-sector systems are responsibly developed, and foster workplace AI transparency through legislation. EU legislation such as the General Data Protection Regulation (GDPR) and the European Union Regulation on Artificial Intelligence (the “EU AI Act”) (EU, 2024[10]) also guides its approach.
Germany solidified commitment to and implementation of principles for trustworthy AI
The OECD AI Principles include five values-based principles for the responsible stewardship of trustworthy AI. Consistent with the EU approach of excellence and trust in AI (EC, 2024[11]), Germany has been active in shaping the EU AI Act, the world’s first comprehensive AI law. The EU AI Act aims to safeguard users’ safety and fundamental rights and to boost the AI market by raising consumer trust in AI applications. The EU AI Act establishes obligations, including on transparency, human oversight, accountability and liability, for operators1 of AI systems depending on the level of risk associated with their use. Risks are classified as unacceptable (prohibited); high (subject to conformity assessment procedures before placing an AI system on the market, as well as to post-market monitoring); limited (subject to transparency obligations); and minimal or no risk (not covered by the Regulation). The EU AI Act also introduces a tiered approach for providers of General-Purpose AI (GPAI) models, defined as AI models that can perform a wide range of distinct tasks and can be integrated into a variety of downstream systems or applications. The EU AI Act differentiates between GPAI models with potential systemic risks for society and other GPAI models. German officials say that they will issue guidance shortly after enactment to adapt and contextualise implementation in Germany. Similarly, German officials interviewed cited the EU’s proposed “AI Liability Directive” (European Parliament, 2023[12]) as relevant to Germany’s future approaches to accountability.
Like all EU members, Germany adheres to the GDPR, which includes rules on the transparent and fair processing of personal data. This regulation affects AI systems that process personal data, ensuring that individuals are informed about how their data are used.
Several initiatives at national level also illustrate Germany’s alignment to the OECD AI Principles. Consistent with the OECD values-based principle on People and Planet, Germany’s AI strategy states that AI applications must augment and support human performance. It also includes an explicit commitment to a responsible development and use of AI that serves the good of society and to a broad societal dialogue on its use. To this end, several federal initiatives promote the use of AI for social good (see Chapter 7).
Germany plans to leverage the power of AI systems and environment-related data to conduct impact assessments, ecosystem analyses or investigations of energy consumption behaviour. The newly founded Application Lab for AI Big Data will be charged with bringing together the federal government and the Länder on these topics and with developing applications in the relevant domains. In developing applications, the AI Lab places particular emphasis on the responsible handling of data and a resource-saving use of AI and big data. The federal government also initiated the AI Lighthouse programme, a funding initiative that promotes AI development for environmental, climate, nature and resource protection. Additionally, the national AI strategy details plans to launch a brand called Sustainable AI, meant to assess and rate the resource consumption of different AI systems.
Germany’s Federal Government affirms its commitment to the “human-centric design” of AI systems inside and outside of the public sector, mentioning this principle in various sections of its 2018 and 2020 strategies. Human-centric design is also a principle of the AI Observatory led by BMAS (BMAS and DenkFabrik, 2023[13]), which has an overarching focus on how AI may contribute to societal and workforce trends. One additional dimension of human-centred AI is user-orientation, a core part of the German Federal Government’s strategy to improve the efficiency of public services and make them faster and more accessible to citizens. Finally, according to German officials, Germany’s legal framework for the public sector supports human-centred AI by requiring that only humans make the final decision on anything of meaningful impact.
For public entities tasked with security, an Algorithm Assessment Centre for Authorities and Organizations with Security Tasks (Algorithmenbewertungsstelle für Behörden und Organisationen mit Sicherheitsaufgaben, ABOS) – a central body that certifies and assesses the conformity of AI systems – was called for in the national AI strategy but has not yet been formally launched (Merkur, 2023[14]).
Germany promotes the transparent use of AI in the workplace through the 2021 Works Council Modernisation Act (Betriebsrätemodernisierungsgesetz), which updated the German Works Constitution Act (Betriebsverfassungsgesetz, BetrVG). Even before the amendment, the works council had to be involved when implementing AI‑assisted information technology tools. What is new since 2021, however, is that it is easier for the works council to call in external expertise in case a company wants to use AI internally. Involving the works councils at an early stage aims to foster trust in AI technology and gain acceptance within the workforce.
In 2017, Germany established an Ethics Commission on Automated and Connected Driving at the sectoral level, which provides recommendations on the ethical aspects of autonomous driving (BMDV, 2017[15]). While it specifically addresses the automotive sector, it sets a precedent for considering ethical implications in the use of AI technologies. In 2021, Germany passed the first comprehensive national law on autonomous driving. Germany’s Automated Vehicles Bill in the Road Traffic Act and its Act Amending the Road Traffic Act and the Compulsory Insurance Act (Autonomous Driving Act) aim to ensure the robust, secure and safe use of AI. These Acts legalise automated vehicles by modifying the Road Traffic Act and define the requirements for automated vehicles on public roads.
Regulatory experimentation in AI
Germany has various initiatives aimed at increasing agility in AI governance and is developing a comprehensive legal framework for regulatory sandboxes, with the federal regulatory sandbox law expected to be effective by 2025.
Germany has clear plans and legal provisions for regulatory experimentation
Since 2017, the BMAS has supported the launch of “company-level spaces for learning and experimentation” (Lern- und Experimentierräume), an initiative expanded in 2019 to include a specific component focused on AI. Its predominant focus is to give small and medium-sized enterprises (SMEs) access to innovative technologies, but the programme is also open to agencies and entities in public administration. The 2020 update of the AI strategy explicitly refers to this model and its use by both companies and the public service to develop innovative technical solutions.
In addition, the national AI strategy called for providing regulatory sandboxes (Reallabore) for new projects and AI systems. Regulatory experimentation in AI, especially through sandboxes, can contribute to increasing agility in AI governance. For example, AI innovators and regulators are able to test new products safely. The EU AI Act also proposes to use regulatory sandboxes to test and validate AI systems before they go onto the market. Germany already has several initiatives regarding regulatory sandboxes. These include:
a handbook, i.e. a manual for the design, implementation, and evaluation of regulatory sandboxes (BMWK, 2019[16])
a guide for formulating experimentation clauses for law makers (BMWi, 2020[17])
a practical guide to data protection for regulatory sandboxes (BMWi, 2021[18])
an inter-ministerial working group on regulatory sandboxes
a federal-Länder working group on regulatory sandboxes
a cross-cutting regulatory sandbox network with more than 1 000 members
workshops, provision of information, establishment of contact persons
regulatory sandbox competitions (“innovation prize”).
There are also several legal provisions for national regulatory experimentation (e.g. for autonomous driving, digital identities, or drone traffic management). Germany is currently in the process of developing a comprehensive legal framework for regulatory sandboxes. Set as a goal in the 2021 Coalition Treaty (German Federal Government, 2021[19]), the federal regulatory sandbox law is expected to be effective by 2025.
A green book launched on 10 July 2023 is part of this initiative. It includes a set of proposals and questions pertaining to four essential elements (BMWK, 2023[20]). The proposals aim to: i) introduce new legal possibilities and experimentation clauses designed for regulatory sandboxes in key innovation areas; ii) establish overarching standards to govern regulatory sandboxes; iii) implement an “experimentation clause check” within legislative frameworks; and iv) create a one-stop shop to streamline the regulatory sandbox process. A public consultation ran from 10 July to 29 September 2023 to give stakeholders an opportunity for engagement and feedback on these proposals.
Germany plays an important role in developing the legal basis for regulatory experimentation on the European level. In Europe, the Council Conclusions on Regulatory Sandboxes and Experimentation Clauses were adopted in November 2020 under the German Council Presidency (EU, 2020[21]).
As this diversity of efforts demonstrates, Germany has clear plans and legal provisions for regulatory sandboxes. The country has set up regulatory sandboxes in the field of automated driving. Furthermore, a regulatory sandbox in Hamburg ran for seven months and offered a testbed for an autonomous delivery robot. Other countries have set up AI sandboxes for several applications. For instance, Spain created an AI regulatory sandbox in 2022 as the first pilot programme to test the EU AI Act. The UK launched two regulatory sandboxes through the Financial Conduct Authority (FCA) and the Information Commissioner’s Office (ICO). The FCA Sandbox (2016) focuses on financial technology while also admitting AI-related solutions applied in the financial sector. Inspired by the ICO regulatory sandbox, the Norwegian Data Protection Authority (Datatilsynet) Regulatory Sandbox (2020) aims to promote ethical, privacy-friendly and responsible innovation within AI (OECD, 2023[22]).
Under Germany’s federal structure, some areas, including the implementation of federal legislation, are the purview of the Länder rather than the federal government (Box 6.2). Legal competencies to implement sandboxes lie with the Länder, i.e. a competent authority at the Länder level needs to grant a derogation to applicants. However, limited expertise at the Länder level often hinders the applicability of these provisions. To address this issue, Germany could finance a central team/hub that provides support to the competent authorities. Another option would be to centralise the competencies for regulatory experimentation at the federal level.
Box 6.2. Länder in the German federal system
According to Article 20 of German Basic Law, Germany is a federal system. Every state (Land) shares responsibilities with the federal government and the municipalities:
The exercise of state powers and the discharge of state functions (especially administrative tasks) is a matter for the Länder; they are thus responsible for implementing federal legislation.
Federal and regional powers can overlap in areas such as justice, social welfare, civil, criminal, labour, or economic law. If in conflict, federal law takes precedence.
Länder have exclusive legislative powers regarding culture, education, universities, local authority matters, and the police.
Standardisation activities in AI
Germany is creating standards for trustworthy AI, extending its reputation for quality to AI products and services and aiming to shape EU standardisation. The AI trust label and the CERTAIN programme for trusted AI techniques also contribute to shaping “trustworthy AI Made in Germany”.
Germany can carry over its industry’s reputation for high-quality goods to AI products and services
Germany is working on standardisation for trustworthy AI. The German Standardisation Roadmap for Artificial Intelligence is a unique multistakeholder endeavour and an excellent step to guide domestic efforts and position Germany in the international standards ecosystem. However, the Roadmap seems to lack clear and actionable objectives and commitment from engaged parties (DIN, 2023[23]).
Germany is also engaged in international AI governance and standardisation bodies. A German AI expert from the Association for Electrical, Electronic & Information Technologies (Verband der Elektrotechnik Elektronik Informationstechnik, VDE) chairs the European Committee for Standardization (CEN) and European Committee for Electrotechnical Standardization (CENELEC) Joint Technical Committee 21 (JTC21) on Artificial Intelligence, which was mandated (together with the European standardisation organisation ETSI) to develop harmonised European standards to implement the EU AI Act.
Germany has an opportunity to carry over its industry’s reputation for high-quality goods to AI products and services and spearhead AI standard setting within the European Union. Germany could show compliance with the EU AI Act and make trustworthy AI a crucial factor for competitiveness. In this regard, German companies have already set up an AI trust label in co-operation with VDE and major French companies (VDE, 2022[24]). The objective is to promote trustworthy AI that can provide the German industry with a competitive advantage while, at the same time, helping to promote AI uptake among firms.
In September 2023, the consortium Centre for European Research in Trusted AI (CERTAIN), legally part of DFKI, launched another initiative contributing to this approach: a national implementation programme called Trusted AI (DFKI, 2023[25]). CERTAIN focuses on researching, developing, deploying, standardising and promoting trusted AI techniques, with the aim of providing guarantees for and certification of AI systems.
Recommendations
Articulate a strategic vision for AI to address Germany’s most pressing challenges
AI policy action requires political commitment, a clear vision, and effective co-operation and collaboration mechanisms. Interview participants voiced concern that, despite actions undertaken since 2018 to foster AI development in the country, Germany seems to have no strategic vision or leadership for the direction the country should take regarding AI. To address this, Germany should establish an agile, long-term, and integrated vision of how German society wants AI to serve progress and well-being, and detailed roadmaps for implementation. An international example is the National Strategy for AI in Health and Social Care, currently being developed by the National Health Service (NHS) AI Lab in the UK within the context of the National AI Strategy, which will set the direction for AI in health and social care up to 2030 (NHS, 2023[26]).
Lead and co-ordinate AI policy design and implementation at the highest political level, and directly link Germany’s AI, data, and digitalisation policies
Oversight, co-ordination, and adequate resources at the top political level (i.e. the Federal Chancellery) are required to implement the national AI strategy and generate synergies with Germany’s digitalisation and data strategies. Promising practices elsewhere include the United Kingdom Government Office for AI – a unit of the Department for Science, Innovation and Technology responsible for overseeing the implementation of the National AI Strategy – and the US National Artificial Intelligence Initiative Office (NAIIO), located in the White House Office of Science and Technology Policy and mandated to co-ordinate and support the National AI Initiative Act. Further examples include the Secretary of State for Digitalisation and AI in Spain (within the newly created Spanish Ministry for the Digital Transformation and the Civil Service, which has broad competencies related to telecommunications, the information society, digital transformation, and the development and promotion of AI), and the Ministry of AI in the United Arab Emirates.
Increase funding for AI ambitions
To safeguard technological independence in AI, vital for the German economy and society, interviewed experts concurred that substantial investments are imperative at both the national and EU levels. Stakeholders have advocated for these investments to be on par with those made in the People’s Republic of China and the US (Humboldt Foundation, 2023[27]).
Establish a central team/hub to help authorities foster expertise in regulatory experimentation at the Länder level
Germany set up a comprehensive and solid national legislative framework for regulatory experimentation. However, the implementation of sandboxes lies with authorities at the regional level. While some Länder have the competence to set up, oversee, and evaluate sandboxes, most lack such expertise. To address this, Germany could finance a central team to provide support to the competent Länder authorities. Another option would be to centralise the competencies for regulatory experimentation at the federal level.
Better leverage influence on EU regulation and standards development
There is an opportunity for Germany to carry over its industry’s reputation for high-quality goods to AI products and services and to spearhead AI standard-setting within the European Union. Germany should ensure that it continues to be well represented in European and global AI standardisation activities, leveraging existing key positions such as the current chairmanship of the CEN-CENELEC committee by a German expert from VDE.
Implement specific policies to promote the trustworthy use of AI in the workplace
Beyond actions to invest in training and social dialogue, Germany will need to address the risks that AI used in the workplace can pose to the rights and safety of workers.
Existing anti-discrimination legislation and regulation on occupational safety and health, data protection, transparency, and freedom of association – while not specific to AI – provide a framework to address related risks. Monitoring the relevant case law will allow Germany to determine whether this regulation needs to be adapted in light of the use of AI.
The EU AI Act, the Product Liability Directive and the AI Liability Directive will include provisions to ensure accountability, but Germany will need to implement measures specific to workplace uses.
References
[13] BMAS and DenkFabrik (2023), Observatorium Künstliche Intelligenz in Arbeit und Gesellschaft, https://www.ki-observatorium.de/en.
[1] BMBF (2019), Interim Report: One Year of AI Strategy, Bundesministerium für Bildung und Forschung, https://www.bmbf.de/bmbf/shareddocs/downloads/files/zwischenbericht-ki-strategie_final.pdf (accessed on 11 December 2023).
[9] BMDV (2023), “Veröffentlichung Eckpunkte Mobilitätsdatengesetz”, Bundesministerium für Digitales und Verkehr, https://bmdv.bund.de/DE/Themen/Digitales/Digitale-Gesellschaft/Eckpunkte-Mobilitaetsdatengesetz/eckpunkte-mobilitaetsdatengesetz_node.html (accessed on 18 October 2023).
[15] BMDV (2017), Automated and Connected Driving, Federal Ministry of Transport and Digital Infrastructure, https://bmdv.bund.de/SharedDocs/EN/publications/report-ethics-commission-automated-and-connected-driving.pdf?__blob=publicationFile (accessed on 11 December 2023).
[18] BMWi (2021), Praxishilfe zum Datenschutz in Reallaboren, Bundesministerium für Wirtschaft und Energie, https://www.bmwk.de/Redaktion/DE/Publikationen/Digitale-Welt/praxishilfe-zum-datenschutz-in-reallaboren.pdf?__blob=publicationFile&v=1 (accessed on 18 October 2023).
[17] BMWi (2020), Recht flexibel, Bundesministerium für Wirtschaft und Energie, https://www.bmwk.de/Redaktion/DE/Publikationen/Digitale-Welt/recht-flexibel-arbeitshilfe-experimentierklauseln.pdf?__blob=publicationFile&v=1 (accessed in October 2023).
[7] BMWi (2016), Digitale Strategie 2025, Bundesministerium für Wirtschaft und Energie, https://www.bmwk.de/Redaktion/DE/Publikationen/Digitale-Welt/digitale-strategie-2025.pdf?__blob=publicationFile&v=1 (accessed on 24 October 2023).
[20] BMWK (2023), Grünbuch Reallabore, Bundesministerium für Wirtschaft und Klimaschutz, https://www.bmwk.de/Redaktion/DE/Downloads/G/gruenbuch-reallabore.pdf?__blob=publicationFile&v=10 (accessed on 18 October 2023).
[16] BMWK (2019), Making Space for Innovation - The Handbook for Regulatory Sandboxes, Federal Ministry for Economic Affairs and Energy, https://www.bmwk.de/Redaktion/EN/Publikationen/Digitale-Welt/handbook-regulatory-sandboxes.pdf%3F__blob%3DpublicationFile%26v%3D2 (accessed on 18 October 2023).
[25] DFKI (2023), CERTAIN: The European Centre for Trusted AI starts with a kick-off celebration on September 19, 2023, https://www.dfki.de/en/web/news/certain-european-centre-for-trusted-ai-starts-with-kick-off-celebration-on-september-19-2023 (accessed on 11 December 2023).
[23] DIN (2023), Second Edition of the German Standardization Roadmap AI, Deutsches Institut für Normung e.V., https://www.din.de/en/innovation-and-research/artificial-intelligence/ai-roadmap (accessed on 18 October 2023).
[11] EC (2024), European Approach to Artificial Intelligence, European Commission, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (accessed on 11 December 2023).
[28] EC (2021), Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence and amending certain Union Legislative Acts (Artificial Intelligence Act), European Commission, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (accessed on 28 March 2023).
[10] EU (2024), Regulation (EU) 2024/ ...... of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), https://data.consilium.europa.eu/doc/document/PE-24-2024-INIT/en/pdf.
[21] EU (2020), Council Conclusions on Regulatory Sandboxes and Experimentation Clauses as Tools for an Innovation-friendly, Future-proof and Resilient Regulatory Framework that Masters Disruptive Challenges in the Digital Age, Council of the European Union, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020XG1223(01) (accessed on 18 October 2023).
[12] European Parliament (2023), Briefing - EU Legislation in Progress: Artificial Intelligence Liability Directive, https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf.
[6] Executive Office of the President (2022), Supplement to the President’s FY 2023 Budget, National Science and Technology Council, https://www.nitrd.gov/pubs/FY2023-NITRD-NAIIO-Supplement.pdf.
[8] German Federal Government (2023), Weiterentwicklung der Datenstrategie - Fortschritt dank besserer Daten, https://www.bundesregierung.de/breg-de/themen/digitaler-aufbruch/datenstrategie-2023-2216620 (accessed on 18 October 2023).
[19] German Federal Government (2021), Mehr Fortschritt wagen - Bündnis für Freiheit, Gerechtigkeit und Nachhaltigkeit, https://www.bundesregierung.de/resource/blob/974430/1990812/1f422c60505b6a88f8f3b3b5b8720bd4/2021-12-10-koav2021-data.pdf?download=1 (accessed on 18 October 2023).
[3] German Federal Government (2020), “Economic stimulus package: An ambitious programme”, https://www.bundesregierung.de/breg-en/news/konjunkturpaket-1757640 (accessed on 11 December 2023).
[2] German Federal Government (2020), Strategie Künstliche Intelligenz der Bundesregierung - Fortschreibung 2020, https://www.ki-strategie-deutschland.de/files/downloads/201201_Fortschreibung_KI-Strategie.pdf (accessed on 11 October 2023).
[5] HM Government (2021), National AI Strategy, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1020402/National_AI_Strategy_-_PDF_version.pdf.
[27] Humboldt Foundation (2023), Sieben Empfehlungen zur Künstlichen Intelligenz (KI) an die Deutsche Bundesregierung, https://www.humboldt-foundation.de/fileadmin/Bewerben/Programme/Alexander-von-Humboldt-Professur/Positionspapier_zur_Kuenstlichen_Intelligenz_Recommendations_on_AI.pdf (accessed in October 2023).
[14] Merkur (2023), “Coalition wants to make AI applications in administration possible”, https://www.merkur.de/politik/koalition-will-ki-anwendungen-in-verwaltung-moeglich-machen-zr-92489210.html (accessed on 11 December 2023).
[4] Ministère de l’Économie, des Finances et de la Souveraineté industrielle et numérique (2023), La Stratégie Nationale pour l’Intelligence Artificielle, https://www.entreprises.gouv.fr/fr/numerique/enjeux/la-strategie-nationale-pour-l-ia.
[26] NHS (2023), The National Strategy for AI in Health and Social Care, NHS England, https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/.
[22] OECD (2023), “Regulatory sandboxes in artificial intelligence”, OECD Digital Economy Papers, No. 356, OECD Publishing, Paris, https://doi.org/10.1787/8f80a0e6-en.
[29] UK Government (2021), National AI Strategy, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1020402/National_AI_Strategy_-_PDF_version.pdf (accessed on 11 December 2023).
[24] VDE (2022), “Kann Künstliche Intelligenz wertekonform sein? VDE SPEC als Grundlage künftiger Entwicklungen”, Verband der Elektrotechnik Elektronik Informationstechnik e.V., https://www.vde.com/de/presse/pressemitteilungen/ai-trust-label (accessed on 18 October 2023).
Note
1. “Operator” means the provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor.