This chapter outlines key opportunities for Germany to advance artificial intelligence (AI) capabilities in health care. Germany has taken meaningful action to advance AI in health, including legislative measures aimed at boosting the integration of digital health technologies and actively promoting AI applications in medicine. Despite public optimism about AI’s potential to improve patient experiences and reduce the workload of the health workforce, Germany faces challenges in enabling AI in health due to fragmented health data and restrictive data privacy and security practices. Germany has started its journey into AI in health and should continue work to establish a comprehensive health data governance and interoperability strategy, with legislation that enables innovation while providing appropriate protections. Such a strategy will be vital for building a national health information framework that fosters timely access to quality data, enabled by collaboration among all key stakeholders, to benefit Germany and everyone living in Germany.
OECD Artificial Intelligence Review of Germany
10. Spotlight: AI and healthcare
Abstract
AI has the potential to save lives, helping health professionals dedicate more time to care, and improving public health and safety (OECD, 2024[1]). However, these benefits are limited in reach due to a fragmented policy, data, and technology foundation. This is true in Germany and many other countries.
Germany’s 2018 AI Strategy reflected the imperative for action on AI in health as it identified opportunities to improve health outcomes, support nursing, and drive innovation. This was re-articulated in the 2020 update to the AI Strategy.
Germany is taking action to build a stronger policy, data, and technical foundation for AI, reflecting the 2018/20 AI Strategy. Developing and training AI applications requires access (policy) to large, high-quality, and detailed datasets (data) while ensuring the security of these data (technology). Developing AI solutions also requires effective stewardship of the millions of personal health records that consolidate information across populations and organisations.
Box 10.1. AI and healthcare: Findings and recommendations
Findings
Broad-based support for acts related to health data and digital tools (GDNG, Digi-G, upcoming act restructuring gematik) will strengthen foundations for AI in health in Germany.
Cautious interpretation of data protection legislation is hampering the ability to innovate with AI.
Poor interoperability is due to lack of accountability, trust, and incentives.
The public and health providers believe AI will benefit health outcomes and systems, although there are differences by age.
Recommendations
Continue with legislation and policy re-design.
Develop guidance for secondary-use access to health data that supports development of AI and protects citizens and respects privacy rights.
Establish a health-data governance and interoperability strategy and framework with accountability, a roadmap, measurements, financial levers and oversight.
Involve the public and health providers in the development of AI solutions, design of controls and oversight mechanisms for trust.
Germany’s journey to health in the digital age
Germany is advancing its digital health ecosystem, focusing on patient-centred care and leveraging AI for clinical, administrative, and research improvements. The Government's strategy includes substantial funding for diverse health-related AI projects and legislation to enhance data availability and use. These efforts aim to improve health outcomes, system efficiency, and innovation, aligning with the European Union (EU)'s health data space for better cross-border data collaboration.
Germany, an OECD country with higher-than-average health spending (OECD, 2022[2]), has made strides in health digitalisation, yet gaps remain. Almost one-fourth (23%) of residents used teleconsultation at the height of the COVID-19 pandemic, well below the EU27 average of 39% (OECD/EU, 2022[3]). Germany acknowledges that it lags in its digital transformation efforts. In the 2022 OECD report on the Recommendation on Health Data Governance, Germany ranked 18th among the 23 responding OECD countries for dataset governance and last (23rd) for data linking (OECD, 2016[4]). As reported at a conference in June 2023, Germany would become the 18th country in Europe to adopt e‑prescription services nationwide, effective 1 January 2024. Germany also began preparing for the European Health Data Space, which aims to simplify cross-border data collaboration and improve the portability of personal health records, later than many of its European peers.
Germany has taken proactive measures to address these areas. Based on a broad stakeholder consultation process (more than 500 actors), the Federal Ministry of Health (Bundesministerium für Gesundheit, BMG) developed a strategy for health digitalisation in 2023 (BMG, 2023[5]). The strategy aims to facilitate a people-centric and learning digital health ecosystem that ensures the well-being of patients (Gerlach et al., 2021[6]).
Germany is an international pioneer in the structured assessment and reimbursement of patient-centred digital health applications. The procedure for assessing eligibility for reimbursement is open to development, including for AI-based applications. To support research and innovation in this field, the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) offers a range of funding schemes:
Medical informatics initiative
Digital Hubs: Advances in Research and Healthcare
National Research Data Infrastructure
Funding Line Computational Life Sciences
Funding Line Modelling Network for Serious Infectious Diseases
Funding Line Data Analysis and Data Sharing for Cancer Research.
In addition to enhancing digitalisation for improved information sharing and individual empowerment, the German government has a parallel objective to promote AI. To this end, the German Federal Government devised a strategy for AI usage in 2018 “to safeguard Germany’s outstanding position as a research centre, to build up the competitiveness of German industry, and to promote the many ways to use AI in all parts of society” (German Federal Government, 2018[7]).
AI in a digitalised health system – developed and adopted responsibly – provides the conditions for improved clinical care, healthcare system efficiency, protection from public health emergencies, medical research, and burgeoning innovation. AI in health can be applied to biomedicine (precision medicine, drug discovery, matching individuals to clinical trials, prediction, and prevention), administration (scheduling, billing, coding, managing workflow and payment), and clinical practice (clinical audits, diagnostic imaging interpretation for radiology, remote-controlled robotic surgery, developing personalised treatment plans) (Oliveira Hashiguchi, Slawomirski and Oderkirk, 2021[8]). Germany is actively pursuing several projects to increase AI use in medicine, including in radiology to decrease exposure to radiation (Nensa, Demircioglu and Rischpler, 2019[9]), as well as supporting the development and adoption of health applications (Lantzsch et al., 2022[10]). The BMG has funded, and continues to fund, a total of 38 projects between 2020 and 2025. These projects underline the wide range of possible applications and uses of AI in healthcare and show how innovative technologies can be used to further develop research and patient care.
To help accelerate the better use of health data in Germany, including its role in AI applications, the German Federal Government has plans for legislative measures aimed at enhancing the health data ecosystem:
The Act on Health Data Use (Gesundheitsdatennutzungsgesetz, GDNG) aims to promote the common use of health data by creating the basis for better availability of health data and paving the way for the European Health Data Space (EHDS).
Another critical piece of legislation will be the Registry Act (Registergesetz), for which the first official draft is expected in the first half of 2024. It aims to strengthen and regulate medical registries and to enhance transparency through the establishment of a centre for medical registries. The centre would keep a register directory, offering an overview of the master and process data of these registries. The legislation seeks to promote treatment reviews, research, and care by improving data usability and accessibility, and by supporting medical registries in developing their quality and utility.
When implemented, these measures will collectively help improve the accessibility and linkability of data across Germany (and across Europe via the EHDS). This will be accomplished in part with the health data hub (Forschungsdatenzentrum Gesundheit). The better use of health data will help improve health outcomes for individuals, enable health system-level insights for population health and safety, support preparedness for public health emergencies, and drive research and innovation for long-term systemic improvements.
The overall goal is to provide patients with the highest quality of care, including new, innovative medical technology applications. Achieving this goal requires the best possible knowledge from medical research which, in modern medicine, involves the use of AI.
Public and healthcare provider perspectives
In Germany, attitudes towards AI's potential in healthcare are optimistic, with many recognising its benefits in diagnosis and disease detection when accompanied by human oversight. Despite some fears, there is an emphasis on the need for inclusive digital literacy to prevent a divide. Healthcare workers view AI as an augmentation tool rather than a replacement, advocating for its application to administrative tasks to enhance efficiency and patient experiences. At the same time, they favour a cautious approach to adoption owing to concerns about trust and value.
Several recent surveys have gauged the opinions of Europeans on AI, including people living in Germany (see Chapter 6). There is more optimism among people living in Germany about the use of AI in health than about other AI applications: 80% of respondents believe there are good or balanced opportunities for AI in disease detection (bidt, 2023[11]), and 85% believe that AI would have benefits in diagnosis, with a large majority preferring a human intermediary between the AI and the patient. These were perceived as the most beneficial opportunities for AI. The report (bidt, 2023[11]) cautioned: “In a country comparison, there is a relatively large digital divide among the population in Germany”. People with limited digital literacy face the risk of being left behind. For Germany to keep pace internationally with digitalisation, and not fall behind economically and socially, the backlogs in the identified problem areas must be made up as quickly as possible and existing differences in skills among the population must be mitigated.
A second survey demonstrated similar positive sentiment towards AI among the public with 81% perceiving AI as an opportunity, 70% believing that doctors should be supported by AI, and 87% acknowledging the need for regulation. It is also important to note that 23% express fear of AI, although the survey did not go into more detail (Wintergerst, 2023[12]).
On the broader topic of digitalisation, some groups may have overemphasised the public's demands for privacy and security. In practice, patients have repeatedly shown positivity towards digital infrastructure (Schmitt, Haarmann and Shaikh, 2022[13]; Heidel and Hagist, 2020[14]). The same sentiment of acceptance towards digitalisation is echoed among service providers.
Discussions with health workers revealed a pragmatic view that their jobs are unlikely to be replaced by advances in AI. Nevertheless, there is a perception that adoption of digital tools connected across a broader network of health organisations has been slow. Reasons cited for the slow adoption include: i) a lack of trust due to early missteps in the implementation of digital tools; ii) concerns about a loss of autonomy in the role of health providers; and iii) physicians not seeing the value of investing their time to create better outcomes for providers and their patients.
To that end, interviewees noted that valuable areas for AI implementation would include functions that reduce the workload for healthcare providers and improve the patient experience (e.g. appointment booking, clinical documentation, and invoicing), which would in turn improve the adoption of AI in clinical settings. These are also lower-risk applications of AI, as clinical outcomes are not directly impacted. Pursuing them would require a shift of focus from senior decision makers, funders, and innovators, whose apparent focus is on advanced clinical care such as diagnostics, robotics, and genomics.
Barriers to adoption of AI for health in Germany
Germany's approach to integrating AI in healthcare faces challenges, including cautious interpretation of data protection laws and fragmented health data systems. While initiatives like the Act on Health Data Use aim to improve data availability, practical issues such as data silos and varying state laws complicate data sharing and AI development. To fully utilise AI, Germany must balance data protection with the need to use health data to improve care and enhance data interoperability and public trust in AI solutions.
As Germany embarks on its AI for health journey, it faces several barriers to harnessing the potential of health data and AI. While the legislation noted above (Registergesetz, GDNG) will help, additional efforts are needed to identify and resolve long-standing barriers that are preventing the development and use of AI for health. Specifically, the progress of AI for health is hindered by factors that challenge the ability to scale innovations across health care organisations. Scaling AI relies on: i) data access with protections; ii) data interoperability; iii) trust from impacted stakeholders – notably providers and the public; and iv) sufficient human and computing capacity to develop, deploy, operate, and sustain AI solutions.
Cautious interpretation of data protection legislation
Data privacy and security legislation is designed to protect patient information. This legislation is grounded in EU regulations such as the General Data Protection Regulation and in the fundamental right to informational self-determination enshrined in the German constitution. Germany has developed, and is in the process of updating, federal data protection laws. In addition, the Länder have developed state-specific laws regarding health data (Schmitt, 2023[15]). In general, these laws set guidelines for health data access and use, including necessary controls and obligations on the part of data holders, intermediaries, and users.
The current balance between enabling the positive goals of health data research and avoiding associated data protection risks is reportedly skewed towards risk avoidance. This makes reaching the goals of health data research and other secondary data uses extremely difficult. This is important because “it is widely recognised that there is an ethical imperative to use health data to improve care” (McLennan et al., 2022[16]). Current interpretations of data protection also create a problematic conflict with Germany’s ambitions to be a leader in AI (McLennan et al., 2022[16]).
The result is that health-related datasets in Germany often remain isolated in silos, making them unavailable for secondary use. Accessing comprehensive health data may therefore be more challenging than in other countries. This challenge stems from cautious interpretations of data privacy and security regulation (anecdotally related to historic concerns over data misuse causing harm), a multiplicity of regulations and decision makers governing privacy and access across German states, and a lack of coherent technical standards. While acts such as the GDNG and Digi-G signal positive advancements, along with the activities to participate in the EHDS, it is important to engage stakeholders so that their actions bring the intended outcomes of the acts and the EHDS into being.
Regulations in health are particularly important because health care involves highly sensitive data that, if mismanaged, can have major negative consequences. For instance, privacy breaches resulting from data sharing can cause emotional harm and financial implications, among other challenges. Conversely, non‑data sharing may result in poor-quality care, duplicative health services, and systemic health inequities. Both perspectives should be considered when making decisions about data access and use to minimise harms and optimise health outcomes. This is especially important for AI, given that timely access to quality representative data is essential for its effectiveness (Box 10.2).
Box 10.2. AI diagnostics and the importance of training data
In AI diagnostics, common drawbacks are an inability to scale and inaccuracies across different populations due to a lack of access to comprehensive datasets. Training AI applications requires extensive patient datasets, but the use of such data can inadvertently introduce biases, rendering the results less applicable to specific population sub-groups and prompting concerns about their appropriateness. For example: “Among women with breast cancer, Black women had a lower likelihood of being tested for high-risk germline mutations compared with white women, despite carrying a similar risk of such mutations. Thus, an AI algorithm that depends on genetic test results is more likely to mischaracterise the risk of breast cancer for black patients than white patients” (Parikh, Teeple and Navathe, 2019[17]).
Another popular example to highlight the issue is ‘Watson for Oncology’, which, due to a lack of training data, experienced a downturn in accuracy (O’Leary, 2022[18]).
An AI tool produced by Google DeepMind showed promising results in the early prediction of acute kidney injury. The AI system was trained on data from 703 782 adult patients of the United States (US) Department of Veterans Affairs. However, the dataset was predominantly male (94%), raising concerns among other researchers about its representativeness and, thus, the generalisability and accuracy of predictions across other populations. Cao et al. (2022[19]) evaluated the model’s performance on the female veteran population and found that it performed worse for females than for males. These results underscore the imperative of training AI systems with quality, interoperable, and diverse data to create models that are accurate and sensitive across population groups, which in turn enables reliable risk stratification.
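The kind of subgroup evaluation performed by Cao et al. can be illustrated with a minimal sketch (all records, labels, and field names below are invented for illustration; this is not the DeepMind model or its data): compute a performance metric separately for each demographic group and compare.

```python
# Hypothetical sketch: auditing a model's predictions by subgroup.
# The records below are invented; in practice, predictions would come
# from the trained model and labels from held-out clinical data.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that agree."""
    return sum(1 for p, a in pairs if p == a) / len(pairs)

# Each record: (sex, predicted_outcome, actual_outcome) -- invented values.
records = [
    ("male", 1, 1), ("male", 0, 0), ("male", 1, 1), ("male", 0, 0),
    ("female", 1, 0), ("female", 0, 1), ("female", 1, 1), ("female", 0, 0),
]

by_group = {}
for sex, pred, actual in records:
    by_group.setdefault(sex, []).append((pred, actual))

audit = {sex: accuracy(pairs) for sex, pairs in by_group.items()}
# With these invented records, the audit surfaces a performance gap
# between groups -- the pattern the Cao et al. evaluation exposed.
print(audit)
```

A gap between the per-group scores, rather than a single aggregate figure, is what signals that a model trained on an unrepresentative dataset may be unsafe for under-represented populations.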
Timely access to quality data is the catalyst to develop and scale accurate AI systems and models. It is notable that the BMBF is funding interdisciplinary projects to develop new approaches for data analysis and data sharing for cancer research including the development of training data sets to be provided to the broader research community.
Source: Parikh, R., S. Teeple and A. Navathe (2019[17]), “Addressing bias in artificial intelligence in health care”, https://doi.org/10.1001/jama.2019.18058; O’Leary, L. (2022[18]), “How IBM’s Watson went from the future of health care to sold off for parts”, https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html (accessed on 7 November 2023); Cao, J. et al. (2022[19]), “Generalizability of an acute kidney injury prediction model across health systems”, https://doi.org/10.1038/s42256-022-00563-8.
While there are clauses within the legislation that allow data access in public-good scenarios, what constitutes use in the public good is not defined, even in guidelines. This uncertainty often causes decisions that err on the side of caution: requests are either denied, or fulfilled with a level of data aggregation so high that the data are unusable for secondary use and AI applications.
Without addressing this barrier, it could be considered unethical to invest significant amounts of public funds into AI development while limiting data access through strict privacy measures, as this constitutes an inefficient use of public resources. The AI revolution in healthcare can only realise its full potential if a transparent process spells out the values underlying national data governance policies and their impact on AI development and priorities are set accordingly (Bak et al., 2022[20]).
Fragmented data interoperability without guardrails for progress
Disparities in data collection and data standards between stakeholders and across German states are making it difficult to develop AI solutions and integrate them into the German health system. From the researchers’ perspective, fragmented decision structures lead to administrative burdens that decrease the effectiveness and productivity of resources with excess time spent on data acquisition, assessment of data quality, and data management to normalise the data for use in AI systems.
Without clear accountability and guardrails, individual health organisations (or states) develop their own standards to deliver their projects, without considering the projects’ contribution to the broader health ecosystem. This lack of interoperability may cause inefficiencies and harms in several ways. Providers may struggle to obtain the information they need to deliver quality care to people who receive care from multiple health facilities, leading to over-testing, missed diagnoses as patients fall between the cracks, or harmful results when potential drug interactions go unnoticed. Researchers and innovators must invest time acquiring and cleansing data to make them useful for their purposes. Public health professionals cannot respond quickly and precisely to public health emergencies. Health ministries across German states and at the federal level are challenged to generate timely evidence-based decisions and to monitor health policy effectiveness.
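The data-cleansing burden created by organisation-specific standards can be made concrete with a minimal sketch (the facilities, field names, and code mappings are invented for illustration): records arriving from two sources in different local formats must be normalised into one common schema before any linkage or AI training can begin.

```python
# Minimal sketch of harmonising records from two facilities that use
# different local conventions. All field names and codes are invented.

from datetime import datetime

def from_facility_a(rec):
    # Facility A: dates as DD.MM.YYYY, sex coded "m"/"w" (German convention).
    sex_map = {"m": "male", "w": "female"}
    return {
        "patient_id": rec["pid"],
        "birth_date": datetime.strptime(rec["geb"], "%d.%m.%Y").date().isoformat(),
        "sex": sex_map[rec["geschlecht"]],
    }

def from_facility_b(rec):
    # Facility B: ISO dates, sex already spelled out in English.
    return {
        "patient_id": rec["id"],
        "birth_date": rec["dob"],
        "sex": rec["sex"],
    }

harmonised = [
    from_facility_a({"pid": "A-17", "geb": "03.05.1962", "geschlecht": "w"}),
    from_facility_b({"id": "B-42", "dob": "1975-11-20", "sex": "male"}),
]
# Both records now share one schema and can be linked or analysed together.
print(harmonised)
```

Every additional local convention requires another such adapter; shared interoperability standards eliminate this per-source translation work at the outset.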
Measuring interoperability across organisations, from the perspectives of both data exchange and the policy environment, can expose where end-to-end processes take too long or cost too much. Without a policy, data, and technical approach to interoperability, the process to access data can far exceed necessary timelines and appropriate costs. Such a lack of interoperability across policy and data hinders innovation; other jurisdictions have encountered similar challenges (Box 10.3).
Progress is being made in Germany. The BMBF is taking action by funding the Medical Informatics Initiative to improve interoperable data exchange for medical research. The intention is to establish a common data infrastructure across all university hospitals.
Box 10.3. Incremental policy and data development can lead to excess cost and time
Health innovation is an economic driver. Fragmented policies designed without consideration for the overall objectives of health systems can prevent or impair innovation.
In 2018 the province of Ontario in Canada found that innovation had stalled despite its world-leading reputation and rich health data assets. Innovators complained that they could not get access to patient health data despite receiving expressed consent from the patients who wanted to benefit from their innovation. A task force examined the problem and found that the end-to-end process to granting data access was complex, ineffective and inefficient to support innovation and better personal health outcomes.
Over the span of many years, organisational, personnel, and priority changes had created what was referred to as a ‘Franken-process’ for granting access to health data for innovation. A team mapped the process, including which procedures needed to be followed (e.g. privacy impact assessments and threat risk assessments), where hand-offs occurred between organisations, and the estimated duration of each step.
The analysis showed that the process had ballooned to involve ten different approvals across three different legal entities in more than fifty process steps. Overall, there were 40 different hand-offs across organisations for the end-to-end data access process to work. Looking at the average time for each step, it was determined that for an innovator to gain access to a data asset, it would take a minimum of 18 months and cost the innovator CAD 50 000.
The overall process had evolved in idiosyncratic pieces, with each organisation ensuring that its local risks were appropriately and fully mitigated. While each organisation intended to mitigate potential privacy and security harms, the result was actual harm: patients could not receive the benefits of innovation because of a lack of coherence across the overall process. A process that was reasonable for each individual organisation failed to achieve the collective objective of improved health outcomes.
The team re-designed the process so that privacy and security risks would still be mitigated; however, they minimised the hand-offs across organisations and consciously leveraged collective strengths and controls. The re-vamped process took less than three months for access to be granted. The new process provided the same level of privacy and security control while driving innovation and benefit for patients.
Implementations of AI will be more complex and involve more organisations each of which will have their own approaches to access and privacy. When designing AI solutions that involve multiple organisations, it will be important to collaborate to keep focus on achieving collective objectives while mitigating local risks.
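The kind of end-to-end process mapping the Ontario task force performed can be sketched in a few lines (the step names, owning organisations, and durations below are invented; only the idea of summing durations and counting cross-organisation hand-offs reflects the analysis described above):

```python
# Sketch of measuring a data-access process: each step has an owning
# organisation and an estimated duration in days. All values invented.

steps = [
    ("submit request",            "innovator",       5),
    ("privacy impact assessment", "data holder",    60),
    ("threat risk assessment",    "security office", 45),
    ("legal review",              "data holder",     90),
    ("ethics approval",           "review board",   120),
    ("final sign-off",            "ministry",        30),
]

total_days = sum(days for _, _, days in steps)
# A hand-off occurs whenever consecutive steps belong to different orgs.
hand_offs = sum(
    1 for (_, org_a, _), (_, org_b, _) in zip(steps, steps[1:]) if org_a != org_b
)
bottleneck = max(steps, key=lambda s: s[2])

print(f"{total_days} days end to end, {hand_offs} hand-offs")
print(f"longest step: {bottleneck[0]} ({bottleneck[2]} days)")
```

Even this toy inventory shows how the measurement works: summing step durations reveals the true end-to-end timeline, counting hand-offs exposes coordination overhead, and the longest step identifies where redesign effort should be concentrated.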
Lack of involvement of the public and providers in developing AI solutions
There is a growing perception that governments lack trustworthiness as responsible managers of data, which hinders the progress of AI projects for health. Rebuilding trust and working collaboratively towards the common goal of delivering high-quality care will necessitate transparency and engagement at every stage of AI solution design, development, implementation, and sustainability. This is also necessary to overcome the fear of AI reported by 23% of the population.
Without focusing on the trust of the public in the use of AI solutions, there is a risk that many in Germany will choose to opt out of the use of their data for legitimate public purposes, such as to improve patient safety, or to prepare for future public health emergencies. Without data that are representative of the population, AI solutions will be biased, possibly ineffective, and at an extreme, could cause harm.
In addition, failing to consider the perspective of health care providers can lead to adverse outcomes. Providers are, for many, the “face” of the health care system and the people whom patients trust to help them in their times of need. Providers may resist new tools when they were not involved in their design, out of concern that a new tool will add administrative burden, question their professional judgement, or erode their autonomy while leaving them accountable for delivering high-quality care.
The European Union Regulation on Artificial Intelligence (the “EU AI Act”) (EU, 2024[21]) sets common guardrails for designing, implementing, and maintaining AI solutions. The tenets of this act will be operationalised by the EU member states and contextualised for AI in health.
Box 10.4. AI helps to prevent patients falling between the cracks
In the healthcare sector, substantial volumes of data are generated daily and stored within local electronic medical record (EMR) systems. This vital information, essential for enhancing health outcomes, exists in diverse, non-uniform formats across EMR systems. Where patients receive care from a single institution, this has limited impact; however, when care spans practitioners and institutions, the impacts can be significant. There is inefficiency in clinicians manually sifting through multiple EMRs, but the true harm lies in the potential lack of access to these data, posing a risk to patients’ well-being. This challenge spans the entire healthcare spectrum, affecting individuals ranging from leading researchers and renowned physicians to global pharmaceutical companies. It is addressed by the electronic patient record (elektronische Patientenakte, ePA), which will soon be provided automatically to every person insured in the German statutory health insurance unless they choose to opt out. Healthcare providers transfer data from their local medical records to the patient’s ePA so that other practitioners and institutions can access them (again, unless the patient opts out). In the ePA, data should be documented in a structured way, in accordance with standardised specifications, and be interoperable. If this is the case, future filtering and search functions can assist physicians in analysing the data, and AI can also help physicians review the abundance of data stored in the ePA from a medical point of view.
AI systems have the capability to unlock value automatically and seamlessly from millions of clinical data points embedded in complex textual documents. AI-powered tools ensure accessibility and the active utilisation of critical insights, preventing patients from slipping through the gaps in conventional data handling. For instance, tools have been employed to extract real-world patient-level data, exploring treatment patterns and outcomes for patients with advanced lung cancer and identifying patients that would benefit from adjustments to their treatment programme to align with clinical best practices.
In addition to improving health outcomes, AI tools can scan data repositories in a fraction of the time it would take humans to identify where relevant medical records exist, simplifying the search for comprehensive health records. Estimates suggest that AI can scan the health records of a practice to identify patients who would benefit from changes in care in less than an hour, whereas a health provider would take more than 200 hours – an improvement in both time and quality.
This application at scale demonstrates the evolution of treatment patterns and clinical covariates impacting real-world patient outcomes.
Source: Cheung, W. et al. (2021[22]), “82P Exploring treatment patterns and outcomes of patients with advanced lung cancer (aLC) using artificial intelligence (AI)-extracted data”, https://doi.org/10.1016/j.annonc.2021.10.100.
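The filtering and search functions envisaged for structured ePA data (Box 10.4) can be illustrated with a small sketch (the entry schema, codes, and values are invented for illustration and do not reflect the actual ePA specification): once every provider documents against one structured schema, surfacing a patient's relevant history becomes a simple query rather than a manual search through incompatible records.

```python
# Illustrative sketch: once record entries follow one structured schema,
# simple filters can surface relevant history. Fields/codes are invented.

record_entries = [
    {"date": "2023-02-11", "type": "lab", "code": "HbA1c", "value": 8.1},
    {"date": "2023-06-30", "type": "medication", "code": "metformin"},
    {"date": "2024-01-15", "type": "lab", "code": "HbA1c", "value": 6.9},
    {"date": "2024-03-02", "type": "imaging", "code": "chest-xray"},
]

def find(entries, entry_type, code):
    """Return matching entries, newest first -- feasible only when all
    providers document against the same structured specification."""
    hits = [e for e in entries if e["type"] == entry_type and e["code"] == code]
    return sorted(hits, key=lambda e: e["date"], reverse=True)

hba1c_history = find(record_entries, "lab", "HbA1c")
print(hba1c_history)
```

When entries are instead free text in provider-specific formats, the same question requires natural-language extraction of the kind described in Box 10.4, with far higher cost and error rates.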
Recommendations
Develop guidance for secondary-use access to health data that supports AI development, protects citizens, and respects privacy rights
It is necessary to update current data access and privacy practices to enable the networked (and exponential) value of data and the implementation of responsible AI, in alignment with the EU Data Governance Act and the EU AI Act. Legacy approaches to affirmative consent are effective in predictable, paper-based, linear processes. However, the potential AI uses of health data are broader and harder to predict, requiring a modern approach to consent. Such an approach would clarify the scenarios under which data may never be used, the scenarios where affirmative consent is required, and the scenarios where data must be shared to protect communities and the public good, including the controls and measures that protect individual privacy in each scenario. This would improve the use of health data created in the early transition to the digital age, so that longitudinal records can be analysed for patterns that identify prevention, promotion, and treatment opportunities.
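The three-tier consent model described above could be operationalised along the following lines (the tiers, purposes, and rules here are purely illustrative assumptions, not a reading of the GDNG, GDPR, or EU AI Act):

```python
# Illustrative sketch of a tiered consent model for secondary data use.
# The purpose categories and their assignments are invented examples.

NEVER_PERMITTED = {"marketing", "insurance risk scoring"}
MANDATED_PUBLIC_GOOD = {"patient safety monitoring", "public health emergency"}

def consent_tier(purpose: str) -> str:
    """Classify a secondary-use request into one of three tiers."""
    if purpose in NEVER_PERMITTED:
        return "prohibited"
    if purpose in MANDATED_PUBLIC_GOOD:
        # Shared under statutory safeguards (e.g. pseudonymisation, audit).
        return "mandated with safeguards"
    # Default: the individual's affirmative consent is required.
    return "affirmative consent required"

print(consent_tier("insurance risk scoring"))
print(consent_tier("public health emergency"))
print(consent_tier("commercial AI model training"))
```

The value of making the tiers explicit, whatever their actual content, is that data holders no longer have to guess what "public good" means for each request, which is the uncertainty that currently drives decisions towards blanket refusal.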
Establish a health-data governance and interoperability strategy and framework with accountability, a roadmap, measurements, financial levers, and oversight
As part of the German strategy for health and care digitalisation (BMG, 2023[5]), Germany will clarify accountability for a digital health agency (required as part of the EHDS) to support the development and adoption of digital tools. Further, the strategy will clarify accountability for the digitalisation of processes and the identification of interoperability standards. Germany could strengthen accountability by reorganising existing structures. Simplifying and clarifying authority in the digital health space should contribute to a more transparent interpretation of regulations for the reuse of data.
The work of gematik and other stakeholders would benefit from developing and implementing a strategy that advances health data governance and interoperability, including guidance on data access that minimises harm and optimises outcomes. The strategy would establish a target for timely and high-quality collection of health data; the ability to access, link, and use health data across organisations; and governance that oversees progress towards achievement of the target. This strategy could be inspired by recent work from Canada on a Shared Pan-Canadian Interoperability Roadmap (Canada Health Infoway, 2023[23]), in particular, as Canada has a federal structure similar to Germany’s. Efforts in interoperability in Germany should align with the EHDS and projects such as XpandH.
Health data are the catalyst for high-quality care and research, and work on interoperability is critical to Germany’s aspirations for “one patient, one record” established through electronic patient records, which will soon be automatically provided to every person insured in the German statutory health insurance – unless they choose to opt out. It is planned that, by 2025, 80% of persons insured in the German statutory health insurance shall have an electronic patient record (BMG, 2023[24]). This will reinforce record-keeping and enhance the use of e-prescriptions, telemedicine, and health applications. These valuable data assets will be available (with appropriate protections) to generate high-quality insights through AI and other secondary uses. Financial incentives embedded in the strategy should encourage adoption by i) risk-sharing with early adopters; and ii) penalising late adopters when a lack of compliance causes demonstrable harm (poor outcomes, waste, etc.).
Liability around the use of AI in health will need to be determined. A standard practice has yet to be defined, but leading practice assigns liability to the health provider regardless of whether AI is used in the provision of care. An anticipated outcome, once AI systems have been determined to be trustworthy, would be requiring the use of AI in the medical standard for care, while treating physicians would have the final determination of diagnosis and treatment.
Involve the public and health providers in developing AI solutions, control design, and oversight mechanisms for trust
Successful and sustainable deployment of AI depends on its acceptance by the public and by health service providers. To accomplish this, stakeholder groups – such as patients or nurses – should be approached about specific problems and provided with relevant information.
The OECD Recommendation on Health Data Governance (OECD, 2016[4]) recommends engagement and participation, clear provision of information, and transparency in the governance of health data. Furthermore, both the OECD (OECD/LEGAL/0449) and the G20 highlight trust in their principles for AI (OECD.AI, 2019[25]).
Support (re-)education for health providers and technology professionals for AI development and operations
Building knowledge among the public and healthcare providers about the use of health data and the methodology of AI is an antidote to mistrust and negative backlash when implementing new technologies. Finland, for example, strategically increased knowledge about AI in the population so that AI would be accepted by the public as a tool for the common good (University of Helsinki, 2023[26]). Other initiatives to increase public AI and digital knowledge include the Australian training module for electronic health records (Australian Digital Health Agency, 2023[27]) and the Norwegian AI course (Norwegian Cognitive Center, 2020[28]).
With that knowledge, stakeholder groups are more able to shape the success of AI in health. This ties to the human factor of acceptance, and thus trust, as the foundation for stakeholder support for the uptake of AI. Maassen et al. (2021[29]) surveyed practitioners and found the acceptance of AI to be associated with people’s self-rated technical affinity. A reasonable conclusion is that fostering acceptance and trust requires a knowledge base among stakeholders, which translates into successful implementation as AI in health progresses.
For practitioners, AI literacy must go beyond acceptance. As AI integrates into health systems, practitioners need the competency to assess an AI model’s potential bias and suboptimal predictions. This can be difficult, as AI models often provide outputs without explaining how they were derived – a limitation referred to as the “black box” problem owing to the lack of explainability. Clinical support AI systems in healthcare sometimes produce false positives and negatives, such as in predicting sepsis (Goodman, Rodman and Morgan, 2023[30]). Accordingly, engaging with providers to share and learn how to respond to and evaluate AI systems is elemental for the success of AI. Consolidation and co-operation in the development and design of AI systems are important for user-friendliness and, ultimately, the value of AI in health.
Other considerations
Other areas where AI-related investment in healthcare could improve the overall adoption of AI in Germany are outlined below.
First, it is necessary to ensure sufficient computing power to develop, implement, and sustain the use of AI for health. While there are concerns about ensuring that AI systems have sufficient and appropriate data, once the data become available, the computing systems will also need to be ready.
Second, Germany should collaborate with peers to investigate the use of privacy-enhancing technologies (OECD, 2023[31]). These capabilities could reduce the risk of privacy breaches while optimising the use of data through AI. Given parallel work in many countries and across industries, there is value in engaging in this collaboratively.
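To make the idea of privacy-enhancing technologies concrete, the sketch below shows one of the simplest such techniques: releasing an aggregate statistic under differential privacy by adding calibrated Laplace noise. The patient records, field names, and epsilon value are purely illustrative assumptions, not a description of any German system; production deployments would rely on vetted libraries rather than hand-rolled noise sampling.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon provides the epsilon-DP guarantee.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1 / epsilon, rng)

# Hypothetical patient records; the fields are illustrative only.
patients = [{"age": 40 + i % 50, "diagnosis": "diabetes" if i % 3 == 0 else "other"}
            for i in range(300)]

rng = random.Random(42)
noisy = dp_count(patients, lambda p: p["diagnosis"] == "diabetes",
                 epsilon=1.0, rng=rng)
print(f"noisy diabetes count: {noisy:.1f}")  # close to the true count of 100
```

The researcher sees only the noisy aggregate, never individual records; smaller epsilon values give stronger privacy at the cost of noisier answers.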
Finally, Germany should continue to investigate the use of federated learning and technologies such as data mesh to support cross-regional data collaboration (Rieke et al., 2020[32]), which is already being funded through projects such as PrivateAIM (PrivateAIM, 2024[33]) and FAIrPaCT (UMG, 2024[34]). Federated learning reduces privacy risks by minimising data copies and optimising the use of data for analytics, such as in public health, health system oversight, and research. Prioritisation of federated learning models requires strong policy, data, and technical foundations. The advancement of those foundations can happen in parallel with, or be directed towards, the implementation of federated learning solutions.
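The core mechanic of federated learning described above can be illustrated with a minimal federated averaging (FedAvg) sketch: each simulated hospital fits a model on its own data, and only the model parameters, never the patient records, are sent to a central aggregator. The three "hospitals", the one-parameter model, and the synthetic data are all hypothetical simplifications for illustration.

```python
import random

def local_step(w, data, lr=0.1):
    """One local training epoch for a one-parameter model y = w * x.
    The client's data never leave the client; only w is returned."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: size-weighted mean of the client models."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Three hypothetical hospitals hold disjoint samples from y = 2 * x.
random.seed(0)
clients = [[(x, 2 * x) for x in (random.random() for _ in range(20))]
           for _ in range(3)]

w_global = 0.0
for _ in range(30):  # communication rounds
    local_models = [local_step(w_global, data) for data in clients]
    w_global = federated_average(local_models, [len(d) for d in clients])

print(round(w_global, 2))  # converges to the true slope 2.0
```

Only the scalar parameter crosses institutional boundaries each round, which is what reduces the privacy risk relative to pooling raw records centrally; real systems such as those funded under PrivateAIM add secure aggregation and governance on top of this basic loop.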
References
[27] Australian Digital Health Agency (2023), Online Learning Portal, https://training.digitalhealth.gov.au/ (accessed on 27 October 2023).
[20] Bak, M. et al. (2022), “You can’t have AI both ways: Balancing health data privacy and access fairly”, Frontiers in Genetics, Vol. 13, https://doi.org/10.3389/fgene.2022.929453.
[11] bidt (2023), Autorinnen und Autoren: Das bidt-Digitalbarometer. international, Bayerisches Forschungsinstitut für Digitale Transformation, https://doi.org/10.35067/xypq-kn68.
[24] BMG (2023), Act to Accelerate the Digitalization of the Healthcare System (Digital Act – DigiG), Bundesministerium für Gesundheit, https://www.bundesgesundheitsministerium.de/ministerium/gesetze-und-verordnungen/guv-20-lp/digig.html (accessed on 7 November 2023).
[5] BMG (2023), Digital Together - Germany’s Digitalisation Strategy for Health and Care, Bundesministerium für Gesundheit, https://www.bundesgesundheitsministerium.de/fileadmin/Dateien/3_Downloads/D/Digitalisierungsstrategie/Germany_s_Digitalisation_Strategy_for_Health_and_Care.pdf (accessed on 11 December 2023).
[23] Canada Health Infoway (2023), Shared Pan-Canadian Interoperability Roadmap, https://www.infoway-inforoute.ca/en/component/edocman/resources/interoperability/6444-connecting-you-to-modern-health-care-shared-pan-canadian-interoperability-roadmap (accessed on 11 December 2023).
[19] Cao, J. et al. (2022), “Generalizability of an acute kidney injury prediction model across health systems”, Nature Machine Intelligence, Vol. 4/12, pp. 1121-1129, https://doi.org/10.1038/s42256-022-00563-8.
[22] Cheung, W. et al. (2021), “82P Exploring treatment patterns and outcomes of patients with advanced lung cancer (aLC) using artificial intelligence (AI)-extracted data”, Annals of Oncology, Vol. 32, p. S1407, https://doi.org/10.1016/j.annonc.2021.10.100.
[21] EU (2024), Regulation (EU) 2024/ ...... of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), https://data.consilium.europa.eu/doc/document/PE-24-2024-INIT/en/pdf.
[6] Gerlach, F. et al. (2021), Executive Summary of the Council Report 2021 “Digitalisation for Health-Aims and Framework Conditions for a dynamically learning Health System”, https://www.svr-gesundheit.de/fileadmin/Gutachten/Gutachten_2021/Executive_Summary_Englisch.pdf.
[7] German Federal Government (2018), Strategie Künstliche Intelligenz der Bundesregierung, http://www.ki-strategie-deutschland.de (accessed on 11 December 2023).
[30] Goodman, K., A. Rodman and D. Morgan (2023), “Preparing physicians for the clinical algorithm era”, New England Journal of Medicine, Vol. 389/6, pp. 483-487, https://doi.org/10.1056/nejmp2304839.
[14] Heidel, A. and C. Hagist (2020), “Potential benefits and risks resulting from the introduction of health apps and wearables into the German statutory health care system: Scoping review”, JMIR mHealth and uHealth, Vol. 8/9, p. e16444, https://doi.org/10.2196/16444.
[10] Lantzsch, H. et al. (2022), “Digital health applications and the fast-track pathway to public health coverage in Germany: Challenges and opportunities based on first results”, BMC Health Services Research, Vol. 22/1, https://doi.org/10.1186/s12913-022-08500-6.
[29] Maassen, O. et al. (2021), “Future medical artificial intelligence application requirements and expectations of physicians in German university hospitals: Web-based survey”, Journal of Medical Internet Research, Vol. 23/3, p. e26646, https://doi.org/10.2196/26646.
[16] McLennan, S. et al. (2022), “Practices and attitudes of Bavarian stakeholders regarding the secondary use of health data for research purposes during the COVID-19 pandemic: Qualitative interview study”, Journal of Medical Internet Research, Vol. 24/6, p. e38754, https://doi.org/10.2196/38754.
[9] Nensa, F., A. Demircioglu and C. Rischpler (2019), “Artificial intelligence in nuclear medicine”, Journal of Nuclear Medicine, Vol. 60/2, pp. 29S-37S, https://doi.org/10.2967/jnumed.118.220590.
[28] Norwegian Cognitive Center (2020), “Free AI course for everyone”, https://norwegiancognitivecenter.com/blog/free-ai-course-for-everyone (accessed on 7 November 2023).
[1] OECD (2024), AI in Health: Huge Potential, Huge Risks, OECD Publishing, https://www.oecd.org/health/AI-in-health-huge-potential-huge-risks.pdf.
[31] OECD (2023), “Emerging privacy-enhancing technologies: Current regulatory and policy approaches”, OECD Digital Economy Papers, No. 351, OECD Publishing, Paris, https://doi.org/10.1787/bf121be4-en.
[2] OECD (2022), Health Spending (data), OECD, Paris, https://data.oecd.org/healthres/health-spending.htm.
[4] OECD (2016), Recommendation of the Council on Health Data Governance, OECD, Paris, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0433.
[25] OECD.AI (2019), G20 AI Principles, OECD, Paris, https://oecd.ai/en/wonk/documents/g20-ai-principles.
[3] OECD/EU (2022), Health at a Glance: Europe 2022: State of Health in the EU Cycle, OECD Publishing, Paris, https://doi.org/10.1787/507433b0-en.
[18] O’Leary, L. (2022), “How IBM’s Watson went from the future of health care to sold off for parts”, Slate, https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html (accessed on 7 November 2023).
[8] Oliveira Hashiguchi, T., L. Slawomirski and J. Oderkirk (2021), “Laying the foundations for artificial intelligence in health”, OECD Health Working Papers, No. 128, OECD Publishing, Paris, https://doi.org/10.1787/3f62817d-en.
[17] Parikh, R., S. Teeple and A. Navathe (2019), “Addressing bias in artificial intelligence in health care”, Journal of the American Medical Association, Vol. 322/24, p. 2377, https://doi.org/10.1001/jama.2019.18058.
[33] PrivateAIM (2024), Privacy-Preserving Analytics in Medicine, https://privateaim.de/eng/index.html (accessed on 11 December 2023).
[32] Rieke, N. et al. (2020), “The future of digital health with federated learning”, npj Digital Medicine, Vol. 3/1, https://doi.org/10.1038/s41746-020-00323-1.
[15] Schmitt, T. (2023), “Implementing electronic health records in Germany: Lessons (yet to be) learned”, International Journal of Integrated Care, https://doi.org/10.5334/ijic.6578.
[13] Schmitt, T., A. Haarmann and M. Shaikh (2022), “Strengthening health system governance in Germany: looking back, planning ahead”, Health Economics, Policy and Law, Vol. 18/1, pp. 14-31, https://doi.org/10.1017/s1744133122000123.
[34] UMG (2024), FAIrPaCT - Federated Artificial Intelligence Framework to Optimise the Treatment of Pancreatic Cancer, University Medical Center Göttingen, https://bioinformatics.umg.eu/research/projects/fairpact/ (accessed on 11 December 2023).
[26] University of Helsinki (2023), Elements of AI, https://www.elementsofai.com/ (accessed on 31 October 2023).
[12] Wintergerst, R. (2023), Digital Health.