Governments increasingly recognise the need to strengthen information integrity, particularly in three key areas: enhancing transparency and accountability, fostering resilience to disinformation, and improving governance to protect information ecosystems. Some are moving beyond self-regulation to improve platform transparency and content moderation. Governments are also focusing on supporting independent media and building capacity within society and government to combat mis- and disinformation, including via improving education and media literacy. However, action has been slow relative to the rapid rise of these issues. Governments need to work together and with all of society to find comprehensive and constructive solutions.
The OECD Reinforcing Democracy Initiative
1. Public Governance for Combatting Mis- and Disinformation
1.1. Introduction
A healthy democracy requires populations to have access to reliable information. It is what allows the public to better understand the decisions made by government, and thus allows them to form informed political opinions and hold public figures to account. However, the accelerated spread of seriously harmful false and misleading information in recent years, often through deliberate disinformation campaigns by domestic or foreign actors, poses a fundamental threat to democratic institutions: it jeopardises access to reliable information, creates confusion, polarises different groups within society, and undermines the very essence of democracy. This has driven the issue of combatting mis- and disinformation to the top of the government policy agenda in many OECD countries.
In the results from the 2023 Trust Survey (OECD, 2024[1]), an average of 11% of respondents identified mis- and disinformation as one of the three main issues facing their country; in Czechia, Korea and the Slovak Republic this figure exceeds 20%. Trust in traditional media is also suffering from challenges in the information environment: on average, just 39% of people have high or moderately high trust in news media, a level similar to national government, while 44% report low or no trust in the media. The results also reveal stark differences by age group: a higher share of people aged 50 and above rely on reporting by journalists and news organisations, whereas for younger people social media has become the primary news source, with 68% obtaining news this way. These results underscore the scale of the challenges governments currently face in this area and the relevance of the areas for action included in the Action Plan and discussed in this chapter.
While mis- and disinformation have always existed, technological advancements in communication via digital platforms and search engines have fundamentally altered the way in which information is spread, making it far easier for false information to spread quickly. Furthermore, developments in generative artificial intelligence tools have allowed convincing false information, including photos and videos, to be made quickly and at low cost. Against this backdrop, promoting information integrity and tackling disinformation is central to the OECD Reinforcing Democracy Initiative.
The Action Plan on Public Governance for Combatting Mis- and Disinformation (henceforth “the Action Plan”) identified three key areas and the steps needed to tackle them:
Key area 1 – Implementing government policies to build more resilient societies against mis- and disinformation;
Key area 2 – Supporting the design of policy and regulatory measures to increase transparency and data sharing to prevent the spread of mis- and disinformation; and
Key area 3 – Identifying regulatory and policy responses that reduce economic and structural drivers of mis- and disinformation.
Since Ministers welcomed the Action Plan in Luxembourg, the OECD Information Integrity Hub (previously the OECD DIS/MIS Resource Hub) and the OECD Expert Group on Governance Responses to Mis- and Disinformation have delivered the report “Facts not Fakes: Tackling Disinformation, Strengthening Information Integrity” (OECD, 2024[2]).
As reflected in the report, the OECD has outlined three complementary dimensions of governments' policy responses (OECD, 2024[2]):
Implementing policies to enhance the transparency, accountability, and plurality of information sources: This includes promoting policies that support a diverse, plural, and independent media sector, with a needed emphasis on local journalism. It also comprises policies that can be used to increase the accountability and transparency of online platforms, so that their market power and commercial interests do not contribute disproportionately to the spread of disinformation.
Fostering societal resilience to disinformation: This involves empowering individuals to develop critical thinking skills, recognise and combat disinformation, as well as mobilising all sectors of society to develop comprehensive and evidence-based policies in support of information integrity.
Upgrading governance measures and public institutions to uphold the integrity of the information space: This involves the development and implementation of, as appropriate, regulatory capacities, co-ordination mechanisms, strategic frameworks, and capacity-building programmes that support a coherent vision and approach to strengthening information integrity within the public administration, while ensuring clear mandates and respect for fundamental freedoms. It also involves promoting peer-learning and international co-operation between democracies facing similar disinformation threats.
This chapter identifies progress made by OECD countries in their efforts to promote information integrity and combat mis- and disinformation and potential gaps against the Action Plan adopted in Luxembourg, building on the above report and the work of other OECD Committees.
1.2. Key area 1 - Implementing government policies to build more resilient societies against mis- and disinformation
1.2.1. Expand on and create new partnerships with non-governmental and international organisations to build resilience to the spread of false and misleading information
The OECD 2022 RDI report recognised the importance of governments engaging with media and civil society organisations to address the threats posed by mis- and disinformation, highlighting this as an area where numerous actors had already been active, particularly during the COVID-19 crisis (OECD, 2022[3]). The speed at which false and misleading content proliferates has made countries aware of the need to develop a comprehensive view on how to improve information integrity (OECD, 2024[2]). Recent OECD data shows that new partnerships, including co-operation with international organisations, the private sector, partner countries, and non-governmental partners, feature frequently among the areas where governments seek to improve over the next few years (Figure 1.1).
Several countries have already taken measures to further improve collaboration on information integrity. For example, in 2022, Spain created the “Forum against Disinformation Campaigns in the Field of National Security”, a platform designed to foster public-private collaboration on the risks posed by disinformation to national security. Latvia developed the “2023-2027 National Concept on Strategic Communication and Security of the Information Space”, which includes actions on partnerships with organised civil society, the private sector, and academia. Collaboration efforts have also increased within government: over half of respondent countries (54%) have at least one cross-government mechanism to identify and respond to disinformation.
Collaboration between governmental and non-governmental actors can also help ensure that measures to improve information integrity continue to uphold free speech principles. One example comes from Lithuania, where the Government Chancellery worked with the Lithuanian civil society organisation Debunk.eu to collect examples of accounts that had been blocked on Facebook for expressing pro-Ukraine opinions despite not violating Meta’s content policies. The Chancellery then brought this information to Meta, providing valuable context on Lithuanian cultural and linguistic traditions to better inform the company’s content moderation policy.
Many countries have recognised the borderless nature of information flows, and thus the need for greater collaboration at the international level. Much of this collaboration has taken place via international organisations. The OECD’s Information Integrity Hub has served as a platform for policy analysis and dialogue, as expanded on in the dedicated ‘Working collectively through the OECD’ section. UNESCO has undertaken a comprehensive set of consultations across 134 countries on how to tackle mis- and disinformation, culminating in a set of guidelines for the governance of digital platforms, launched in 2023 (UNESCO, 2023[4]). The European Union has provided a platform for collaboration on countering disinformation via its European Digital Media Observatory since 2021, and in late 2022 it notably expanded the Observatory’s reach, adding six new hubs in order to cover 100% of the EU population (European Commission, 2022[5]).
While the value of such collaboration is clear, there is a risk that overly close relations between government and civil society or private sector organisations lead to these organisations unduly influencing government decisions. Some research has found a correlation between fact-checkers’ political affiliations and their findings (Louis-Sidois, 2022[6]), and collaboration with such entities would carry a clear risk of importing bias. Equally, such relations can lead to government unduly influencing these organisations and biasing public discourse. Safeguards are therefore needed to ensure that such co-operation does not lead to political bias.
1.2.2. Build capacity for more proactive, responsive and effective public communication in counteracting mis- and disinformation
The OECD 2022 RDI report highlights the need to strengthen societal resilience against mis- and disinformation by ensuring the public is informed and aware of which information is reliable and which is false or misleading. Effective public communication of relevant and timely information is crucial to this. Results from the 2023 Trust Survey show that people remain quite divided about the perceived impact of government communication: 39% say the government would likely explain the impact of reforms, while 40% remain sceptical (OECD, 2024[1]). In addition, perceptions of the government’s communication regarding the impact of reforms are closely related to trust in the national government (see Figure 1.2).
The Action Plan identified the need for governments to build capacity for proactive, responsive and effective public communication. This includes the institutionalisation of public communication responses and the use of behavioural insights.
OECD data shows that public communications are a priority for OECD countries: 90% of responding countries indicated the importance of building the capacity of public officials to track and respond to disinformation threats. This is reflected in the training and guidance on handling disinformation that several governments have established for such officials. For example, in January 2022, the Dutch Ministry of the Interior and Kingdom Relations developed “Guidance on dealing with information”, offering advice on responding to disinformation and on communicating with the media on this topic. Similarly, the United Kingdom’s Government Communication Service released a “Wall of Beliefs” toolkit to assist communicators in understanding the roles of identity, relationships and worldview in belief formation, in order to develop more sophisticated communication strategies to counter false information (Government Communication Service, 2022[7]). The Canada School of Public Service offers training programmes to equip officials with the skills to identify and react to disinformation, including how to use communication via social media platforms effectively to debunk and pre-bunk any disinformation identified. It also compiles information on how disinformation spreads, and on detection methods, on a dedicated website (Government of Canada, 2024[8]). Despite these initiatives, 45% of respondents to the OECD survey reported the absence of regular training programmes for countering disinformation (OECD, 2024[2]).
In 2023, the OECD released a set of Good Practice Principles for Public Communication Responses to Mis- and Disinformation. These principles suggest that public communication interventions should be transparent, independent from political influence, and coherent across government, and that, given the speed at which disinformation spreads, they should aim to pre-empt it where possible (OECD, 2023[9]). There is considerable scope for countries to make better use of these principles in the future, in particular by reducing the politicisation of public communications.
Moreover, several countries have conducted research on behavioural patterns related to the consumption of disinformation, to better understand the factors that lead individuals to engage with and disseminate false information and thus devise more targeted and effective policy solutions. A randomised controlled trial (RCT), conducted by Canada in collaboration with the OECD and the French government, explored ways to reduce the spread of misinformation online (OECD, 2022[3]). The study tested two policy interventions grounded in behavioural science. The first involved prompting users to rate the accuracy of a single random headline prior to engaging with Facebook-style headlines online. The second provided a list of media literacy tips. While participants were generally good at accurately identifying true and false headlines, they often shared news headlines they had identified as false or questionable. The results show that digital media literacy prompts reduced participants’ willingness to share fake news by 21% compared with the control group, underscoring the efficacy of simple, scalable online interventions in enhancing the quality of information circulating online. Impact Canada continues to assess whether susceptibility to misinformation varies depending on the content and context (Impact Canada, 2022[10]). In Greece, the National Transparency Authority has also designed a behavioural insights intervention aimed at raising citizens’ awareness during natural disasters and encouraging reliance on information from reliable sources, including the official channels of competent authorities. This intervention also includes a public communication element, with the government providing guidelines for identifying and avoiding unreliable sources of information during such emergencies.
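To make the evaluation logic behind such an RCT concrete, the following is a minimal sketch of how a treatment effect of this kind might be estimated. It uses simulated data rather than the study’s actual dataset: the arm sizes, the baseline share rate, and the implementation itself are illustrative assumptions, with only the 21% relative reduction taken from the reported results.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

n = 5_000                      # participants per arm (assumed)
p_control = 0.30               # assumed share rate for false headlines, control arm
p_treated = p_control * (1 - 0.21)  # a 21% relative reduction, as reported

# 1 = participant chose to share a false headline
control = rng.binomial(1, p_control, n)
treated = rng.binomial(1, p_treated, n)

rate_c, rate_t = control.mean(), treated.mean()
relative_reduction = (rate_c - rate_t) / rate_c

# Two-proportion z-test for the difference in share rates between arms
p_pool = (control.sum() + treated.sum()) / (2 * n)
se = np.sqrt(p_pool * (1 - p_pool) * 2 / n)
z = (rate_c - rate_t) / se
p_value = 2 * norm.sf(abs(z))

print(f"share rate: control {rate_c:.3f}, treated {rate_t:.3f}")
print(f"relative reduction: {relative_reduction:.1%}, p = {p_value:.2g}")
```

Run repeatedly with different seeds, the estimated relative reduction fluctuates around the assumed 21%, which is the essential logic of comparing share rates across randomly assigned arms.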
1.2.3. Pursue a whole-of-society approach to strengthening media and information ecosystems
A whole-of-society approach, calling for the collective engagement of governments, private sector entities, educational institutions, and civil society, stands as a key priority for enhancing the resilience and strength of information ecosystems. While many countries had already invested in this area, its importance has since been recognised even further, with several countries publishing guides on recognising and tackling mis- and disinformation. For example, in 2022 the Latvian State Chancellery released a “Handbook against disinformation: recognise and oppose”, aimed at both government officials and Latvian residents. The Ministry of the Interior in the Netherlands financed the creation of a website teaching people how to identify mis- and disinformation (OECD, 2024[2]).
The increasing recognition of media literacy’s value in combatting mis- and disinformation is reflected in the efforts of civil society organisations, which make a critical contribution in this field. Several OECD countries have identified the need to work alongside such organisations, through a whole-of-society approach, to develop media literacy activities. However, it is important to recognise the limitations of media literacy, for example in the face of highly convincing deep fakes. The OECD Truth Quest Survey, a tool that measures people’s ability to identify false and misleading information, has been applied in 21 countries (OECD, 2024[11]).
The emphasis on inclusion is evident in the UK’s Media Literacy Taskforce Fund, established in 2022 to boost the media literacy skills of people particularly vulnerable to false information. The Taskforce collaborates with 17 organisations to offer a variety of programmes, including digital media skills for the elderly, literacy training for teachers in disadvantaged schools, and workshops for vulnerable and marginalised women to tackle online abuse (OECD, 2024[2]). Similarly, the German National Security Strategy, released in June 2023, highlights the importance of dealing with disinformation, pledging to promote application-oriented research and development focused on disinformation, including strengthening digital, data and media literacy (German Ministry of Defence, 2023[12]).
1.3. Key area 2 - Support the design of policy and regulatory measures to increase transparency and data sharing to prevent the spread of mis- and disinformation
Regulatory responses to promote transparency and data sharing have developed rapidly since 2022. In part, this reflects recognition of the limitations of existing self- and co-regulatory regimes, including the risk that they will not sufficiently mitigate the threats posed by the actors that do the most to undermine information integrity in democracies, as well as by those who simply do not wish to engage. Such risks point to the importance of government involvement in designing, enforcing, and updating regulatory responses (OECD, 2024[2]). Several laws focusing on a wide range of transparency issues have recently been implemented or discussed. The European Union’s Digital Services Act (DSA) and the UK Online Safety Act, for example, reflect growing demands for greater platform transparency (Lai, Shiffman and Wanless, 2023[13]).
The rapid growth of generative artificial intelligence (AI) – that is, machine-based systems that infer from the input they receive how to generate outputs such as predictions, content, recommendations or decisions (including in the form of images and other media) – has also become a growing cause of policymaker concern in relation to the role of platforms. The development of tools such as OpenAI’s ChatGPT, a chatbot that surpassed 100 million users just two months after launching (Hu, 2023[14]), has highlighted the additional mis- and disinformation risks posed by the possibility of synthetic articles, images and other media. In this light, the EU Artificial Intelligence Act adopted in March 2024 requires, inter alia, that AI models comply with a variety of transparency requirements, including publishing detailed summaries of the content used for training, performing model evaluations and reporting any incidents, and clearly labelling any artificial audio or video content as such. The United States has also started to take action in this area with the 2023 President’s Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, which seeks in part to protect citizens from AI-enabled fraud by “establishing standards and best practices for detecting AI-generated content and authenticating official content” (U.S. White House, 2023[15]; OECD, 2024[2]).
1.3.1. Promote data transparency of online platforms to build greater understanding of mis- and disinformation narratives and how such content spreads
The OECD RDI 2022 report and the Action Plan underline the importance of promoting data transparency of online platforms and highlight several approaches to doing so, including promoting partnerships in which social media platforms provide researchers with some level of data access. This focus aligns with recent trends in this space, particularly in Europe, where the DSA provides significant impetus for OECD European countries to make progress in this area.
However, given their commercial interests, social media platforms and search engines remain relatively opaque, meaning it can often be difficult to understand their algorithms, and in particular how they make use of advertising. Furthermore, many platforms have become increasingly reticent about sharing information. TikTok’s strict approach to allowing researchers access to its data (Bloomberg, 2023[16]) and X’s (formerly Twitter) decision to reduce researcher access (Calma, 2023[17]) have legitimised similar moves by other major platforms – including Meta, which decided in March 2024 to shut down CrowdTangle, a public insights tool from Facebook used by journalists, researchers and fact-checkers to track how information spreads across the platform.
Recognising these issues, the DSA requires Very Large Online Platforms and Very Large Online Search Engines (defined as those with more than 45 million users in the European Union) to ensure public access to their repositories of advertisements, including the content of these advertisements, related data on the advertiser, and details on how the advertisement was targeted (EUR-Lex, 2022[18]). In some cases, national governments have taken matters into their own hands – for example, France’s Ambassador for Digital Affairs has developed an open source tool to track changes to online platforms’ terms of service, as well as a tool to identify suspicious accounts using an algorithm that calculates the probability that a given account is a bot (OECD, 2024[2]). The DSA has also gone some way towards addressing the issue of data availability for researchers. In particular, Article 40 states that Very Large Online Platforms or Search Engines must provide data to approved researchers when requested, provided that the research is conducted solely with the intention of identifying “systemic risks in the Union” (EUR-Lex, 2022[18]).
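The methodology behind the French bot-detection tool is not detailed here. As a general illustration of how such bot-probability scores are commonly computed, the following is a minimal sketch that trains a logistic regression on labelled accounts; the features, feature distributions and data are hypothetical illustrations, not a description of the actual tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-account features: posts per day, followers/following
# ratio, account age in days, and share of posts that are reposts.
def make_accounts(n, bot):
    posts = rng.gamma(2.0, 20.0 if bot else 3.0, n)    # bots post far more
    ratio = rng.gamma(2.0, 0.05 if bot else 1.0, n)    # bots follow many, few follow back
    age = rng.gamma(2.0, 60.0 if bot else 400.0, n)    # bot accounts tend to be young
    repost = rng.beta(8 if bot else 2, 2 if bot else 5, n)  # bots mostly amplify
    return np.column_stack([posts, ratio, age, repost])

# Training data: accounts previously labelled as human or bot (simulated here)
X = np.vstack([make_accounts(2000, bot=False), make_accounts(2000, bot=True)])
y = np.array([0] * 2000 + [1] * 2000)                  # 1 = labelled bot

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Scoring a new account yields P(bot | features), the kind of probability
# the tool described above is reported to compute.
new_account = np.array([[45.0, 0.08, 30.0, 0.9]])      # hypothetical feature values
print(f"P(bot) = {model.predict_proba(new_account)[0, 1]:.2f}")
```

A probabilistic score, rather than a hard bot/human label, lets analysts set their own threshold depending on whether they prioritise catching bots or avoiding false accusations against genuine accounts.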
1.3.2. Improve transparency of the processes and mechanisms used by online platforms to moderate content and shape information flows
The OECD 2022 RDI report noted that, up to that date, the approach to content moderation in the social media sector had been largely self-regulatory, and that content-specific regulations beyond this presented risks to freedom of speech. It suggested requiring online platforms to be more transparent, including clarifying their content moderation policies and the functioning of their algorithms as well as explaining how and why content is removed or de-prioritised, and requiring that online platforms be subject to regular audits.
The limitations of relying solely on self-regulation of social media sites and search engines continue to be recognised, not least the risk, noted above, that self-regulatory regimes will fail to constrain the actors that do the most to undermine information integrity (OECD, 2024[2]). However, there is equally an awareness that excessive direct regulation could risk inhibiting freedom of expression. Countries have identified the possibility of a co-regulatory approach, in which platforms self-govern while government plays an oversight and enforcement role. An example of this is the European Code of Practice on Disinformation, which was updated in 2022 to include issues such as the demonetisation of spreaders of disinformation. However, the limitations of even co-regulatory approaches were made clear when X announced in May 2023 that it was withdrawing from the 2018 European Union Code of Practice on Disinformation, after its first transparency report for the Code fell short of the standards set by other platforms (OECD, 2024[2]). Similarly, in November 2023, X’s signatory status to the Australian Code of Practice on Disinformation and Misinformation (a voluntary code) was withdrawn after an independent Complaints Sub-Committee found that it had failed to provide publicly accessible channels for reporting misinformation and disinformation during Australia’s Voice to Parliament referendum. This was the only consequence for X, demonstrating a clear gap in digital platform accountability measures. It indicates that even when governments have some oversight capacity, non-binding approaches allow platforms to abandon their commitments as soon as they no longer feel they benefit from them.
As noted, this area has seen active developments since 2022, particularly at the European level. The Digital Services Act, whose obligations for the largest platforms began to apply in August 2023, requires social media platforms and search engines to provide researchers and regulators with greater insight into how their algorithms moderate, prioritise and recommend content. It further requires the publication of transparency reports – a condition similar to that in the European Code of Practice, although here it is obligatory.
The UK Online Safety Act was enacted in October 2023, with requirements that algorithms be designed to help protect individuals online – although, unlike the DSA, it does not require the parameters of these algorithms to be disclosed (Hagedorn et al., 2023[19]). The Australian Government has also taken significant steps in this direction with the release of the exposure draft Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill in June 2023. The Bill would provide the independent regulator, the Australian Communications and Media Authority (ACMA), with new powers to increase transparency and to hold digital platforms to account in addressing seriously harmful misinformation and disinformation.
1.4. Key area 3 - Identify regulatory and policy responses that reduce economic and structural drivers of mis- and disinformation
Since the adoption of the Action Plan, several regulations have been implemented to address competitive harms and improve the health of markets for actors in the information space. For online platforms, these strategies seek to encourage new entrants and innovation and to maintain diverse markets as a means of spurring competition between platforms, potentially creating market-based incentives for healthier information spaces, though this outcome is far from certain (OECD, 2024[2]). The EU Digital Markets Act (DMA) is the most prominent example. Furthermore, the 2024 European Media Freedom Act is a notable example of an effort to strengthen the market for traditional media, reduce the risks posed by media market concentration, and maintain media pluralism.
1.4.1. Promote more responsible behaviour of online platforms
In some sense, digitalisation has reduced barriers to entry for media providers, thus increasing the potential for plurality. However, dependence on social media platforms and search engines for advertising revenue has also increased the risk of media capture – that is, situations in which individuals, groups or organisations exert significant control over media organisations in a way that risks influencing content and coverage (OECD, 2024[2]). In this light, the OECD 2022 RDI report identified the responsible online behaviour of platforms as a relevant issue, with the Action Plan inviting countries to take steps to promote a fairer business environment in the digital media sector. The European Union has been a significant player through its 2022 Digital Markets Act, which regulates the market power of digital platforms in a variety of ways (Box 1.1).
Box 1.1. The European Digital Markets Act
The Digital Markets Act (DMA) is designed to regulate so-called ‘gatekeeper’ power, with gatekeepers defined as large digital platforms providing core platform services, such as online search engines, app stores, and messenger services. The European Commission initially proposed the Act in December 2020; it entered into force in November 2022 and became applicable on 2 May 2023. The Act establishes a variety of “do’s” and “don’ts” for gatekeepers:
Examples of “do’s” include:
allowing business users to access the data that they generate while using the gatekeeper’s platform
providing companies advertising on their platform with the tools and information necessary for advertisers and publishers to carry out their own independent verification of their advertisements
allowing their business users to promote their offer and conclude contracts with their customers outside the gatekeeper’s platform.
Examples of “don’ts” include:
treating services and products offered by the gatekeeper itself more favourably in ranking than similar services or products offered by third parties on the gatekeeper's platform
preventing users from un-installing any pre-installed software or app
tracking end users outside of the gatekeepers' core platform service for the purpose of targeted advertising without consent.
Fines for non-compliance can be up to 10% of the company’s total worldwide annual turnover, or up to 20% in the event of repeated infringements. Alternatively, periodic penalty payments of up to 5% of average daily turnover can be imposed (see the illustrative computation after this box).
Source: European Commission (n.d.[20]), “About the Digital Markets Act”.
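To make the penalty ceilings in Box 1.1 concrete, the following is a minimal sketch computing the caps for a hypothetical gatekeeper. The turnover figure is an assumption for illustration only, and average daily turnover is approximated as annual turnover divided by 365.

```python
# Illustrative computation of the DMA penalty ceilings in Box 1.1,
# for a hypothetical gatekeeper (all figures assumed, in EUR).
worldwide_annual_turnover = 80e9

fine_cap_first = 0.10 * worldwide_annual_turnover             # first infringement
fine_cap_repeat = 0.20 * worldwide_annual_turnover            # repeated infringements
daily_penalty_cap = 0.05 * (worldwide_annual_turnover / 365)  # periodic penalty, per day

print(f"fine cap (first infringement):  EUR {fine_cap_first / 1e9:.1f} bn")
print(f"fine cap (repeat infringement): EUR {fine_cap_repeat / 1e9:.1f} bn")
print(f"periodic penalty cap per day:   EUR {daily_penalty_cap / 1e6:.1f} m")
```

For a gatekeeper with EUR 80 billion in annual turnover, the ceilings would thus be EUR 8 billion for a first infringement, EUR 16 billion for repeated infringements, and roughly EUR 11 million per day in periodic penalties.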
In addition, in March 2024 the European Parliament passed the European Media Freedom Act, which will, inter alia, require Very Large Online Platforms (those with more than 45 million users in the European Union) intending to take down media content to inform the media provider before doing so. The “Very Large Online Platforms” definition has also been constructed with competition in mind: platforms that do not fall under it are exempt from a significant number of the transparency requirements mandated in the European Digital Services Act, so that compliance does not become a barrier to entry (OECD, 2024[2]).
The European Union has also taken measures to prevent very large platforms from abusing their power through its Digital Services Act. In particular, it obliges those falling under the Very Large Online Platform category to analyse the systemic risks they create, as detailed in Box 1.2.
Box 1.2. The European Union’s Digital Services Act (DSA): Risk Assessment Requirements for Very Large Online Platforms and Search Engines
The Digital Services Act highlights that Very Large Online Platforms and Very Large Online Search Engines (VLOPs and VLOSEs) can be used in ways that significantly influence online safety as well as the shaping of public opinion and discourse. Given the systemic risks this can pose, the Act requires VLOPs and VLOSEs to assess risks stemming from the design, functioning and use of their services, and to take steps to mitigate any risks identified.
The DSA specifies four types of systemic risk that should be assessed in-depth by these platforms:
Risks associated with the dissemination of illegal content, including child sexual abuse, illegal hate speech, and the sale of illegal products or services.
Risks impacting the exercise of fundamental rights as protected by the EU Charter of Fundamental Rights, including freedom of expression and information, media freedom, the right to private life, and the right to data protection.
Risks concerning negative effects on democratic processes, civil discourse, electoral processes, and public security.
Risks related to the design and functioning of such platforms having a negative impact on physical and mental well-being, including from interface design stimulating behavioural addictions and from disinformation campaigns related to public health.
The DSA highlights that when assessing these risks, VLOPs and VLOSEs should also consider content that is not illegal or does not violate their terms and conditions. It further states that the impact of algorithmic systems, in particular recommender systems and advertising systems, should be considered, and that cases where algorithmic amplification of information contributes to systemic risks should be reflected in the relevant risk assessments. To make such assessments possible, platforms are required to preserve all supporting documents related to assessments carried out, including the underlying data.
To ensure that these risk assessments, and subsequent risk mitigation efforts, are based on the best available information, the DSA states that VLOPs and VLOSEs should involve, where appropriate, other parties in their research, including recipients of the service and independent experts.
Source: EUR-Lex (2022[18]), Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), https://eur-lex.europa.eu/eli/reg/2022/2065/oj.
Canada has also tried to ensure that news businesses can compete with the monopoly-like power of digital platforms through the 2023 Online News Act, which creates a bargaining framework to encourage platforms to reach voluntary compensation agreements with news businesses, with mandatory bargaining and arbitration processes applying if this is unsuccessful. While in theory this gives news businesses greater strength in negotiating with digital giants, it carries the risk that platforms remove access to news sources entirely. Indeed, in 2023 Meta announced that people in Canada would no longer be able to view or share news content on Facebook and Instagram (Meta, 2023[21]).
Aiming to update media competition laws in light of the sector’s digitalisation, France has recently developed a new approach to evaluating the legality of mergers in the media sector. While the current system is predominantly based on whether transactions exceed a certain revenue threshold, the new system would be based on a variety of factors, including what such a merger would mean for diversity of content and independence of information, as well as quantitative factors such as audience reach. This makes it easier to determine whether certain media structures are competitive, even in situations where revenue structures are hidden or unclear (Government of France, 2022[22]).
1.4.2. Develop and apply lessons, including analysing potential market and financial consequences for business, from regulatory responses and approaches undertaken in other sectors by:
Analysing regulatory models and insights, identifying new ways of working and creating new regulatory bodies and agencies
OECD governments are adapting their institutions and policy frameworks to respond to threats posed by disinformation and to create an enabling environment for accurate, reliable, and plural information to thrive. A number of countries have stepped up their efforts, putting in place national strategic frameworks, administrative co-ordination units, task forces, and capacity building efforts – namely, institutional architecture – as they respond to disinformation and implement measures that enhance information integrity.
To ensure a coherent approach to, and provide clear impetus for, tackling mis- and disinformation, many countries have developed strategic frameworks identifying the respective roles and requirements of different departments regarding information integrity. The Netherlands, for example, published a government-wide strategy for combatting disinformation in December 2022, highlighting the cases in which the government is responsible for debunking such information and designating the Minister of the Interior and Kingdom Relations as a primary point of contact (Government of the Netherlands, 2022[23]). In other countries, strategies for combatting disinformation appear as part of other, wider policies. Estonia, for example, puts forward a set of measures for tackling disinformation campaigns in its National Security Concept, while Australia addresses disinformation in both its Cyber and Critical Tech Engagement Strategy and its Counter Foreign Interference Strategy (OECD, 2024[2]).
More effective collaboration between different parts of government has also been encouraged via an increase in cross-government co-ordination units. One recent example is Lithuania’s National Crisis Management Centre, whose roles include collecting information and data related to potential threats and co-ordinating any responses to emergencies (Government of the Republic of Lithuania, 2023[24]).
Several countries have created specific fora in this area. Australia’s Digital Platform Regulators Forum allows member regulators to share information about, and collaborate on, activities relating to the regulation of digital platforms, with a focus on how competition, consumer protection, privacy, online safety and data intersect in the issues the various regulators consider (Digital Platform Regulators Forum, 2022[25]). In a similar manner, the United Kingdom’s Digital Regulation Cooperation Forum and the Netherlands’ Digital Regulation Cooperation Platform were created to formalise co-operation structures between regulators in order to address the regulatory challenges brought about by digitalisation (Digital Regulation Cooperation Forum, n.d.[26]; Authority for Consumers & Markets, n.d.[27]).
The uniquely multi-faceted challenge that digital platforms present has also required regulators to experiment with innovative approaches. For example, France’s Centre of Expertise for Digital Platform Regulation (PEReN) has been enabled, through a regulatory exemption, to test regulatory tools directly on digital platforms. These platforms are legally mandated to co-operate, allowing PEReN to effectively explore future regulation possibilities, including ways to combat child sexual abuse, control for age online, and mitigate the negative impacts of AI. While such an approach can be effective, it is important that the experiment remains limited in its scope and timespan, as it otherwise risks undermining public trust in regulatory processes (Amaral and Hernandez, forthcoming[28]).
Countries have also developed innovative approaches through regulatory experimentation, including pilot programmes, waivers, hackathons, and exemptions via sandboxes. The latter, while broadly defined, generally refer to arrangements that allow companies to test new, innovative products within a controlled regulatory environment. While regulatory sandboxes have become increasingly popular in sectors such as renewable energy and fintech, their use within the digital media sector has been relatively limited. However, this may be set to change, with the European Commission’s proposal for an Interoperable Europe Act creating a legal basis for launching sandboxes to test innovative solutions for digital public services in cross-border contexts, allowing European administrations to collaborate more effectively both with digital platforms and with each other (OECD, 2024[29]).
Promoting and maintaining a diverse and independent media sector, and establishing independent mechanisms by which to support not-for-profit foundations, local and public service media
Beyond regulation, a plural and independent media landscape matters. In this light, the Action Plan also identifies the need to promote and maintain a diverse and independent media sector, encouraging diversity, editorial independence, and high-quality news provision. In many countries, the independence of the media was long taken for granted as part of the democratic landscape.
However, the picture here is more mixed, and this area has been under threat for some time: the World Press Freedom Index reveals that while 49% of OECD countries were ranked as having a good environment for press freedom in 2015, this share fell to 21% in 2023. Globally, the share fell from 21% to 4% over the same period, emphasising the relative strength of OECD members (RSF, 2023[30]). While a variety of factors contribute to this, including threats to the safety of journalists, a key role is played by the digitalisation of access to media, which has made it difficult for traditional media to maintain sufficient market share and, consequently, to generate revenue (OECD, 2024[2]).
Countries appear to have recently recognised the renewed importance of this area. To help promote media plurality, the 2024 European Media Freedom Act, passed by the European Parliament, promotes the stable funding of public service media and requires member states to assess the impact of media market concentrations on media pluralism and editorial independence (OECD, 2024[2]). Several countries have already taken such measures. For example, Italy’s “Single Fund for Pluralism and Digital Innovation in the Information and Media Publishing Sector” favours funding media sources that recruit journalists in the fields of digital publishing, communication and cybersecurity, with a focus on disinformation (Gazzetta Ufficiale, 2023[31]). Estonia supports Russian-language content creation designed to provide reliable information to non-Estonian speakers in the country, in order to compete with Russian state-funded propaganda (ERR, 2023[32]).
Many countries also support media in other countries that fulfil certain criteria. In France, for example, the Ministry of Europe and Foreign Affairs supports Canal France International, which in turn supports media organisations committed to providing free, democratic and unbiased information in countries receiving development aid, often in a francophone context. In a similar manner, Spain’s development agency AECID launched its “Programa Democracia” in 2023 (OECD, 2024[2]), one pillar of which pledges support for journalists, activists and academics who defend a diverse and plural media space within Spain, Latin America, and the Caribbean. Further, Germany’s development agency GIZ is currently supporting a 2022-2025 project aiming to help media outlets in the Western Balkans improve their reporting and revenue-generating capacities, in order to promote media freedom and pluralism in the region.
1.5. Working collectively through the OECD and priorities going forward
1.5.1. Working collectively through the OECD
The OECD Information Integrity Hub serves as a platform for active engagement and the exchange of good practices through the OECD Expert Group on Governance Responses to Mis- and Disinformation. The Hub Steering Group includes Belgium, Canada, Chile, Colombia, Finland, France, Greece, Korea, Italy, Lithuania, Luxembourg, the Netherlands, Norway, the United Kingdom, and the United States (OECD, 2024[33]).
A conference, “Tackling disinformation: Strengthening democracy through information integrity”, was held in November 2023, bringing together over 400 participants and 41 speakers from 23 countries, as well as experts from non-members such as Argentina, Brazil, Cameroon, and Ukraine.
The flagship report “Facts not Fakes: Tackling Disinformation, Strengthening Information Integrity” was launched on 4 March 2024 (OECD, 2024[2]).
Building on this work, the OECD is currently developing a draft Recommendation on Information Integrity, to be discussed at the Global Forum in 2024.
1.5.2. Conclusions and priorities for the future
Promoting information integrity and tackling mis- and disinformation is a priority for countries around the world. Governments have started to take action by:
Fostering societal resilience to mis- and disinformation through proactive, responsive and effective public communication informed by behavioural insights. Countries are also seeking to equip citizens with the tools to recognise and combat mis- and disinformation by focusing on critical thinking skills, digital skills and media literacy. Governments increasingly recognise that a “whole of society” approach is needed and understand the important role that civil society organisations, journalists, businesses and other stakeholders can play in raising public awareness and promoting information integrity.
Enhancing transparency and accountability by recognising the limitations of self- and co-regulatory regimes and taking steps to introduce regulatory requirements for key players such as social media platforms and search engines. In this context, the emergence and rapid diffusion of generative artificial intelligence (AI) has prompted calls for, and in some cases the adoption of, transparency requirements for the datasets used in AI training, evaluations of AI models, and clear labelling of artificial audio or video content.
Improving governance measures to uphold the integrity of information ecosystems. To keep pace with rapidly changing media landscapes, countries are adopting innovative approaches and regulatory experimentation including pilot programmes, waivers, hackathons, and exemptions via regulatory sandboxes. Governments have also strengthened their own capacity to respond effectively by introducing national strategic frameworks, administrative co-ordination units and task forces.
However, action has been slow given the rapid rise of mis- and disinformation and their damaging effects over the past decade. Governments need to work together against a rapidly moving technological frontier, engaging with all of society to deliver comprehensive and constructive solutions. The new OECD Council Recommendation on Information Integrity should offer opportunities to catalyse countries’ efforts in this area through peer learning, sharing of best practices, regular reporting and information dissemination, leveraging the Information Integrity Hub (OECD, 2024[33]).
References
[28] Amaral, M. and G. Hernandez (forthcoming), “Regulatory experimentation: Moving ahead on the agile regulatory governance agenda”.
[27] Authority for Consumers & Markets (n.d.), “The Digital Regulation Cooperation Platform (SDT)”, https://www.acm.nl/en/about-acm/cooperation/national-cooperation/digital-regulation-cooperation-platform-sdt (accessed on 22 March 2024).
[16] Bloomberg (2023), “TikTok’s Rules Deter Researchers From Crunching Data on Users, Misinformation”, https://www.bloomberg.com/news/articles/2023-09-21/tiktok-terms-of-service-strict-for-researchers?sref=MTy2GeXk (accessed on 20 March 2024).
[17] Calma, J. (2023), “Twitter just closed the book on academic research”, https://www.theverge.com/2023/5/31/23739084/twitter-elon-musk-api-policy-chilling-academic-research (accessed on 20 March 2024).
[25] Digital Platform Regulators Forum (2022), “DP-REG Terms of Reference”, https://dp-reg.gov.au/publications/dp-reg-terms-reference (accessed on 22 March 2024).
[26] Digital Regulation Cooperation Forum (n.d.), “About the DRCF”, https://www.drcf.org.uk/about-us (accessed on 22 March 2024).
[32] ERR (2023), Estonian Russian-language private media receive €1 million from state, https://news.err.ee/1608898790/estonian-russian-language-private-media-receive-1-million-from-state.
[18] EUR-Lex (2022), “Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act)”, https://eur-lex.europa.eu/eli/reg/2022/2065/oj (accessed on 23 May 2024).
[5] European Commission (2022), “Hubs of the European Digital Media Observatory now extend to the whole EU27”, https://digital-strategy.ec.europa.eu/en/news/hubs-european-digital-media-observatory-now-extend-whole-eu27 (accessed on 11 April 2024).
[20] European Commission (n.d.), “About the Digital Markets Act”, https://digital-markets-act.ec.europa.eu/about-dma_en (accessed on 14 March 2024).
[31] Gazzetta Ufficiale (2023), LEGGE 30 dicembre 2023, n. 213., https://www.gazzettaufficiale.it/eli/gu/2023/12/30/303/so/40/sg/pdf.
[12] German Ministry of Defence (2023), “Integrierte Sicherheit für Deutschland: Nationale Sicherheitsstrategie [Integrated Security for Germany: National Security Strategy]”, https://www.bmvg.de/resource/blob/5636374/38287252c5442b786ac5d0036ebb237b/nationale-sicherheitsstrategie-data.pdf (accessed on 13 March 2024).
[7] Government Communication Service (2022), The Wall of Beliefs, https://gcs.civilservice.gov.uk/wp-content/uploads/2022/09/Wall_of_Beliefs_-publication.pdf.
[8] Government of Canada (2024), “Online disinformation”, https://www.canada.ca/en/campaign/online-disinformation.html.
[22] Government of France (2022), Concentration in the media sector in the digital era: From legal rules to regulation - Executive Summary, https://www.igf.finances.gouv.fr/files/live/sites/igf/files/contributed/IGF%20internet/2.RapportsPublics/2022/Executive_summary_anti_concentration.pdf.
[23] Government of the Netherlands (2022), “Government-wide strategy for effectively tackling disinformation”, https://www.government.nl/documents/parliamentary-documents/2022/12/23/government-wide-strategy-for-effectively-tackling-disinformation (accessed on 11 April 2024).
[24] Government of the Republic of Lithuania (2023), “Lithuania’s new crisis management model presented at Baltic States Centres of Government Meeting”, https://lrv.lt/en/news/lithuanias-new-crisis-management-model-presented-at-baltic-states-centres-of-government-meeting/ (accessed on 23 May 2024).
[19] Hagedorn, K. et al. (2023), “The UK’s Online Safety Act and EU’s Digital Services Act: What Online Service Providers Should Know”, https://www.orrick.com/en/Insights/2023/11/The-UKs-Online-Safety-Act-and-EUs-Digital-Services-Act-What-Online-Service-Providers-Should-Know (accessed on 14 March 2024).
[14] Hu, K. (2023), “ChatGPT sets record for fastest-growing user base - analyst note”, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ (accessed on 14 March 2024).
[10] Impact Canada (2022), Comparing Susceptibility to Misinformation Across Climate Change and COVID-19, OECD OPSI BI Projects Archive, https://oecd-opsi.org/bi-projects/comparing-susceptibility-to-misinformation-across-climate-change-and-covid-19/ (accessed on 29 March 2023).
[13] Lai, S., N. Shiffman and A. Wanless (2023), “Operational Reporting By Online Services: A Proposed Framework”, https://carnegieendowment.org/2023/05/18/operational-reporting-by-online-services-proposed-framework-pub-89776 (accessed on 25 September 2024).
[6] Louis-Sidois, C. (2022), “Checking the French Fact-checkers”, SSRN Electronic Journal, https://doi.org/10.2139/ssrn.4030887.
[21] Meta (2023), Changes to News Availability on our Platforms in Canada, https://about.fb.com/news/2023/06/changes-to-news-availability-on-our-platforms-in-canada/.
[2] OECD (2024), Facts not Fakes: Tackling Disinformation, Strengthening Information Integrity, OECD Publishing, Paris, https://doi.org/10.1787/d909ff7a-en.
[33] OECD (2024), “OECD Hub on Information Integrity: Joining forces to fight dis- and misinformation”, https://www.oecd.org/en/blogs/2023/03/oecd-hub-on-information-integrity-joining-forces-to-fight-dis--and-misinformation.html (accessed on 26 September 2024).
[1] OECD (2024), OECD Survey on Drivers of Trust in Public Institutions – 2024 Results: Building Trust in a Complex Policy Environment, OECD Publishing, Paris, https://doi.org/10.1787/9a20554b-en.
[29] OECD (2024), “Regulatory experimentation: Moving ahead on the agile regulatory governance agenda”, OECD Public Governance Policy Papers, No. 47, OECD Publishing, Paris, https://doi.org/10.1787/f193910c-en.
[11] OECD (2024), “The OECD Truth Quest Survey: Methodology and findings”, OECD Digital Economy Papers, No. 369, OECD Publishing, Paris, https://doi.org/10.1787/92a94c0f-en.
[9] OECD (2023), “Good practice principles for public communication responses to mis- and disinformation”, OECD Public Governance Policy Papers, No. 30, OECD Publishing, Paris, https://doi.org/10.1787/6d141b44-en.
[3] OECD (2022), Building Trust and Reinforcing Democracy: Preparing the Ground for Government Action, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/76972a4a-en.
[30] RSF (2023), 2023 World Press Freedom Index – journalism threatened by fake content industry, https://rsf.org/en/2023-world-press-freedom-index-journalism-threatened-fake-content-industry.
[15] U.S. White House (2023), Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.
[4] UNESCO (2023), “Guidelines for the governance of digital platforms: safeguarding freedom of expression and access to information through a multi-stakeholder approach”, https://unesdoc.unesco.org/ark:/48223/pf0000387339 (accessed on 11 April 2024).