Mobilising Evidence for Good Governance
1. Introduction
Abstract
The chapter begins by introducing the role of quality evidence in policy making, the opportunities for evidence-informed policy-making to improve public sector effectiveness, and the current challenges to using evidence in decision-making processes. In response to these challenges, the chapter explores a core set of characteristics, in terms of standards and governance principles, to support policy makers in the use of evidence for decision-making. The chapter then presents the goal and focus of the report, along with the methodology. Lastly, the chapter summarises the principles and standards that are examined in Chapters 2 and 3.
The role of quality evidence to support evidence-informed policy-making
This report reviews the principles and standards that OECD countries use to support the design, implementation and evaluation of public interventions, as part of their efforts to foster evidence-informed policy-making. The report addresses two challenges in realising the potential of evidence-informed policy-making. The first challenge is that not all evidence is equal: some evidence is more robust and trustworthy and deserves to be given more weight in decision-making. Determining when evidence is robust and sufficient, and then communicating this to decision makers, is challenging. The evidence considered in this report covers the whole range of analysis that can be produced from the various information and data sources brought to bear in the design, implementation and evaluation of policy interventions. Standards of evidence have been developed by knowledge brokers in many countries to help communicate the strength of evidence of policies and programmes to policy makers. The second challenge is that evidence is only one of many inputs into policy-making: policy makers must balance considerations such as ethics, equity, values, economics, special interests, privacy, security and political objectives in order to maintain trust in the policy-making process. Principles for the good governance of evidence help to ensure that evidence is used appropriately and with reference to the wider administrative and political system into which it is being introduced. At the same time, it is important to recognise the messy nature of the policy process, where conflicting values, interests and constraints have to be addressed. While evidence can and should inform policy-making, it certainly cannot “determine” it, as political decisions need a discretionary space of appreciation, reflecting the political responsibility entrusted to policy makers and ministers in government. This is why the approach is described as “evidence-informed policy-making” rather than “evidence-based policy-making”, recognising that policies are also driven by values, conflicts and other non-evidential factors.
What are the challenges of contemporary policy-making?
Contemporary policy problems are increasingly ‘wicked’ in nature, involving social pluralism, institutional complexity, and scientific uncertainty (Head and Alford, 2015[1]). Simultaneously, public trust in government and political organisations has been eroded in a large number of countries following the global financial crisis, at a time when, arguably, policy makers need it most (OECD, 2017[2]). Trust has since recovered on average to pre-crisis levels, but not in the countries that were most affected (OECD, 2019[3]).
Countries, however, have since fallen into a much deeper crisis of public confidence caused by the COVID-19 pandemic. In particular, the pandemic has tested governments’ capacities to maintain and ensure trust in decision-making processes that have an immediate and deep impact on the lives of their citizens. The impacts of eroding trust in government and institutions are wide ranging, and include consequences for the effective functioning of government, which may result in costly micromanagement between organisations, government, and citizens (Prange-Gstöhl, 2016[4]). The COVID-19 pandemic has also exposed science to increased public scepticism: “Trust in research and its role in political decision making and policy changes have never been more at the forefront of public discussion and scrutiny than during the current public health crisis” (The Lancet, 2020[5]). Indeed, during the COVID-19 crisis the fabric of our democracies appears threatened by an inability of government to present a convincing model of the role of science and evidence in decision-making. This report, which builds on years of research and discussion, thus arrives in a timely manner, asking: “But what is everyone's role in strengthening this trust?” (The Lancet, 2020[5]) and, in particular, what is the role of knowledge brokers and government agencies in providing evidence to policy makers as they navigate a complex and challenging political landscape?
Moreover, the questioning of the role of traditional institutions has coincided with the digitalisation of societies, including within the public sector, and the adoption of digital communications platforms. There is growing concern about the potential for disinformation (so-called ‘fake news’) through traditional and social media, where the origins and motivations of both traditional and new sources of evidence are questioned. Both trends have only been exacerbated during the COVID-19 crisis. The traditional hierarchical approach to the dissemination of knowledge is being replaced by peer-to-peer recommendations and algorithms, even if the need for authoritative voices remains (D’Ancona, 2017[6]). Whilst citizens still hold science in high regard and there is a high level of trust in scientists (Yarborough, 2014[7]), what can no longer be taken for granted is the authority of science in the face of other pressures and alternative sources of ‘knowledge’. The many “unknown unknowns” and the uncertainty created by, for example, the COVID-19 crisis have also generated significant challenges while increasing reliance on science in the decision-making process.
Another enabler of trustworthy and evidence-informed decision-making is the increasing capacity of governments to collect, process and store digital data, and to integrate these into policy processes. The OECD has long had a recommendation on good statistical practice, which addresses the legal and institutional frameworks for official statistics, the professional independence of national statistical offices, resources, data privacy issues, the right to access administrative sources, and the impartiality, objectivity and transparency of official statistics, together with sound methodology and professional standards.1 More recently, the use and application of data in crisis management efforts has aimed at enabling more transformative, open, collaborative, pinpointed and agile data-driven approaches, while re-emphasising challenges to good data governance, including within the public sector, such as lacking health data standards and crisis-adjusted data ethics (OECD, 2019[8]). The OECD Recommendation on Health Data Governance (2019[9]) fosters co-ordination within government and among organisations to adopt common data elements and formats, quality assurance, and data interoperability standards, as well as robust, objective and fair review and approval procedures conducted with the necessary expertise. In the case of COVID-19, several countries have turned to digital technologies and advanced analytics to collect, process and share data for effective front-line responses. However, this has also come with new data governance and privacy challenges, which have had to be addressed in a crisis context (OECD, 2020[10]). For instance, few countries had frameworks in place for contact tracing and population-wide surveillance. Some have responded with laws specifying how data collection will be limited to a certain population, time period and purpose.
The result is that a wide range of new and alternative sources of information are now available to governments, all of which could feed into the policy-making and decision-making process. It is therefore all the more important that principles and standards for mobilising evidence be developed to strengthen the coherence and trustworthiness of such decision-making processes. This is, of course, complementary to the upstream efforts needed on data certification and data provenance in order to avoid data corruption and therefore misleading outputs. Moreover, in the COVID-19 crisis, public authorities in the field have a key role to play in advising on proposed new legislation and providing clarity regarding the application of existing privacy and data protection frameworks (OECD, 2020[10]).
Sound public governance has a critical role to play in maintaining trust, as it can promote fair processes and outcomes for citizens. Good public governance is also needed to improve the quality, access and responsiveness of public services. This hinges on the good or “appropriate” use of evidence to feed into the design, implementation and evaluation of public programmes and interventions. Both are key features of a smart and agile state.
Defining the role of evidence and evidence-informed policy-making
Defining what ‘evidence-informed policy-making’ and ‘evidence’ mean in a policy context is a necessary step in defining how to improve them. Evidence-informed policy-making can be defined as a process whereby sources of information, including statistics, data and available published research, are consulted before making a decision to design, implement and/or alter policies, interventions or programmes (Langer, Tripney and Gough, 2016[11]). This report adopts a correspondingly broad definition of evidence: ‘a systematic investigative process employed to increase or revise current knowledge’ (Langer, Tripney and Gough, 2016[11]). The use of evidence contributes to good governance as it enables actors to evaluate, design and update public interventions.
The evidence used to inform policy-making processes may be derived from (i) scientific evidence, (ii) policy evaluation, (iii) anecdotal observations, and (iv) opinion polls, each with varying strength of evidence (see Box 1.1). Scientific evidence on the quality and impact of public interventions, policies and programmes is based on the application of the scientific method to data coming from either:
a. randomised controlled trials (RCTs), i.e. data from controlled experiments, or
b. observational data, i.e. data collected through the observation of individuals, communities or organisations.
Box 1.1. Background definitions of types of evidence
Scientific evidence consists of the results of observations and experiments that serve to support, refute, or modify a scientific hypothesis (belief or proposition) when collected and interpreted in accordance with a widely accepted scientific method. The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century. It involves:
the formulation of hypotheses as tentative responses to questions arising from what is already known in a specific field of knowledge and as a way to generate new knowledge;
the testing of the hypothesis through a controlled experiment or other methods;
the analysis and ongoing investigation of the experimental findings.
Policy evaluation is a structured and objective assessment of an ongoing or completed policy or reform initiative, its design, implementation and results. It determines the relevance and fulfilment of objectives, efficiency, effectiveness, impact and sustainability as well as the worth or significance of a policy.
Anecdotal evidence is evidence from anecdotes, i.e. evidence collected in a casual or informal manner and relying heavily or entirely on personal testimony.
An opinion poll is a survey of public opinion drawn from a particular sample of a target population.
Sources: Adapted from Oxford Dictionary and Wikipedia; OECD (2016[12]), Open Government: The Global Context and the Way Forward, https://doi.org/10.1787/9789264268104-en; OECD-DAC (2009[13]), Guidelines for Project and Programme Evaluations, https://www.oecd.org/development/evaluation/dcdndep/47069197.pdf.
Appropriate scientific methodology needs to be applied to data to transform them into evidence and knowledge. Observational data are often treated with methods labelled quasi-experimental, because these methods compensate for departures from an experimental setting through appropriate statistical methodology (see, e.g., Crato and Paruolo, 2019[14]). The overall strength of evidence comes from the quality and reliability of the data and data sources, from the appropriateness of the methodology applied to control for bias, and from the concurrence of different parts of the evidence.
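To make the logic of such methods concrete, the short sketch below (an illustration on synthetic data, not an example drawn from this report) applies difference-in-differences, one common quasi-experimental technique: the control group's change over time is subtracted from the treated group's change, so that a shared trend is not mistaken for a policy effect.

```python
# Illustrative sketch on synthetic data: a difference-in-differences
# estimate, a common quasi-experimental method used when units were not
# randomly assigned to a policy. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

treated = rng.integers(0, 2, n).astype(bool)   # who is exposed to the policy
baseline = rng.normal(10.0, 2.0, n)            # outcome before the policy
trend, effect = 1.0, 0.5                       # common time trend, true effect
followup = baseline + trend + effect * treated + rng.normal(0.0, 1.0, n)

# Subtract the control group's change over time from the treated group's
# change, removing the trend that both groups share.
did = ((followup[treated] - baseline[treated]).mean()
       - (followup[~treated] - baseline[~treated]).mean())
print(f"Estimated policy effect: {did:.2f} (true effect: {effect})")
```

On real observational data, the credibility of such an estimate rests on the assumption that both groups would have followed parallel trends in the absence of the intervention, which is why the quality of data sources and of the bias controls matters so much.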
The role of evidence use in improving public sector effectiveness
Achieving the promise of evidence-informed policy-making
Evidence has a critical role to play in improving the quality, responsiveness, and accessibility of public interventions, services or programmes. Evidence also plays a role throughout the key stages of the policy cycle, when public interventions are designed, implemented, and evaluated. This may involve an interactive process, where citizens and users can share views, collaborate with peers or express dissatisfaction as part of a feedback loop to better understand needs and embrace innovation (OECD, 2020[15]).
Policy design benefits from ‘policy memory’: an understanding of where challenges have been experienced in the past and what previous good practice could be incorporated into the current reform effort. For instance, evidence synthesis, such as systematic reviews (see the section on evidence synthesis below), helps policy makers to prevent one-sided policy design, avoid duplication and ensure that scarce resources are directed at areas of policy requiring further solutions. It is also used to identify policies and practices that have been found to be ineffective, suggesting caution should be exercised regarding further investment in the absence of additional refinement and testing (Gough, Oliver and Thomas, 2013[16]; Torgerson and Torgerson, 2003[17]).
The implementation of public interventions and policies requires significant planning and management. Evidence provides an understanding of how policies should be adapted to meet local needs, whilst safeguarding against changes that may affect outcomes: this can make the difference between a successful implementation of an intervention and one that is ineffective or potentially even harmful (Moore, Bumbarger and Cooper, 2013[18]). Gathering evidence on factors that help and hinder implementation can facilitate the dissemination of effective interventions at scale and the delivery of outcomes at the population level (Castro, Barrera and Holleran Steiker, 2010[19]). Implementation aspects have attracted growing interest from the economics profession in recent years (Duflo, 2017[20]).
The evaluation of public policies and interventions is also critical to understanding why some complex policies work and why others do not. As one source of relevant knowledge, policy evaluation supports policy choices through an evidence-informed decision-making process. Solid evidence and its strategic use throughout the policy cycle can foster policy effectiveness, value for money, accountability, and transparency, enriching public scrutiny, facilitating shared learning and ultimately increasing public trust (OECD, 2020[21]). Ex post evaluation can also be complemented by a more agile feedback loop, so that adjustments may be made in real time.
The need to create a supportive institutional environment
An institutionalised approach to evidence gathering and use contributes to aligning isolated and unplanned evidence efforts into more formal and systematic practices, with the ability to set guidelines and incentives for evidence creation and use (Gaarder and Briceño, 2010[22]). The OECD comparative report on policy evaluation (2020[21]) identifies that legal frameworks constitute a basis for embedding the practice of evaluation across government in a systematic way. Policy frameworks also give strategic direction to a specific sector or thematic area of policy evaluation. A related OECD report (2020[23]) shows that building capacity for evidence use requires strengthening organisational tools, resources and processes, mandates, legislation and regulation. As these two related OECD reports provide significant institutional information on country frameworks and good practices, the current report focuses on a core set of characteristics that are relevant to the content of evidence and the process for integrating it into policy making, in terms of standards and governance principles.
Promoting the use of evidence and improving the evidence-to-policy interface through knowledge brokerage
Ensuring the effective use of evidence in policy-making depends on the capacity, motivation and skill of policy makers and researchers, as well as the quality of the political-administrative interface. Whilst many OECD countries have adopted mechanisms to promote a culture of evidence-informed policy-making, the systematic use of evidence and data in policy making often remains an aspiration and requires further progress to become ingrained in the civil service as a whole (OECD, 2015[24]).
Regarding the data itself, there is also a need for a coherent data governance approach, not only to support the access and sharing of data within the public sector or across different sectors, but also to secure the deployment of controls to ensure that the data is trustworthy and, depending on the data accessed or shared, protected without compromising basic rights, in adherence to ethical values (OECD, 2019[25]).
Addressing the challenges of promoting the use of evidence and improving the evidence-to-policy interface requires an acknowledgement that the worlds of policy-making and research are very different. Policy-making is intrinsically political and often reflects a mix of complex interactions and deliberative processes. It reflects different professional cultures, resources, imperatives and time frames than those of science (Olejniczak, Raimondo and Kupiec, 2016[26]). Scientific language and discourse are also distinct from the language of policy-making (Meyer, 2010[27]). As a result, policy makers may not have the time and capacity to synthesise the research literature and face a number of obstacles when accessing the latest knowledge and research (Burkhardt et al., 2015[28]; Oliver et al., 2014[29]). The process of translating knowledge and research so that it can be used in the policy-making cycle is messy and complex, and requires governments and other stakeholders to create favourable contexts, incentives and a supportive culture for evidence-informed policy-making (Ellen et al., 2013[30]).
One solution for strengthening the evidence-policy interface is establishing ‘knowledge brokers’ to bridge the gap between researchers and decision makers; these are also called ‘intermediary organisations’ in the literature (Proctor et al., 2019[31]; Association of Children’s Welfare Agencies, 2018[32]). Knowledge brokers are organisations or individuals who, thanks to their intermediary position in the system, can establish and maintain links and co-operation between knowledge producers and users. Harnessing their deep knowledge of both the research process and the policy cycle, as well as their extended connections with representatives of these two worlds, they can increase the availability of robust knowledge and build a culture and capabilities for evidence use.
Acting as the bridge between knowledge users and producers, broker organisations must fulfil a wide range of functions. To introduce more evidence into the decision-making process, they need to be sure that reliable knowledge is accessible from the field. They synthesise what researchers and end users already know, identify what kind of evidence is missing in relation to decision makers’ information needs, and fill those gaps by performing new analysis. They need to ensure that the evidence gathered reaches the right recipients in a timely and attractive manner. They can disseminate knowledge through general channels of communication as well as through targeted messages. Their role is to maximise the chances that evidence will be taken on board by decision makers. They must also foster trust in their results, for example by addressing concerns about reliability and conflicts of interest. For this purpose, they have to promote an evidence-based culture among stakeholders, translate knowledge into usable tools, and build networks of knowledge producers and users.
The need for standards of evidence to bring coherence to policy making processes
While governments recognise the importance of using evidence in the policy-making process, not all evidence is equal. Some evidence is more robust and trustworthy and deserves to be given more weight in decision-making. Determining which evidence is robust and communicating this to decision makers is challenging. Standards of evidence attempt to communicate the strength of the evidence behind policies and programmes to policy makers. Existing knowledge brokers, which include government agencies, evidence-based clearinghouses and “What Works centres”, have already made extensive progress in developing standards of evidence that are firmly rooted in the academic literature.
Although standards of evidence have great potential to improve the quality of evidence-informed policy-making, current approaches face several limitations that impede their use. Due to the proliferation of different approaches to standards of evidence, it is possible to reach different judgements about the strength of evidence of the same policy or programme. It is not surprising that stakeholders are confused about the difference between the standards of evidence used by different organisations (Puttick, 2018[33]; Means et al., 2015[34]). This can even happen within the same government, or across two sides of the same ministry, where different key parameters such as the implicit value of a life saved can be used to obtain incoherent calculations supporting a range of apparently equally valid public interventions. In other policy areas, standards cover only a restricted set of the issues that are important for the policymaking process. The existing stocktaking exercises have found that many of the approaches focus on how to determine whether an intervention is efficacious and effective (Gough and White, 2018[35]), which ignores key features of intervention design, implementation and potential for scale-up. Some approaches to evidence standards may also not serve the needs and realities of public policy (Parkhurst and Abeysinghe, 2016[36]).
Ensuring the good governance of evidence
Consideration of the standards of evidence is necessary but not sufficient for an evidence-informed approach to policy-making. These technical features must also be balanced against the inherently political nature of policy-making (Parkhurst, 2017[37]). As evidence is but one input into policy and practice decisions, actors must also weigh broader considerations, such as ethics, equity, values and political considerations. Decision makers need to think critically about what evidence is needed in the context of a particular decision, whose voices, interests and values need to be heard, and how power shapes knowledge production and use (Oliver and Pearce, 2017[38]). This ensures that ethical values and appropriate safeguarding can be assured (Leadbeater et al., 2018[39]). The governance of evidence is therefore essential to limit undue bias in decision making, to reduce the potential impact of lobbying, and to ensure that government can act in the general public interest.
Goal and focus of this report
This report aims to improve the quality and use of evidence both for central government agencies and for knowledge brokers through greater uptake of standards and principles, with a view to improving public sector effectiveness and strengthening evidence at the decision-making interface.
The report offers a first mapping of existing principles for the good governance of evidence, followed by a stocktaking of standards of evidence. The methods are described in detail below. Each substantive part offers details of the principles and standards and provides a coherent narrative about their features and focus. For each principle or standard, the report offers detailed case study examples describing the use of principles and standards in different jurisdictions. The report treats these principles and standards as cutting across different parts of the policy cycle. Each chapter offers a self-assessment checklist to help agencies and organisations think about key questions that they may consider including in their own approaches. A consolidated presentation of the principles and standards is offered at the end of this introduction. The principles and standards presented in this report should not be seen as a comprehensive, mandatory checklist, but rather as a mental check of the issues that might need to be addressed. For example, having a logic model or a theory of change is not always possible, as in the case of multiple, competing or conflicting policy goals. However, when feasible, this can be useful, and the report offers a range of insights and relevant sources.
At this stage, the current mapping remains descriptive. It opens the floor for future discussions on the issues that should be considered in the review and production of evidence. This report provides an initial resource for agencies and national experts thinking about the role of standards and principles in their evidence architecture. Reflecting both the nature of existing practice and the thematic interests of the project, the current review has focused primarily on public interventions, especially in the social policy area. As a result, the report makes a focused contribution to the full range of considerations involved in providing evidence for policy-making. For example, it is recognised that evidence plays a role in policy issues such as developing a financial aid system in the area of development, and in developing clinical guidelines for how physicians respond to an issue in the health area2. However, such issues are not directly addressed in the current report, which focuses on domestic policy interventions for the public sector as a whole. Similarly, whilst the chapter on efficacy does refer to the OECD’s work on the evaluation of regional development policy (OECD, 2017[40]) and industrial policy (OECD, 2014[41]), these issues are not addressed in detail in this report.
This report complements two other existing OECD reports on building capacity for evidence-informed policy-making (OECD, 2020[23]), and on improving governance through policy evaluation (OECD, 2020[21]) (See more in Box 1.2).
Box 1.2. An overview of related OECD work on Evidence-Informed Policy-Making
From a governance perspective, the OECD has focused on building capacity for evidence-informed policy-making through the appropriate use of evidence, and has analysed the institutionalisation, quality and use of policy evaluation systems:
1. Building Capacity for Evidence-Informed Policy-Making: this report analyses the skills and capacities governments require to reinforce evidence-informed policy-making (EIPM) and to engage with stakeholders. At the individual level, the report presents a core skillset, including the capacity to understand, obtain, assess, use and apply evidence, and to engage with stakeholders. At the organisational level, the report discusses diagnostic tools to evaluate and build governmental capacities for EIPM, as well as broader approaches to strengthen an evidence-driven culture, such as the appointment of champions for an evidence-driven approach, for example chief evaluators, chief economists or statisticians in ministries and agencies (OECD, 2020[23]).
2. Improving Governance with Policy Evaluation: this report presents the results of the first significant cross-country survey of policy evaluation practices, covering 42 countries. It adopts a systemic approach around the institutionalisation, quality and use of evaluation across countries. It discusses the main actors and key institutional frameworks, and includes a wealth of concrete experiences and good practices to make evaluation happen. Its analysis of how to ensure the quality of evaluations and how to institutionalise them in the policy process is also related to the discussion of standards and principles for the governance of evidence in the current report (OECD, 2020[21]).
From a well-being perspective, the OECD has also conducted work on the policy use of well-being metrics, analysing specifically the institutional frameworks through which these metrics are being integrated into policy making (Exton and Shinwell, 2018[42]). In the environmental area, the report on Cost-Benefit Analysis and the Environment provides an overview of methods and conditions for use, and is also referenced in the standards section of this report (OECD, 2018[43]). Other contributions flagged in the text above refer to the evaluation of regional development policy and of industrial policy. As an evidence-driven organisation, the OECD focuses on evidence-driven processes in many sectoral areas. The contribution of the approaches highlighted here is to focus on the institutional or methodological conditions that best allow for evidence to be incorporated into policy processes.
Source: Adapted from OECD (2020[23]), Building Capacity for Evidence-Informed Policy-Making: Lessons from Country Experiences, https://doi.org/10.1787/86331250-en; OECD (2020[21]), Improving Governance with Policy Evaluation: Lessons From Country Experiences, https://doi.org/10.1787/89b1577d-en.
Methodology
Conceptual framework
The conceptual framework was prepared following an initial literature review on evidence-informed policy-making, accompanied by early engagement with a set of experts from OECD countries, and complemented by an exploratory expert meeting in October 2018. The list of experts is available in the acknowledgements, and their help is gratefully acknowledged.
Principles for the good governance of evidence: This outlook was informed by the research of Justin Parkhurst (2017[37]) in “The Politics of Evidence: From Evidence-Based Policy to the Good Governance of Evidence”. This led to the extraction of key principles (listed below), which were reviewed, expanded and refined in co-operation with experts as well as with the OECD public governance community at large. This provided a framework to extract information from a range of other sources relevant to the good governance of evidence.
Standards of evidence: The starting point was “Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research” (Gottfredson et al., 2015[44]), which was complemented with expert inputs and other OECD sources3. This provided a framework to synthesise and analyse other approaches to the standards of evidence, along with existing mapping efforts that had been done originally in the UK and that have now been expanded to an international level through the current report (Puttick, 2018[33]; Breckon et al., 2019[45]; Gough and White, 2018[35]). This led to the extraction of standards of evidence (listed below), which the report then used as a framework to extract information from a range of other relevant sources.
Desk research
Comprehensive and iterative desk research was used to examine and synthesise existing approaches to both the principles and standards of evidence. This included the analysis of clearinghouses’ and organisations’ websites and their available online resources, restricted to information published in English that was available to the Secretariat by the end of 2019. To be included in the report, approaches therefore had to be (a) publicly available, with resources that could be accessed and assessed, and (b) available in English.
Engaging with experts
This report was also supported by thorough engagement with experts. Experts came from across OECD countries and from a range of organisations, including Ministries of Finance, Cabinet Offices, What Works Centres, Science Advisors, and other knowledge brokers with an interest in the principles and standards of evidence.
The expert group met twice over the course of the project, and the Secretariat maintained an iterative engagement with the expert community throughout. The first meeting took place in October 2018 and aimed to integrate different approaches and countries’ experiences towards the good governance of evidence, in order to shape the framework for the study. The second meeting took place in June 2019 and focused on the findings from the mapping exercise, offering significant opportunity for expert feedback. The expert group’s feedback has been incorporated into this version of the report, and experts were mobilised again twice in 2020.
Country coverage
Principles for the Good Governance of Evidence
Overall, the report mapped 29 approaches that covered one or more principles for ensuring the appropriate use of evidence. The sources come from a range of countries: New Zealand, Japan, the United Kingdom, the United States and Malawi, as well as from the European Union, the OECD, and the International Network for Government Science Advice (INGSA), affiliated with the International Science Council. Thirteen approaches are from the United Kingdom (UK), five from the European Union, four from the United States, two each from New Zealand and the OECD, and one each from Malawi, Japan and INGSA (see Overview of the Principles for the Use of Evidence across Jurisdictions). Whilst not part of the formal mapping of principles presented in the Annex, the report also draws on a broader range of examples from OECD jurisdictions, including Canada, Finland, France, Norway, the OECD and Portugal, among others.
Standards of evidence
This report has mapped 50 standards of evidence, which come mainly from seven OECD countries: Australia, Canada, Germany, New Zealand, Spain, the United Kingdom, and the United States, as well as approaches from the European Union and one with an international focus. Twenty-three approaches are from the United States (USA), fourteen from the United Kingdom (UK), and eight collectively from Australia, Canada, Germany, New Zealand and Spain. The European Union has four approaches, drawn from existing EU agencies and the Joint Research Centre of the European Commission (see Mapping of Existing Standards of Evidence across a Range of Jurisdictions). Whilst not part of the formal mapping of standards presented in the Annex, the report also draws on a broader range of examples from OECD jurisdictions, including Canada, Finland, France, Norway, the OECD and Portugal, among others.
Consolidated overview of the principles and standards
This section presents a consolidated overview of the Principles for the Good Governance of Evidence as well as the related Standards of evidence. Figure 1.1 below presents the analytical framework while the following sections outline each of the principles and standards.
Principles for the good governance of evidence
Appropriate evidence for the policy concern
Evidence should be selected so as to address multiple political considerations, be useful for achieving policy goals, take account of possible alternatives, and consider the local context. It should be useful for learning how to improve policy design, identifying both positive and negative effects.
Ensuring integrity (honest brokerage)
Individuals and organisations providing evidence for policy-making need processes to ensure the integrity of such advice, managing conflicts of interest and ethical conduct to avoid policy capture and to maintain trust in evidence and in evidence-informed processes for policy-making. This notion of integrity covers both the processes used to select and analyse the evidence and the processes through which the advice is then provided to policy makers.
Accountability
Accountability in decision-making means that the agent setting the rules and shape of official evidence advisory systems used to inform policymaking should have a formal public mandate, and the final decision authority for policies informed by evidence should lie with democratically representative and publicly accountable officials.
Contestability
Evidence must be open to critical questioning and appeal, which can include enabling challenges over decisions about which evidence to use. The challenge function can be organised within government, through functions such as chief economists, central review boards, or government science advisors.
Public representation in decision-making
Public engagement enables stakeholders and members of the public to bring their multiple, competing values and concerns into the evidence utilisation process, even if not all concerns can be reflected in the final policy decision.
Transparency in the use of evidence
Evidence should be clearly visible, understandable and open to public scrutiny. The public should be able to see how the evidence used to inform a decision was collected, analysed, and used in the decision-making process and for what purpose.
Building evidence through emerging technologies and mobilising data
The use of data and emerging technologies for evidence-informed policy making needs to pay due attention to the growing range of data governance instruments that address ethics, protection, privacy and consent, transparency and security, and to the development of relevant data standards for building evidence. It is important that the data and AI processes that build evidence are designed, generated, applied and used in ethical ways that respect the rights of individuals, so that the resulting evidence contributes to fostering public trust.
Standards of evidence
Standards concerning evidence synthesis
Evidence syntheses provide a vital tool for policy makers and practitioners to find what works, how it works, and what might do harm, based on thorough literature reviews that help to ensure good knowledge management of existing and previous research. As with primary studies, readers can and should appraise the quality of an evidence synthesis.
Given the breadth of literature, including impact evaluations and RCTs, published each year, knowledge management is essential, as it becomes ever more difficult for policy makers and practitioners to keep abreast of the field. Furthermore, policies should ideally be based on an assessment of the full body of evidence, not single studies, which may not provide a full picture of the effectiveness of a policy or programme.
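To illustrate the arithmetic at the core of many quantitative syntheses, the minimal sketch below (purely illustrative; the effect estimates are invented, not taken from any study) pools study-level effects with fixed-effect inverse-variance weighting, so that more precise studies carry more weight:

```python
# Illustrative sketch: fixed-effect inverse-variance pooling, the basic
# arithmetic behind many quantitative evidence syntheses (meta-analyses).
# The effect sizes and standard errors below are invented.
import math

studies = [  # (effect estimate, standard error) from hypothetical evaluations
    (0.30, 0.10),
    (0.15, 0.08),
    (0.45, 0.20),
]

weights = [1 / se**2 for _, se in studies]      # precision weights
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
```

A full systematic review adds much more than this calculation, including an explicit search strategy, inclusion criteria, quality appraisal of each study, and checks for heterogeneity and publication bias.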
Theory of change and logic underpinning an intervention: Should the intervention work?
A theory of change can be defined as a set of interrelated assumptions explaining how and why an intervention is likely to produce outcomes in the target population. A logic model sets out the conceptual connections between concepts in the theory of change, showing what intervention, at what intensity, delivered to whom and at what intervals, would likely produce specified short-term, intermediate and long-term outcomes. In some cases, a single theory of change might be difficult to identify: interactions may be multiple and complex, a unique course of action hard to pin down, and the underlying policy goals multiple or conflicting. This should not impede the activation of evidence processes.
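To show how compact such a model can be, the sketch below lays out the chain from inputs to long-term outcomes for a hypothetical school tutoring programme (the programme and every entry are invented for illustration):

```python
# Hypothetical logic model for an invented school tutoring programme,
# making explicit the conceptual chain that a logic model sets out.
logic_model = {
    "inputs": ["trained tutors", "funding for one school year"],
    "activities": ["weekly one-to-one tutoring for struggling pupils"],
    "outputs": ["30 sessions delivered per pupil per year"],
    "short_term_outcomes": ["improved reading fluency"],
    "intermediate_outcomes": ["higher end-of-year test scores"],
    "long_term_outcomes": ["reduced attainment gap"],
}

for stage, items in logic_model.items():
    print(f"{stage:>21}: {'; '.join(items)}")
```

Even this toy example makes testable claims explicit: each link in the chain is an assumption that evaluation evidence can confirm or refute.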
Design and development of policies and programmes: Can it work?
Standards concerning the design and development of policies and programmes focus on evidence that tests the feasibility of delivering a policy in practice. At the design and development stage, analysts often perform important work in testing theories of change and logic models, carrying out process evaluations and pre/post studies.
Efficacy of an intervention: Does it work?
Once an intervention has been identified as ‘promising’ in preliminary research, many standards of evidence emphasise the need for rigorous efficacy testing. Efficacy studies typically privilege internal validity, which pertains to inferences about whether the observed correlation between the intervention and outcomes reflects an underlying causal relationship. In order to maintain high internal validity, efficacy trials often test an intervention under ‘ideal’ circumstances. This can include a high degree of support from the intervention developer and strict eligibility criteria, thus limiting the study to a single population of interest.
Effectiveness of interventions: Does it work in the real world?
Efficacy trials often tell us little about the impact of an intervention in ‘real world’ conditions, because the evaluation is often overseen by the developer of the policy or programme, with a carefully selected sample. What are the benefits or harms, independently of the policy goals? Standards of evidence therefore often stipulate that a policy or programme demonstrate effectiveness in studies where no more support is given than would be typical in ‘real world’ situations. This requires flexibility in evaluation design to address cultural, ethical and practice challenges. During policy implementation, evidence is useful for understanding for whom an intervention works and for whom it does not. It is therefore important to learn how to maximise benefits and minimise harms, even within a no-policy-change scenario.
Cost effectiveness of interventions: Is it worth it?
Positive impacts at a very high price may not be in the interest of governments and citizens. Economic evidence is important to demonstrate value for money for public programmes in a context of continued fiscal constraints. A better understanding of which interventions achieve impact only at too high a price enables decision makers to allocate resources more efficiently.
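As a stylised illustration of the underlying arithmetic (all figures invented), an incremental cost-effectiveness ratio compares the extra cost of an intervention against its extra effect relative to the status quo:

```python
# Stylised incremental cost-effectiveness ratio (ICER), with invented
# figures: the extra cost per additional unit of outcome obtained by
# switching from the status quo to a new programme.
cost_new, effect_new = 1_200_000, 400   # new programme: cost, outcomes gained
cost_old, effect_old = 800_000, 250     # status quo: cost, outcomes gained

icer = (cost_new - cost_old) / (effect_new - effect_old)
print(f"Incremental cost per additional outcome: {icer:,.0f}")  # 2,667
```

Decision makers can then compare such a ratio against what they are willing to pay for one additional unit of the outcome in question.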
Implementation and scale up of interventions
Knowledge of ‘what works’, i.e. of which policies and programmes are effective, is necessary but not sufficient for obtaining outcomes for citizens. There is increasing recognition that ‘implementation matters’: the quality and level of implementation of an intervention or policy is associated with outcomes for citizens. It is thus important to understand the features of policies and programmes, of the organisation or entity implementing them, and of the other factors related to the adoption, implementation and sustainability of a policy or programme. It can also be important to put in place a systematic monitoring framework to contribute to the process of implementation and scaling up. This makes practical guidance for successful implementation possible.
References
[32] Association of Children’s Welfare Agencies (2018), Using Evidence in Practice: The Role of Intermediaries, http://www.acwa.asn.au/using-evidence-in-practice-the-role-of-intermediaries (accessed on 25 March 2020).
[45] Breckon, J. et al. (2019), Evidence vs Democracy, Alliance for Useful Evidence, http://www.alliance4usefulevidence.org/join (accessed on 22 March 2019).
[28] Burkhardt, J. et al. (2015), “An overview of evidence-based program registers (EBPRs) for behavioral health”, Evaluation and Program Planning, Vol. 48, pp. 92-99, http://dx.doi.org/10.1016/J.EVALPROGPLAN.2014.09.006.
[19] Castro, F., M. Barrera and L. Holleran Steiker (2010), “Issues and Challenges in the Design of Culturally Adapted Evidence-Based Interventions”, Annual Review of Clinical Psychology, Vol. 6/1, pp. 213-239, http://dx.doi.org/10.1146/annurev-clinpsy-033109-132032.
[14] Crato, N. and P. Paruolo (2019), Data-driven Policy Impact Evaluation: How Access to Microdata is Transforming Policy Design, Springer Nature, https://doi.org/10.1007/978-3-319-78461-8 (accessed on 19 July 2019).
[6] D’Ancona, M. (2017), “Post truth”, in Post truth: the new war on truth and how to fight back, Ebury Press, London.
[20] Duflo, E. (2017), The Economist as Plumber, National Bureau of Economic Research, https://www.nber.org/papers/w23213.
[30] Ellen, M. et al. (2013), “What supports do health system organizations have in place to facilitate evidence-informed decision-making? a qualitative study”, Implementation Science, Vol. 8/1, p. 84, http://dx.doi.org/10.1186/1748-5908-8-84.
[42] Exton, C. and M. Shinwell (2018), Policy use of well-being metrics: Describing countries’ experiences, http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=SDD/DOC(2018)7&docLanguage=En.
[22] Gaarder, M. and B. Briceño (2010), “Institutionalisation of government evaluation: balancing trade-offs”, Journal of Development Effectiveness, Vol. 2/3, pp. 289-309, http://dx.doi.org/10.1080/19439342.2010.505027.
[16] Gough, D., S. Oliver and J. Thomas (2013), Learning from Research: Systematic Reviews for Informing Policy Decisions: A Quick Guide.
[35] Gough, D. and H. White (2018), Evidence standards and evidence claims in web based research portals, Centre for Homelessness Impact, https://uploads-ssl.webflow.com/59f07e67422cdf0001904c14/5bfffe39daf9c956d0815519_CFHI_EVIDENCE_STANDARDS_REPORT_V14_WEB.pdf (accessed on 8 March 2019).
[1] Head, B. and J. Alford (2015), “Wicked Problems: Implications for Public Policy and Management”, Administration & Society, Vol. 47/6, pp. 711-739, http://dx.doi.org/10.1177/0095399713481601.
[11] Langer, L., J. Tripney and D. Gough (2016), The Science of Using Science: Researching the Use of Research Evidence in Decision-Making, EPPI-Centre, http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3504 (accessed on 31 October 2019).
[39] Leadbeater, B. et al. (2018), “Ethical Challenges in Promoting the Implementation of Preventive Interventions: Report of the SPR Task Force”, Prevention Science, pp. 1-13, http://dx.doi.org/10.1007/s11121-018-0912-7.
[34] Means, S. et al. (2015), “Comparing rating paradigms for evidence-based program registers in behavioral health: Evidentiary criteria and implications for assessing programs”, Evaluation and Program Planning, Vol. 48, pp. 100-116, http://dx.doi.org/10.1016/J.EVALPROGPLAN.2014.09.007.
[27] Meyer, M. (2010), “The Rise of the Knowledge Broker”, Commentary Science Communication, Vol. 32/1, pp. 118-127, http://dx.doi.org/10.1177/1075547009359797.
[18] Moore, J., B. Bumbarger and B. Cooper (2013), “Examining Adaptations of Evidence-Based Programs in Natural Contexts”, The Journal of Primary Prevention, Vol. 34/3, pp. 147-161, http://dx.doi.org/10.1007/s10935-013-0303-6.
[23] OECD (2020), Building Capacity for Evidence-Informed Policy-Making: Lessons from Country Experiences, OECD Publishing, Paris, https://doi.org/10.1787/86331250-en.
[15] OECD (2020), Digital Government in Chile – Improving Public Service Design and Delivery, OECD Digital Government Studies, OECD Publishing, Paris, https://doi.org/10.1787/b94582e8-en.
[10] OECD (2020), “Ensuring data privacy as we battle COVID-19”, OECD Policy Responses to Coronavirus (COVID-19), http://www.oecd.org/coronavirus/policy-responses/ensuring-data-privacy-as-we-battle-covid-19-36c2f31e/.
[21] OECD (2020), Improving Governance with Policy Evaluation: Lessons From Country Experiences, OECD Publishing, Paris, https://doi.org/10.1787/89b1577d-en.
[8] OECD (2019), Access to and Sharing of Data: Reconciling Risks and Benefits for Data Re-use across Societies, OECD Publishing, Paris, https://doi.org/10.1787/276aaca8-en.
[3] OECD (2019), Government at a Glance 2019, OECD Publishing, Paris.
[9] OECD (2019), Recommendation of the Council on Health Data Governance.
[25] OECD (2019), The Path to Becoming a Data-Driven Public Sector, OECD Digital Government Studies, OECD Publishing, Paris, https://dx.doi.org/10.1787/059814a7-en.
[43] OECD (2018), Cost-Benefit Analysis and the Environment: Further Developments and Policy Use, OECD Publishing, https://doi.org/10.1787/9789264085169-en.
[40] OECD (2017), “Making policy evaluation work: The case of regional development policy”, OECD Science, Technology and Industry Policy Papers, https://dx.doi.org/10.1787/c9bb055f-en.
[2] OECD (2017), “Trust and Public Policy”, in Trust and Public Policy: How Better Governance Can Help Rebuild Public Trust, OECD Publishing, Paris.
[12] OECD (2016), Open Government: The Global Context and the Way Forward, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264268104-en.
[24] OECD (2015), Estonia and Finland: Fostering Strategic Capacity across Governments and Digital Services across Borders, OECD Publishing, https://doi.org/10.1787/9789264229334-en.
[41] OECD (2014), “Evaluation of Industrial Policy: Methodological Issues and Policy Lessons”, OECD Science, Technology and Industry Policy Papers, Vol. 16, https://doi.org/10.1787/5jz181jh0j5k-en.
[13] OECD-DAC (2009), “Guidelines for Project and Programme Evaluations”, https://www.entwicklung.at/fileadmin/user_upload/Dokumente/Projektabwicklung/Englisch/Guidelines_for_Project_and_Progamme_Evaluations.PDF (accessed on 20 September 2019).
[26] Olejniczak, K., E. Raimondo and T. Kupiec (2016), “Evaluation units as knowledge brokers: Testing and calibrating an innovative framework”, Evaluation, Vol. 22/2, pp. 168-189, http://dx.doi.org/10.1177/1356389016638752.
[29] Oliver, K. et al. (2014), “A systematic review of barriers to and facilitators of the use of evidence by policymakers”, BMC Health Services Research, Vol. 14/1, http://dx.doi.org/10.1186/1472-6963-14-2.
[38] Oliver, K. and W. Pearce (2017), “Three lessons from evidence-based medicine and policy: increase transparency, balance inputs and understand power”, Palgrave Communications, Vol. 3/1, p. 43, http://dx.doi.org/10.1057/s41599-017-0045-9.
[37] Parkhurst, J. (2017), The Politics of Evidence: From Evidence-Based Policy to the Good Governance of Evidence, Routledge, London, http://researchonline.lshtm.ac.uk/3298900/ (accessed on 23 November 2018).
[36] Parkhurst, J. and S. Abeysinghe (2016), “What Constitutes “Good” Evidence for Public Health and Social Policy-making? From Hierarchies to Appropriateness”, Social Epistemology, Vol. 30/5-6, pp. 665-679, http://dx.doi.org/10.1080/02691728.2016.1172365.
[4] Prange-Gstöhl, H. (2016), “Eroding societal trust: a game-changer for EU policies and institutions?”, Innovation: The European Journal of Social Science Research, Vol. 29/4, pp. 375-392, http://dx.doi.org/10.1080/13511610.2016.1166038.
[31] Proctor, E. et al. (2019), “Intermediary/purveyor organizations for evidence-based interventions in the US child mental health: Characteristics and implementation strategies”, Implementation Science, Vol. 14/1, p. 3, http://dx.doi.org/10.1186/s13012-018-0845-3.
[33] Puttick, R. (2018), Mapping the Standards of Evidence Used in UK Social Policy, Alliance for Useful Evidence, Nesta, http://www.alliance4usefulevidence.org/join (accessed on 17 September 2018).
[44] Gottfredson, D. et al. (2015), “Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research in Prevention Science: Next Generation”, Prevention Science, Vol. 16/7, pp. 893-926.
[5] The Lancet (2020), “COVID-19: a stress test for trust in science”, The Lancet, Vol. 396/10254, https://doi.org/10.1016/S0140-6736(20)31954-1.
[17] Torgerson, C. and D. Torgerson (2003), “The Design and Conduct of Randomised Controlled Trials in Education: Lessons from health care”, Oxford Review of Education, Vol. 29/1, pp. 67-80, http://dx.doi.org/10.1080/03054980307434.
[7] Yarborough, M. (2014), “Taking steps to increase the trustworthiness of scientific research”, The FASEB Journal, http://dx.doi.org/10.1096/fj.13-246603.
Notes
← 2. As reflected in the work of the Cochrane organisations, https://www.cochrane.org/evidence.
← 3. See the report on cost-benefit analysis by the OECD Environment Directorate (OECD, 2018[43]).