Monitoring and Evaluating the Strategic Plan of Nuevo León 2015-2030
4. Creating a Sound Evaluation System for the Strategic Plan of Nuevo León
Abstract
This chapter provides an in-depth and comparative analysis of the mechanisms in place to evaluate the Strategic Plan 2015-2030. To this end, the chapter adopts a systemic approach to policy evaluation, providing an overview of the Nuevo León government’s evaluation system. It includes a comparative analysis of Nuevo León’s institutional set-up for evaluation, as well as its tools for promoting the quality and use of evaluation. OECD findings suggest that creating a sound evaluation system for the Plan will involve clarifying the resources available for evaluations, as well as their intended use, by establishing a policy framework for evaluation. Strengthening the competences of the council in regard to evaluation will also play a key role in ensuring the quality of its evaluations.
Introduction
Evaluation is critical in order to understand whether policies are improving and sustaining wellbeing and prosperity. Providing an understanding of what policies work, why, for whom, and under what circumstances, contributes to generating feedback loops in the policy-making process. Policy evaluation also makes a fundamental contribution to sound public governance by helping governments effectively design and implement reforms with better outcomes. It can therefore promote public accountability, increase public sector effectiveness, and ensure progress towards long-term government goals (OECD, 2020[1]). This is particularly pertinent in Nuevo León: policy evaluation and its strategic use throughout the policy cycle can support strategic planning by improving the links between policy interventions and their outcomes and impact.
This chapter provides an in-depth analysis of Nuevo León’s evaluation system, including a comparison with OECD member practices on institutionalisation, quality and use of policy evaluation. The chapter takes a systemic approach to policy evaluation, allowing for a full discussion of the mechanisms through which policy evaluation can contribute to Nuevo León’s policy cycle, strategic planning and policy tools such as regulation and performance budgeting. It provides a description of Nuevo León’s institutional framework for evaluation, as well as its tools for promoting quality and use. Moreover, it frames Nuevo León’s evaluation system in an overall Mexican context, benchmarking it against selected states, such as Jalisco and Mexico City among others. The analysis of current gaps in Nuevo León’s evaluation system provides an opportunity to propose concrete and precise recommendations for improvement.
Evaluation reflects a mainstream concern to improve government performance and to promote evidence-informed policy-making. Policy evaluation is indeed referenced in the legal framework of a large majority of countries and in all Mexican states (CONEVAL, 2017[2]), and in some cases at the constitutional level. However, most OECD countries also face significant challenges in promoting policy evaluation across government. They also face challenges in using evaluation results in policy-making, owing to the absence of a coherent whole-of-government approach in this area and a lack of appropriate skills and capacities (OECD, 2020[1]). The creation of an evaluation eco-system depends on the local political and cultural context as well as on the different drivers and objectives (such as improving budgetary performance, or gaining a better understanding of the impact of social policies). In Nuevo León, monitoring, performance budgeting and the inclusion of citizens in the public decision-making process are mobilised to ensure government effectiveness and efficiency, and to promote accountability. This is particularly true of the Strategic Plan, for which the Nuevo León Council for Strategic Planning was created with the purpose of advising the state on strategic planning and its evaluation (Article 7 of the Strategic Planning Law). However, the state of Nuevo León still lacks a sound and robust evaluation system as commonly understood among OECD countries, both from a whole-of-government perspective and for its Strategic Plan. For instance, the council appears to “monitor” rather than “evaluate” the Strategic Plan. Nevertheless, Nuevo León has institutions, actors and capacities that can be leveraged to establish a sound evaluation system.
Building a sound institutional framework for the evaluation of the Strategic Plan
A sound institutional framework for evaluation can help to incorporate isolated and unplanned evaluation efforts into more formal and systematic approaches, which aim to prioritise and set standards for methodologies and practices (Gaarder and Briceño, 2010[3]). An institutional framework can provide incentives to ensure that quality evaluations are effectively conducted. Such a framework can also contribute to improving the comparability and consistency of results across time, institutions, and disciplines (OECD, 2020[1]).
The institutionalisation of evaluation is key to sustaining a sound evaluation system, and ultimately delivering good results. Although there is no one-size-fits-all approach for how countries have proceeded with institutionalising their evaluation practices, a solid institutional framework usually includes:
Some clear and comprehensive definition(s) of evaluation. Indeed, OECD data (OECD, 2020[1]) shows that more than half of the survey respondents (27 countries) have adopted a formal definition of policy evaluation.
A practice embedded in a legal and policy framework. OECD data shows that a majority of countries (29 countries, 23 OECD countries) have developed a legal framework that guides policy evaluation while half of surveyed countries (21 in total, including 17 OECD countries) have developed a policy framework for organising policy evaluation across government (OECD, 2020[1]).
The identification of institutional actors with allocated resources and mandates to oversee or carry out evaluations, as is the case in the vast majority of countries. OECD data shows that 40 countries (including 34 OECD countries) have at least one institution with responsibilities related to policy evaluation across government (OECD, 2020[1]).
Macro-level guidance on who carries out evaluation, when and how.
Nuevo León pursues several complementary objectives through evaluation
Although Nuevo León explicitly recognises the importance of evaluation for accountability, it does not fully communicate its value in terms of policy learning. Policy evaluation facilitates learning, as it clarifies why and how a policy was, or has the potential to be, successful or not, by informing policy makers about the reasons and causal mechanisms leading to policy success or failure. At the same time, policy evaluation has the potential to improve public accountability and transparency. It also legitimises the use of public funds and resources, as it provides citizens and other stakeholders with information on whether the efforts carried out by the government, including the allocation of financial resources, are producing the expected results (OECD, 2018[4]). Learning and accountability are therefore complementary objectives that Nuevo León can explicitly pursue while conducting evaluations.
The council’s evaluation function currently focuses on increasing accountability rather than on improving policy learning. This is partially due to the conflation of the evaluation and monitoring of the Strategic Plan (see chapter 2). The annual evaluation report of 2016-2017 dedicates a whole section to how the council promotes accountability through its responsibility to inform the public about the progress of the Plan and its evaluation. More explicitly, this evaluation report is described as the council’s “first accountability exercise” (Nuevo Leon Council, 2017[5]). As such, there is an apparent lack of recognition of the full objectives and potential value of the evaluation of the Strategic Plan, which is further reinforced by the fact that the Strategic Planning Law does not explicitly state the objectives for evaluating the plan. Evaluation is not only about accountability, but also about understanding why objectives have or have not been achieved, and what can be done to improve the public intervention in question.
Making the objectives for evaluation clear and communicating those objectives in the Strategic Planning Law would create a shared understanding for decision-makers, evaluators and citizens about the importance and purpose of this policy tool.
The definitions of evaluation adopted across government in Nuevo León could be more precise
According to OECD data, a majority of OECD countries (23 out of 35) have one (11 countries) or several (12 countries) definition(s) of evaluation (see Figure 4.1). In some cases, this definition is embedded in a legal document. For instance, Japan defines evaluation in a law, the Government Policy Evaluations Act (Act No. 86 of 2001), while Argentina defines evaluation in Decree 292/2018, which designates the body responsible for preparing and executing the annual monitoring and evaluation plan for social policies and programmes (2018[92]). Other countries define evaluation in guidelines or manuals, as is the case for Mexico (general guidelines for the evaluation of federal public administration programmes (2007[6])), Costa Rica (manual of evaluation for public interventions (2018[7])), and Colombia (guide for the evaluation of public policies (2016[8])).
In Nuevo León, two definitions of evaluation co-exist. Firstly, according to the General Guidelines of the Executive Branch for the Consolidation of the Results-Based Budget and the Performance Evaluation System, evaluation can be defined as “the systematic and objective analysis of policies, budgetary programmes and institutional performance, which aims to determine the relevance and achievement of its objectives and goals, as well as its efficiency, effectiveness, quality, results and impact” (State of Nuevo Leon, 2017[9]).
The existence of this definition is a useful first step in creating a shared understanding within the public sector of the aims, tools and features of policy evaluation. Secondly, the Strategic Planning Law provides a specific definition of evaluation applicable to the Strategic Plan and the role of the council in this regard (Article 18). According to this law, monitoring and evaluation refer to the measurement of effectiveness and efficiency of planning instruments and their execution. Additionally, it states that these should be carried out by the council, in collaboration with dependencies and entities of the state administration. Yet, this definition conflates monitoring and evaluation, which are two complementary but distinct practices, with different dynamics and goals (see Table 4.1). From an analytic and practical perspective, conflating the practices of monitoring and evaluation within a single definition makes the provisions related to them unclear.
Table 4.1. Comparing policy monitoring and policy evaluation
| Policy monitoring | Policy evaluation |
| --- | --- |
| Ongoing (leading to operational decision-making) | Episodic (leading to strategic decision-making) |
| Monitoring is generally suited to the broad issues/questions that were anticipated in the policy design | Issue-specific |
| Measures are developed and data are usually gathered through routinized processes | Measures are usually customized for each policy evaluation |
| Attribution is generally assumed | Attribution of observed outcomes is usually a key question |
| Because it is ongoing, resources are usually part of the programme or organisational infrastructure | Targeted resources are needed for each policy evaluation |
| The use of the information can evolve over time to reflect changing information needs and priorities | The intended purposes of a policy evaluation are usually negotiated upfront |
Source: Open Government in Biscay (2019[10])
Indeed, policy monitoring refers to a continuous function that uses systematic data collection of specific indicators to provide policy makers and stakeholders with information regarding progress and achievements of an ongoing public policy initiative and/or the use of allocated funds (OECD, 2016[11]; OECD-DAC, 2009[12]). Policy evaluation refers to the structured and objective assessment of the design, implementation and/or results of a future, ongoing or completed policy initiative. The aim is to determine the relevance and fulfilment of policy objectives, as well as to assess dimensions such as public policies’ efficiency, effectiveness, impact or sustainability. As such, policy evaluation refers to the process of determining the worth or significance of a policy (OECD, 2016[11]; OECD-DAC, 2009[12]). Therefore, while monitoring is descriptive and an important (but not exclusive) source of information that can be used within the context of an evaluation, policy evaluation is a different activity that seeks to analyse and understand cause-effect links between a policy intervention and its results.
The absence of a clear and comprehensive definition applicable to the Strategic Plan undermines the ability of the state public administration and of the council to promote and sustain a robust evaluation system. Having a clear and comprehensive definition of evaluation would raise the awareness of practitioners and stakeholders alike, as well as clarify the goals and methods of evaluation in comparison to other similar practices (such as monitoring, spending reviews, performance management, etc.).
Therefore, Nuevo León would benefit from updating the Strategic Planning Law and/or its regulations to include two distinct definitions of monitoring and evaluation. In doing so, Nuevo León may wish to use, or refer to, the existing definition in the General Guidelines of the Executive Branch for the Consolidation of the Results-Based Budget and the Performance Evaluation System.
Similarly to a majority of OECD countries, Nuevo León has embedded the evaluation of its Strategic Plan in a legal framework
Another key component of the institutionalisation of policy evaluation is the existence of a legal or policy framework, insofar as such frameworks provide a key legal basis for undertaking evaluations, as well as guidance on when and how to carry them out. Legal and policy frameworks may also formally determine the institutional actors, their mandates and the resources needed to oversee and carry out evaluations (OECD, 2020[1]).
Several paths exist for the legal institutionalisation of evaluation practices. As shown through the OECD (2018) survey, the need for policy evaluation is recognised at the highest level, with a large majority of countries having requirements for evaluation in their primary and secondary legislation, and sometimes even in their constitution. Mexico, for instance, has provisions related to policy evaluation embedded at all three levels. The National Council of Social Development Policy Evaluation (CONEVAL) is an example of an institution responsible for evaluation whose mandates are inscribed at various legal levels (see Box 4.1).
Box 4.1. CONEVAL, the autonomous agency responsible for evaluation in Mexico
Creating and embedding CONEVAL in a multi-level legal framework:
The National Council of Social Development Policy Evaluation (Consejo Nacional de la Política de Desarrollo Social, CONEVAL), was created in 2004 through the General Law of Social Development. The law, which aims to create a monitoring and evaluation system in Mexico, established CONEVAL as a decentralised body with budgetary, technical and management autonomy charged with measuring poverty and evaluating social development policy.
In addition, the Federal Budget and Fiscal Responsibility Law (2006), which established the Performance Evaluation System and the General Evaluation Guidelines, determined the coordination between CONEVAL and other entities in charge of evaluation in the federal public administration.
The Political Constitution of the United Mexican States was amended in 2014, with a decree that defined CONEVAL as an autonomous constitutional body with its own legal personality and assets. Accordingly, it has the mandate to set standards, co-ordinate the evaluation exercises of the National Social Development Policy and its subsidiary actions, and provide guidelines to define, identify and measure poverty. In practice, the agency carries out or contracts out evaluations of the social policies developed by the Mexican government.
Source: Adapted from Cámara de Diputados del H. Congreso de la Unión (2018[13]), Ley General de Desarrollo Social [General Law of Social Development]; and Cámara de Diputados del H. Congreso de la Unión (2020[14]), Constitución Política de los Estados Unidos Mexicanos [Political Constitution of the United Mexican States].
At the state level, data from CONEVAL collected in 2017 shows that all federal entities of Mexico have a normative instrument that establishes obligations regarding the evaluation of social policies and programmes, guidelines on carrying out evaluations and requirements to publish their results (CONEVAL, 2017[2]).
Nuevo León is one such state that has implemented a legal framework for evaluation, as part of its Performance-Based Management System (“Gestión para Resultados”), in line with the national guidelines. Indeed, since the 2006 Federal Budget and Fiscal Responsibility Law (LFPRH), Mexico has transitioned to a Performance-Based Management System. One major step has been the implementation of a Results-Based Budgeting process (RBB), which attempts to link allocations to the achievement of specific results, and the development of a Performance Evaluation System (SED). The SED is defined as a set of methodological instruments used to conduct objective assessments of programme performance and impacts based on the monitoring of actions and the use of indicators (SHCP, 2018[15]).
At the national level, the main institutions responsible for the implementation of the RBB-SED system are the Secretariat of Finance and Public Credit (SHCP) and CONEVAL. Created in 2008, the National Council for Accounting Harmonization (CONAC) also plays an important role at the federal and municipal levels. It is in charge of issuing accounting norms and establishing guidelines on programme classification and the structure of public accounts.
Nuevo León and the other Mexican states are bound by these national frameworks. In particular, the Mexican Constitution (Article 134, paragraphs 1 and 2) states that resources are evaluated not by an independent unit, but by the instances established by the Federal Government and the states. This evaluation process is carried out through an Annual Evaluation Programme (PAE) devised by each state, which establishes a list of programmes and funds that need to be evaluated within the year.
In Nuevo León, the Secretariat of Finance and General Treasury of the State devises the Annual Evaluation Programme in coordination with the Comptroller. In 2020, 25 programmes are to be evaluated, such as “Nuevo León Respira”, a programme monitored by the Secretariat for Sustainable Development. In 2019, the final reports of 13 evaluations were published on the Nuevo León website (Gobierno de Nuevo Leon, 2020[16]).
The PAE specifies which type of evaluation should be conducted (design, processes, results or impact) by the designated administrative units, courts or institutions. They must follow the guidelines established by CONEVAL and the Secretariat of Finance, which is also in charge of coordinating the evaluations and of monitoring the selection of external evaluators (research institutes, universities, development organisations, etc.). Nuevo León has put in place a management response mechanism for evaluations conducted within the Annual Evaluation Programme: the administrative entities must address the evaluation’s recommendations (“Aspectos Susceptibles de Mejora”) by implementing a Management Improvement Action Plan (PAMGE).
The specific evaluation of the Strategic Plan is embedded in primary legislation, the most common level chosen among OECD and non-OECD countries. In particular, the evaluation of the Strategic Plan is embedded in the state Strategic Planning Law, the primary legislation that organises and supports the Strategic Plan. This law is also accompanied by regulations that implement its provisions.
The law and its regulations give a clear and comprehensive mandate to the council to carry out the evaluation of the Strategic Plan, in a number of different aspects:
Article 7 of the law states that the council will be an advisory body of the state executive in terms of strategic planning and evaluation, and specifies the actors that form the council.
Concerning the commissioning of evaluations to external evaluators, article 9 of the regulations also mentions that the commissions can hire additional personnel depending on the evaluation to be carried out and the availability of resources.
Article 19 of the law states that the council should evaluate the results of the Strategic Plan and the State Plan annually, based on the evolution of strategic projects and priority programmes, and the measurement of economic and social activity.
Regarding the use of evaluation results, article 21 of the law states that the results of the performance evaluations carried out by the council shall be presented to the Head of the Executive, and article 22 bis specifies that they should be sent to the state congress after decisions are made on the implementation of evaluation recommendations.
In Nuevo León, macro-level guidance on who carries out the evaluation of the Strategic Plan 2015-2030 and when is currently lacking
Having a legal framework for the evaluation of the Strategic Plan, in the form of a primary law accompanied by regulations, demonstrates the importance that the state of Nuevo León attributes to this practice, within the council and across government. However, the presence of a legal framework for the evaluation of the Plan is not enough to sustain a robust evaluation system. As regards the evaluation of the Strategic Plan, a robust evaluation system needs to specify what should be evaluated, the actors involved, their mandates, the timeline, and the methodology and tools for evaluating the Plan.
Article 19 of the Strategic Planning Law specifies that the council should evaluate the Strategic Plan on a yearly basis. However, given the large scope and number of objectives of the Plan, it is not possible to evaluate the plan in its totality – or even a single broad objective of the plan – within a single year. Conducting a proper evaluation requires time and significant resources, and most importantly needs to be supported by a clear methodology.
A proper evaluation would require a more focused approach. The council should define a limited number of evaluations to be carried out in a given year, focusing mainly on how the specific policies or programmes the government is implementing contribute to the achievement of specific objectives of the plan. For this, the council needs to develop a specific timeline for evaluations (for example, for data collection, delivering the evaluation report, etc.). Another crucial element that currently lacks any form of methodology is how the recommendations based on evaluation results should be communicated, in particular to the state public administration, and what types of formal responses, if any, should follow them. For example, the legal framework of Jalisco contains a large set of provisions, including evaluation planning, criteria for undertaking and publishing evaluations, and follow-up on results, among others (see Box 4.2, and the illustrative sketch of an annual evaluation plan that follows it).
Box 4.2. Jalisco’s legal framework for evaluation
Jalisco, like all other federal entities of Mexico, has a legal framework that establishes obligations to evaluate social policies and programmes, as well as guidelines that include criteria for performing evaluations and publishing them and their results. However, Jalisco stands out as the only state that also has legal instruments that include provisions regarding evaluation planning and follow-up on evaluation results. The state’s legal framework also specifies the objectives of evaluation, the use of evaluation results for budgetary decisions, the actors for evaluation and the definition of indicators (CONEVAL, 2017[89]). Specifically, in terms of:
Evaluation planning: Jalisco’s evaluation plan specifies the programmes to be evaluated, the types of evaluation that apply, and the deadlines by which the evaluations should be undertaken.
Publication of evaluations: Jalisco’s legal framework for evaluation requires the publication of evaluations, including information on those responsible for the evaluation and evaluators, the cost and purpose of the evaluation, the methodology used, the main results, recommendations and an executive summary. In practice, the state publishes evaluations and their results accordingly.
Follow-up on evaluation results: Jalisco’s legal framework includes provisions on the use of evaluation results. Specifically, an evaluation must specify the actions to undertake, the deadlines, responsible actors and coordination mechanisms.
The state’s main legal instrument is the General Guidelines for Monitoring and Evaluation of the Public Programmes of the Government of Jalisco, which include definitions and provisions regarding the programmes to be evaluated, the actors responsible and the evaluation process, types and scope of evaluation, communication and use of results (State Government of Jalisco, 2017[102]). These are described in detail in Box 4.4.
Source: (EVALUA Jalisco, 2017[17])
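To make these planning provisions concrete, the sketch below models what a single entry of an annual evaluation plan might look like as a data structure, with a basic consistency check on deadlines. It is a minimal illustration only: the field names and the validation rule are assumptions inspired by the elements listed above (programmes to evaluate, evaluation types, deadlines), not an actual Nuevo León or Jalisco schema; the evaluation types follow the state typology presented later in Box 4.3.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class EvaluationType(Enum):
    """Evaluation types from the state typology (see Box 4.3)."""
    DESIGN = "design"
    PROCESS = "process"
    CONSISTENCY_AND_RESULTS = "consistency and results"
    SPECIFIC_PERFORMANCE = "specific performance"
    IMPACT = "impact"


@dataclass
class PlannedEvaluation:
    """One hypothetical entry of an annual evaluation programme (PAE)."""
    programme: str                   # programme or fund to be evaluated
    evaluation_type: EvaluationType  # which type of evaluation applies
    responsible_unit: str            # unit overseeing the evaluation
    external_evaluator: bool         # commissioned out or conducted in-house
    deadline: date                   # date by which the report is due


def late_entries(plan: list[PlannedEvaluation], year: int) -> list[str]:
    """Return programmes whose deadline falls outside the programme year."""
    return [e.programme for e in plan if e.deadline.year != year]


# Illustrative entry; the programme name comes from the chapter text.
plan = [
    PlannedEvaluation(
        programme="Nuevo León Respira",
        evaluation_type=EvaluationType.PROCESS,
        responsible_unit="Secretariat for Sustainable Development",
        external_evaluator=True,
        deadline=date(2020, 11, 30),
    )
]
print(late_entries(plan, year=2020))  # [] -> all deadlines fall within 2020
```

Publishing an evaluation plan in such a structured form, as Jalisco does, makes it straightforward to check coverage, responsibilities and deadlines systematically rather than ad hoc.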
The council could consider making the mandates and timing of evaluations explicit in a policy framework
Good evaluation planning is important to ensure both the quality and the use of evaluations. Indeed, many researchers emphasise the importance of the timeliness of evaluation results in promoting their use in decision-making (Leviton and Hughes, 1981[18]): the consensus is that evaluations should be thought through well in advance and the evaluation process planned out carefully. The General Guidelines for the Consolidation of the Results-Based Budget and the Performance Evaluation System outline to the state public administration when to conduct evaluations that are linked to budgetary programming (State of Nuevo Leon, 2017[9]).
Many country frameworks for evaluations contain provisions regarding the purpose, scope (time-period, target population, etc.) and objectives of evaluations. In Nuevo León, the guidelines mentioned above provide a useful definition of the different types of evaluations, their objectives and methodologies (see Box 4.3).
Box 4.3. Nuevo León State Government’s typology of evaluation
In its Guidelines for the Consolidation of the Results-Based Budget and the Performance Evaluation System, the Nuevo León state executive provides a typology of evaluation, stating that different types of evaluations should apply throughout the Annual Evaluation Programme:
Design evaluation: analyses the order of the budgetary programme, considering its definition of purpose and objectives, its outputs (goods and services produced by the programme and delivered to the target population), the activities identified as necessary to produce the outputs, the assumptions under which the objectives of the programme were developed, as well as the problem that gave rise to the programme and how it has evolved as a diagnostic element. Its application is recommended for newly created budgetary programmes that have been implemented for one to two years.
Process evaluation: analyses the operational processes (activities) developed to transform inputs into public goods and services with public value for the target population. It optimises these processes to achieve better programme results (effectiveness, efficiency). It also generates information for improving the operational management of programmes. Its application is recommended during the third year of operation of a budgetary programme.
Consistency and results evaluation: analyses the consistency of the budgetary programme or use of state resources; the programmable federalised expenditure in terms of its design and strategic planning; the coverage, targeting, operation, and perception of the target population; and results obtained.
Specific performance evaluation: based on a synthetic review, analyses the performance of the budgetary programme, focusing on the progress of the achievement of objectives and goals through the monitoring of performance indicators.
Impact evaluation: analyses, with rigorous methodologies, the efficiency and effectiveness of budgetary programmes, the use of state resources, programmable federalised expenditure, as well as user satisfaction. The analysis includes quantitative and qualitative aspects. It identifies the effect that goods and services have on the beneficiaries once they use them.
Source: (State of Nuevo Leon, 2017[9])
Building on the state public administration’s existing guidelines and definitions, the council could establish an evaluation framework, in order to specify what type of evaluations it will conduct, for what purpose, with what resources, and in what timeframe. This can help to support the implementation of quality evaluation and provide high-level guidance and clarity for institutions by outlining overarching best practices and goals.
Evaluation policy frameworks or requirements for government institutions to undertake regular evaluation of their policies are common practice (OECD, 2020[1]). Such frameworks exist in Spain, where there is an action plan for spending reviews and an annual plan on normative impact. Likewise, since 2017, Mexico has published an evaluation programme every year. Interestingly, CONEVAL data shows that in 2017, 30 federal entities planned their evaluations, including 16 entities that had updated their evaluation plans in the previous two years, specifying the programmes to be evaluated, the types of evaluation that apply and the deadlines for carrying them out (CONEVAL, 2017[2]). Another relevant example at the subnational level that could serve as a model is the state of Jalisco’s General Guidelines for the Monitoring and Evaluation of the Programmes of the Government of Jalisco (see Box 4.4). Notably, the state of Jalisco has an evaluation strategy (Evalúa Jalisco) whose implementation is supported by a technical council (Consejo Evalúa Jalisco) composed of academics and experts in the evaluation of public policies at the national and local levels (State Government of Jalisco, 2019[19]).
Box 4.4. Jalisco’s General Guidelines for the Monitoring and Evaluation of the Programmes of the Government of Jalisco
These guidelines provide the technical bases for the implementation of the methods, guidelines and procedures of the State Monitoring and Evaluation Strategy for the programmes and public policies executed by the state of Jalisco (EVALUA Jalisco, 2020[20]).
More precisely, the guidelines include relevant elements for the implementation of an evaluation plan or strategy (EVALUA Jalisco, 2017[17]):
Definitions: including the definitions of external evaluation entities, external and internal evaluation, evaluation, indicators (results, management and performance), monitoring, public programme, terms of references, evaluation units.
Objects of evaluation: including planning instruments, budgetary programmes, etc. which must be subject to evaluation.
The implementation of evaluations: including the evaluation process (planning, selection of the external evaluator, follow-up and verification of evaluation products, devising an improvement agenda). It also includes indications on the terms of reference (who drafts them and what information they contain, i.e. object of the evaluation, objectives, type of evaluation, methodology, data sources, expected product, criteria for selection of external evaluators), financing the evaluation, and selection of external evaluators.
The types and scope of evaluation: including information on internal and external evaluation, a number of evaluation types (such as process, results, impact, etc.).
The evaluation reports: including executive summary, recommendations derived from the evaluation results (given in order of priority for achieving improvement), and the technical, operational, financial and legal implications of the implementation of the recommendations.
The Nuevo León council may wish to include several types of evaluations in its policy framework. In fact, beyond the evaluation of the impact of policy objectives, the council may also wish to carry out design evaluations. These would enable the council to provide support to the state public administration in devising action plans related to the implementation of the Strategic Plan. In particular, such evaluations would allow the state to identify the existing inconsistencies in conditions for funding and implementing the action plans with measurable and assessable goals.
Finally, the council may wish to specify whether there will be any formal response mechanisms to the evaluations it will conduct, similarly to the SED’s Management Improvement Action Plans. Specifically, it could distinguish between:
evaluations of the Strategic Plan conducted on the basis of the council’s mandate and of its own initiative, for which no formal response mechanism would be necessary;
evaluations at the request of the state public administration, where a formal response mechanism from the administration to the council may be useful to ensure greater use of the evaluation evidence. The council may also wish to share these evaluations with Congress for information. An example of such a mechanism can be found in Australia’s productivity commission (see Box 4.5).
Box 4.5. Australia’s Productivity Commission: An autonomous government body
The Australian Government’s Productivity Commission is an autonomous research and advisory body that focuses on a number of economic, social and environmental issues affecting the wellbeing of Australians. At the request of the Australian Government, it provides independent, high-quality advice and information on key policy and regulatory issues. It also conducts self-initiated research to support the Government in its performance reporting and annual reporting, and acts as a secretariat, under the Council of Australian Governments, for the inter-governmental review of government service provision.
The Commission is located in the Government’s treasury portfolio and its activities range across all levels of government. It does not have executive power and does not administer government programmes. The Commission is nevertheless effective in informing policy formulation and public debate thanks to three characteristics:
Independence: it operates under its own legislation, and its independence is formalised through the Productivity Commission Act. Moreover, it has its own budget allocation and permanent staff working at arm’s length from government agencies. Even though the Commission’s work programme is largely defined by the government, its results and advice are always derived from its own analyses.
Transparent processes: all advice, information and analysis produced and provided to government is subject to public scrutiny through consultative forums and release of preliminary findings and draft reports.
Community-wide perspective: under its statutory guidelines, the Commission is required to take a view that encompasses the interests of the entire Australian community rather than particular ones.
Source: Australian Government. “About the Commission” and “How we operate”. Accessed September 2nd 2019. https://www.pc.gov.au/about, https://www.pc.gov.au/about/operate
Promoting the quality of evaluations
Quality and use of evaluations are essential to ensuring their impact on policy-making, and thus their capacity to serve as tools for learning, accountability and better decision-making. However, both quality and use are widely recognised as among the most important challenges faced by policy-makers and practitioners in this area. This is due to a mix of skill gaps, heterogeneous oversight of evaluation processes, and insufficient mechanisms for quality control and capacity for the uptake of evidence (OECD, 2020[1]).
Quality and use of evaluations are also intrinsically linked, thereby increasing their significance for policy-makers. Use can be considered a key quality factor, since the extent to which an evaluation meets the needs of different groups of users dictates its quality (Patton, 1978[21]; Kusters et al., 2011[22]; Vaessen, 2018[23]). Likewise, evaluations that adhere to the quality standard of appropriateness – that is, evaluations that address multiple political considerations, are useful for achieving policy goals, and consider possible alternatives as well as the local context – are by definition more useful to intended users.
Quality should also be conducive to greater potential for use. Insofar as good quality evaluations benefit from greater credibility, they are likely to be given more weight in decision-making. Similarly, the quality of unused evaluations is likely to suffer, as they are not subject to critical questioning. However, in practice, it is important to recognise that quality may be associated with greater complexity of results, due to methodological requirements and limits in the use of quantitative methods, which may sometimes make the results difficult to read and interpret for a lay audience (OECD, 2020[1]).
The council may wish to develop both quality assurance and quality control mechanisms
A majority of OECD countries have developed one or several explicit mechanisms to promote the technical quality of evaluations (OECD, 2020[1]). On the one hand, quality assurance mechanisms seek to ensure credibility in how the evaluation is conducted, that is to say that the process of evaluating a policy respects certain quality criteria. On the other hand, mechanisms for quality control ensure that the evaluation design, its planning and delivery have been properly conducted to meet pre-determined quality criteria. In that sense, quality control tools ensure that the end product of the evaluation (the report) meets a certain standard for quality. Both are key elements to ensuring the robustness of policy evaluations (HM Treasury, 2011[24]).
In Nuevo León, the council’s commissions are partially composed of subject-matter experts, which could be mobilised to ensure the quality of evaluations. However, the council does not have explicit quality assurance mechanisms at present (quality standards for the evaluation process, competence requirements or skill development mechanisms, organisational measures for the promotion of quality) or control mechanisms (peer reviews of the evaluation product, meta-evaluations, self-evaluation tools and checklists, audits of the evaluation function). The council could develop one or several quality assurance or control mechanisms amongst the ones presented below.
OECD data shows that most surveyed countries have developed standards regarding both the technical quality of evaluation and its good governance. Standards are a form of quality assurance, as they enable the evaluation to be properly conducted, or ensure that the process respects certain pre-established quality criteria. In many countries, standards for good quality evaluations are embedded in guidelines, that is, non-binding documents or recommendations that aim to support governments in the design and implementation of a policy and/or practice (examples include white-books and handbooks).
International organisations have also adopted such guidelines in order to set standards for quality evaluations and the appropriate principles for their oversight (United Nations Evaluation Group, 2016[25]). At the OECD, the Development Assistance Committee’s Quality Standards for Development Evaluation (OECD, 2010[26]) includes overarching considerations regarding evaluation ethics and transparency in the evaluation process, as well as technical guidelines for the design, conduct and follow-up of development evaluations by countries. Similarly, the World Bank Group’s Evaluation Principles set out core evaluation principles for selecting, conducting and using evaluations (World Bank et al., 2019[27]) aimed at ensuring that all World Bank Group evaluations are technically robust as well as credible.
Guidelines have also been developed by sub-national entities. At the state level for example, Queensland, in Australia, has Government Programme Evaluation Guidelines, which outline a set of principles to support the planning and implementation of evaluation of programmes funded by the state government (see Box 4.6).
Box 4.6. Queensland Government’s Programme Evaluation Guidelines
Australia’s state of Queensland offers evaluation guidelines that provide minimum requirements to be met by those planning, implementing and managing evaluations of government-funded programmes. These guidelines outline principles intended to foster the quality of evaluations, both in their governance and technical aspects.
Ensuring the good governance of evaluations:
After an introduction containing several definitions and the objectives of evaluation, the guidelines explain the key steps in planning an evaluation, with a particular focus on defining the purpose and outcomes of the evaluation. They also outline practical considerations regarding the governance of the evaluation, such as determining appropriate roles, responsibilities and resources for the public officials planning and conducting the evaluation.
Ensuring the technical quality of evaluations:
The guidelines are accompanied by specific and practical indications regarding evaluation approaches as well as data collection. The first attachment gives detailed guidance and further resources for designing, implementing and delivering effective and efficient evaluations. The second attachment, on collecting evaluation data, describes different types of data collection methods, recommendations for selecting the right method and further resources.
In 2017, all Mexican federal entities had one or several instruments that specified certain quality criteria for conducting evaluations. These include the states of Quintana Roo and Yucatán, which have developed standards for evaluator competencies, types of evaluation and evaluation planning (CONEVAL, 2017[2]) (see Box 4.7).
Box 4.7. Evaluation standards in Quintana Roo and Yucatán
Quintana Roo has guidelines and criteria for conducting and coordinating evaluations, which establish the different types of evaluations to be carried out, the use of evaluation results, the frequency at which evaluations should be carried out and specific requirements that must be met by evaluators (CONEVAL, 2018[29]). For instance, Quintana Roo’s Standards for the dissemination of evaluation results from federal resources administered to federal entities (CONAC, 2015[30]) give a number of recommendations regarding:
Annual evaluation programme: this should establish the programmes subject to evaluation, the types of evaluation that apply, and a calendar for their execution.
Commissioning the evaluation: the commissioning, operation and supervision of the evaluation should be objective, impartial, transparent and independent.
Drafting the Terms of Reference: these should include objectives, scope, methodologies, profiles of evaluators and expected products of the evaluation.
Yucatán has a number of regulations regarding the evaluation processes undertaken in the state, which aim to strengthen the mechanisms for conducting quality, systematic and participatory evaluations (State Government of Yucatán, 2019[31]). For instance, the state’s General Guidelines of the Performance Monitoring and Evaluation System (State Government of Yucatán, 2016[32]) include:
Requirements for contracting out external evaluators: such as recognised experience in the type of evaluation to be carried out, submission of a work proposal including the purpose of evaluation, the methodology, the profiles of the evaluators constituting the evaluation team, etc.
Definition of evaluation types and their objects of application: such as performance, development, strategic, process, consistency and results, and impact evaluation.
Recommendations on the evaluation methodology: the Technical Secretariat is responsible for approving the methodologies that apply in external and internal evaluations. The guidelines also recommend that instruments to gather data (questionnaires, interviews, etc.) be suitable and relevant.
Guidelines developed by countries address a wide variety of specific topics including the design of evaluation approaches, the course of action for commissioning evaluations, planning out evaluations, designing data collection methods, evaluation methodologies or the ethical conduct of evaluators. Table 4.2 below gives an overview of the different quality standards, in terms of governance and quality that OECD and non-OECD countries have included in their guidelines.
Table 4.2. Quality standards included in evaluation guidelines
| Countries | Identification and design of evaluation approaches | Course of action for commissioning evaluations | Establishment of a calendar for policy evaluation | Identification of human and financial resources | Design of data collection methods | Quality standards of evaluations | Independence of the evaluations | Ethical conduct of evaluations | None of the above |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Australia | ○ | ○ | ○ | ● | ○ | ○ | ○ | ○ | ○ |
| Austria | ○ | ○ | ○ | ● | ● | ○ | ○ | ○ | ○ |
| Canada | ● | ○ | ○ | ○ | ● | ● | ● | ● | ○ |
| Czech Republic | ● | ○ | ○ | ○ | ● | ● | ● | ● | ○ |
| Estonia | ● | ● | ○ | ● | ● | ● | ● | ● | ○ |
| Finland | ○ | ● | ● | ○ | ○ | ● | ● | ● | ○ |
| France | ● | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
| Germany | ● | ● | ● | ● | ● | ● | ● | ● | ○ |
| Great Britain | ● | ○ | ● | ● | ● | ● | ● | ● | ○ |
| Greece | ● | ● | ● | ● | ● | ● | ● | ○ | ○ |
| Ireland | ● | ○ | ○ | ○ | ● | ● | ● | ○ | ○ |
| Italy | ○ | ● | ○ | ● | ○ | ○ | ● | ○ | ○ |
| Japan | ● | ○ | ● | ○ | ● | ● | ○ | ○ | ○ |
| Korea | ● | ○ | ● | ○ | ● | ● | ○ | ○ | ○ |
| Latvia | ● | ● | ● | ● | ● | ● | ○ | ○ | ○ |
| Lithuania | ● | ○ | ○ | ● | ● | ○ | ● | ○ | ○ |
| Mexico | ● | ● | ● | ○ | ○ | ● | ● | ● | ○ |
| Netherlands | ○ | ○ | ○ | ○ | ○ | ● | ○ | ○ | ○ |
| New Zealand | ● | ● | ○ | ● | ● | ● | ● | ● | ○ |
| Norway | ● | ○ | ○ | ● | ● | ○ | ○ | ○ | ○ |
| Poland | ○ | ○ | ● | ○ | ● | ● | ● | ○ | ○ |
| Portugal | ○ | ● | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
| Slovakia | ● | ○ | ○ | ○ | ○ | ● | ● | ○ | ○ |
| Spain | ● | ● | ○ | ● | ○ | ● | ● | ● | ○ |
| Switzerland | ○ | ○ | ● | ● | ● | ● | ● | ● | ○ |
| United States | ● | ○ | ● | ● | ● | ● | ● | ● | ○ |
| OECD total: ● Yes | 18 | 10 | 11 | 14 | 17 | 19 | 17 | 11 | 0 |
| OECD total: ○ No | 8 | 16 | 15 | 12 | 9 | 7 | 9 | 15 | 26 |
| Argentina | ○ | ● | ● | ○ | ● | ○ | ○ | ○ | ○ |
| Brazil | ● | ● | ○ | ● | ● | ● | ● | ○ | ○ |
| Colombia | ● | ● | ○ | ○ | ○ | ○ | ● | ● | ○ |
| Costa Rica | ● | ● | ● | ● | ● | ● | ● | ● | ○ |
| Kazakhstan | ○ | ○ | ● | ○ | ○ | ○ | ○ | ○ | ○ |
Note: n=31 (26 OECD member countries). 11 countries (9 OECD member countries) answered that they do not have guidelines to support the implementation of policy evaluation across government. Answers reflect responses to the question, “Do the guidelines contain specific guidance related to the: (Check all that apply)”.
Source: OECD Survey on Policy Evaluation (2018)
In Nuevo León, each programme evaluation under the SED’s Annual Evaluation Programme (PAE) should follow several guidelines and good practices defined at the national and state levels. For instance, CONEVAL has devised the Guidelines for the Evaluation of Federal Programmes of the Federal Public Administration, which establish the different types of evaluations to be carried out, the use of a matrix of indicators, the frequency at which evaluations should be carried out, etc. (CONEVAL, 2007[6]). The CONAC has also published its Guidelines for the Construction and Design of Performance Indicators through the Logical Framework Methodology (CONAC, 2013[33]) to help states design efficient monitoring and performance indicators. In addition, Nuevo León has published its own guidelines for the implementation of a results-based budgeting and performance evaluation system (RBB-SED): the General Guidelines for the Consolidation of a Results-Based Budget and the Performance Evaluation System of 2017.
Moreover, each evaluation unit is in charge of collecting and analysing a primary source of data composed of information contained in administrative records, databases, internal and/or external evaluations, public documentation and regulatory documents. Then, the unit has to answer a set of pre-defined methodological questions regarding the quality of the Result-based Indicator Matrix associated with the programme, the efficiency of programme management, the results in terms of outputs, transparency, citizen satisfaction, etc. The Result-based Indicator Matrix (MIR) is a major strategic planning tool that establishes the objectives of each programme. It includes several indicators measuring objectives, impacts and expected results, and identifies the sources of information needed for the performance management process.
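As an illustration of the kind of information an MIR ties together, the following is a minimal sketch of a single indicator with a simple progress calculation. The field names, figures and progress formula are illustrative assumptions for this chapter, not the official MIR schema.

```python
from dataclasses import dataclass


@dataclass
class Indicator:
    """One performance indicator of a results-based indicator matrix (MIR).

    Field names and the progress formula are illustrative assumptions,
    not the official MIR schema.
    """
    objective: str    # programme objective the indicator measures
    name: str
    baseline: float
    target: float
    actual: float
    data_source: str  # e.g. administrative records, surveys

    def progress(self) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        span = self.target - self.baseline
        return 1.0 if span == 0 else (self.actual - self.baseline) / span


# Hypothetical indicator for illustration only.
air_quality = Indicator(
    objective="Reduce days exceeding the PM10 air-quality threshold",
    name="Days above the PM10 norm per year",
    baseline=120.0,
    target=60.0,
    actual=90.0,
    data_source="State environmental monitoring network",
)
print(f"{air_quality.progress():.0%} of the way to target")  # 50% of the way to target
```

Recording the baseline, target and data source alongside each indicator is what allows the pre-defined methodological questions mentioned above to be answered consistently across programmes.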
The Nuevo León council may be interested in developing its own regulations for the evaluation of the Strategic Plan. These regulations could also be applicable across government. A methodology for evaluating the Plan should set out the objectives of the evaluation, its precise scope, the types of evaluation (design, implementation and impact), the methods of data collection, and guidance for carrying out or commissioning evaluations. In order to do so, Nuevo León may wish to draw inspiration from the guidelines developed by France Stratégie (see Box 4.8), as well as from the general positioning of this institution in the French government apparatus.
Box 4.8. France Stratégie and its evaluation guidelines
France Stratégie is a French agency attached to the Prime Minister but operating at arm’s length from the government, providing expertise on major social and economic issues through ex post evaluations of public policies, analysis notes, debates and consultative exercises. The institution has issued guidelines on impact evaluation to help decision-makers and practitioners conduct evaluations and analyse evaluation results (Comment évaluer l’impact des politiques publiques: un guide à l’usage des décideurs et des praticiens, 2016).
Firstly, these guidelines present different methods for conducting scientifically reliable impact evaluations, in order to establish a causal relationship between the public intervention being evaluated and the effects it has on its beneficiaries (relevant indicators include health, employment, education, etc.). Several methods are explained in detail (including ‘differences in differences’ and randomised controlled trials), and the guidelines emphasise, among other things, the importance of building a credible counterfactual, of choosing relevant indicators, and of avoiding selection mechanisms that could skew the results of the evaluation. In the next section, the guidelines address the question of how to analyse evaluation results and identify the reasons for the success or failure of a public policy. Finally, the guidelines explain how to compare the effects of multiple policies with the same goal and choose the most efficient one. This last section covers cost-benefit and cost-effectiveness analyses.
France Stratégie’s guidelines are concrete and user-oriented, as they take into account a wide range of scientific and operational constraints that often surround the implementation of evaluations. Such constraints include the availability, breadth and quality of data and the evaluation budget. These constraints often determine the evaluation method to use, as different methods require different types of data (for instance, the matching method requires rich data on individuals and their social and economic environment).
Finally, France Stratégie’s guidelines are policy-oriented and help decision-makers bridge the gap between policy evaluation and decision making, as they give clear recommendations on how to use evaluation results to improve public policies, and how to strengthen the evaluation capacities of policy-makers. The guidelines recommend, inter alia, conducting systematic reviews of the existing evidence in order to assess whether evaluation results converge and diverge depending on the institutional context of the policy. The need for policy-makers to institutionalise and operationalise the production and access to data is also emphasised in the guidelines.
Source: (France Stratégie, 2016[34])
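To give a flavour of the ‘differences in differences’ method that the France Stratégie guidelines describe, the sketch below estimates a programme effect from simulated data. Everything here is invented for illustration: the group structure, the built-in effect of +2.0 and the sample sizes are assumptions, and a real impact evaluation would also report uncertainty and test the parallel-trends assumption the method relies on.

```python
import random

random.seed(42)


def outcomes(n: int, base: float, trend: float, effect: float):
    """Simulate individual outcomes before and after a (fictitious) programme."""
    pre = [base + random.gauss(0, 1) for _ in range(n)]
    post = [base + trend + effect + random.gauss(0, 1) for _ in range(n)]
    return pre, post


# Treated units receive the programme (true effect = +2.0); control units do not.
# Both groups share the same underlying time trend of +1.0.
treated_pre, treated_post = outcomes(500, base=10.0, trend=1.0, effect=2.0)
control_pre, control_post = outcomes(500, base=8.0, trend=1.0, effect=0.0)


def mean(xs):
    return sum(xs) / len(xs)


# Difference in differences: the control group's pre/post change serves as the
# counterfactual for what would have happened to the treated group without the
# programme, so subtracting it removes the common time trend.
did = (mean(treated_post) - mean(treated_pre)) - (
    mean(control_post) - mean(control_pre)
)
print(f"Estimated programme effect: {did:.2f}")  # close to the true +2.0
```

The same skeleton makes clear why a credible counterfactual matters: if the control group followed a different underlying trend, the estimate would absorb that difference as a spurious programme effect.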
The council could develop competencies to commission or conduct in-house evaluations
While quality guidelines and standards provide evaluators with resources to help them make appropriate decisions when conducting evaluations, evaluators also need the appropriate competencies. Evaluators’ competencies comprise the appropriate skills, knowledge, experience and abilities (Stevahn et al., 2005[35]; American Evaluation Association, 2015[36]). The American Evaluation Association, for instance, has developed a list of core evaluator competencies (American Evaluation Association, 2015[36]), which covers the professional, technical, interpersonal, management and organisational skills necessary to be an evaluator – thus reflecting the wide variety of competencies the profession requires beyond technical expertise.
For Nuevo León to conduct high-quality evaluations in the long term, it will therefore be important to invest in skills by promoting evaluator competences. Indeed, the council currently does not have sufficient evaluation competences within its commissions and technical secretariats to commission or conduct in-house evaluations. In order to ensure the technical quality of its evaluations, the council may wish to consider several scenarios.
Firstly, the council should rely on external evaluators’ competences to conduct evaluations in the short to medium term. In particular, the council can rely on the state’s universities and research centres to commission evaluations. The council may wish to define some quality standards for commissioning evaluations that include competence requirements for the evaluators, since terms of reference (ToRs) constitute an essential tool for quality assurance (Kusters et al., 2011[22]). These guidelines may also cover issues such as the scope of the evaluation, its methodology and goals, the composition of the evaluation team, the evaluation budget and timeline, and the type of stakeholders to be engaged (Independent Evaluation Office of UNDP, 2019[37]).
In the longer term, the council may wish to establish mechanisms to develop the appropriate competencies to conduct in-house evaluations. The council can therefore consider organising trainings for evaluators (i.e. the commissioners and technical secretariat staff). Indeed, OECD survey data shows that training evaluators is the most commonly used technique for competency development, with half of respondent countries having implemented such training. Staff could be incentivised to take courses (including online training) on different issues such as impact evaluation of public policies, programmes and projects (International Training Centre, 2019[38]), economic evaluation or data collection for programme evaluation offered by the University of Washington (University of Washington, 2020[39]). Beyond the use of trainings, the council may also wish to consider hiring staff with the appropriate technical skills to conduct evaluations, such as staff with previous evaluation experience in a multidisciplinary setting.
Finally, another way in which the council could develop the competences of its evaluators is by fostering networks of evaluators. OECD data shows that a common quality assurance mechanism that countries have implemented is the establishment of a network of evaluators.
The Nuevo León council could consider furthering its role as an evaluation champion by forming such networks within the state. The council’s Knowledge Network (“Red de Conocimiento”) could be a successful starting point for developing such an evaluator network within Nuevo León as well as across Mexican states. This network currently fosters the collaboration and participation of Nuevo León’s academic community, with a particular focus on promoting applied research to support the monitoring and evaluation of the Strategic Plan (Consejo Nuevo León, 2019[40]). By bringing together academics from across the state in thematic forums, the Knowledge Network could become an informal hub for exchanging practical and technical experiences related to evaluation. Hosted by the council, the Knowledge Network could easily connect its academics to the members of the council, namely practitioners, public servants, and representatives from the private sector and civil society. Evaluators and relevant stakeholders from other Mexican states and their evaluation councils could be invited to forums, meetings or even webinars to enlarge the scope of exchanges. Mexico City’s Council for Evaluation of Social Development is an interesting potential participant, since it already works with a network of external evaluators from civil society and academia to evaluate social programmes (OECD, 2019[41]).
Controlling the quality of the evaluation product
Quality control tools ensure that the product of the evaluation (the report) meets a certain quality standard (HM Treasury, 2011[24]). Quality control mechanisms are much less common than quality assurance mechanisms, with only approximately one third of countries using them. They may therefore constitute an area for development in order to ensure that evaluation reports and evaluative evidence meet a high quality standard (OECD, 2020[1]).
The most common quality control mechanism used by countries to promote the quality of the end product of evaluations is the peer review process. In a peer review, a panel or reference group composed of external or internal experts reviews the technical quality and substantive content of an evaluation. The peer review process helps determine whether the evaluation meets adequate quality standards and can therefore be published.
The council could consider submitting its evaluations to peer reviews by experts (for instance academics) before they are published. Thanks to the composition of the council, and in particular the experts and academics who are part of the commissions, the council can build relationships with a community of potential peers who could contribute to controlling the quality of its evaluation products.
Some countries have also developed tools aimed either at the evaluators themselves (i.e. self-evaluation) or at the managing and/or commissioning team (quality control checklists, for example) to help them verify whether their work meets the appropriate quality criteria. Quality control checklists aim to standardise quality control practices for evaluation deliverables and, as such, can be useful to evaluation managers, commissioners, decision-makers or other stakeholders when reviewing evaluations against a set of pre-determined criteria (Stufflebeam, 2001[42]).
The evaluation unit for development assistance in the European Commission, for example, includes a clear quality criteria grid in its terms of reference, against which the evaluation manager assesses the work of the external evaluators (OECD, 2016[43]). Self-evaluation, on the other hand, is a critical review of project/programme performance by the operations team in charge of the intervention. Although less commonly used, self-evaluation tools can form an important element of a quality control system (OECD, 2016[43]), as they constitute the first step in the control process. The council could consider designing a checklist for evaluations to help evaluators control the quality of their own work. Examples such as New South Wales’ evaluation toolkit (see Box 4.9) show that initiatives to foster the technical quality of evaluations have also been undertaken at the state level.
Box 4.9. The New South Wales Government’s Evaluation Toolkit and the Better Evaluation website
The New South Wales (NSW) Evaluation Toolkit is an online resource that provides advice and tools for planning and conducting programme evaluations. The toolkit supports government agencies in implementing the NSW Government Programme Evaluation Guidelines, developed by the Centre for Programme Evaluation, which provides advice on evaluation design, conducts evaluations and fosters capacity building for evaluation (New South Wales Government, 2020[44]).
The toolkit supports evaluation managers and internal or external evaluators in managing an evaluation project, choosing appropriate methods, using them well, and meeting the quality standards set in the associated guidelines. It provides concrete guidance through seven detailed steps to ensure evaluation quality in terms of technical rigour, practical feasibility, utility and ethics (New South Wales Government, 2020[45]). A key resource that complements the toolkit is the Better Evaluation website, where key actors from across the globe continuously provide information and guidance on evaluation. More than 200 evaluation methods, tools and resources are currently accessible, on topics ranging from defining what is to be evaluated to synthesising evaluation data, reporting, and using evaluation results.
Meta-evaluation is another tool: the evaluation of an evaluation, conducted to control its quality and/or assess its overall performance (Scriven, 1969[132]). Nowadays, the term mainly refers to evaluations designed to aggregate findings from a series of evaluations. In this latter sense, meta-evaluation is an evidence synthesis method that serves to assess the quality of a series of evaluations (through their reports and other relevant sources) and their adherence to established standards. As such, meta-evaluations constitute a useful tool for reviewing the quality of policy evaluations before an evaluation is made publicly available. A relatively limited number of countries use meta-evaluations to control the quality of evaluations, due to a lack of skills, familiarity or methods (OECD, 2020[1]).
Promoting the use of evaluations
While quality is very important and can facilitate use, it is not enough to guarantee the use of evaluations, which remains an important challenge faced by many countries. Indeed, in general, connections between evidence and policy-making remain elusive (OECD, 2020[1]). This appears to be the case in Nuevo León as well, at least within the scope of the Strategic Plan.
Promoting the use of policy evaluations is linked to how evaluations are communicated within and outside the public sector, and how (if at all) evaluations are used to improve the impact and future design of public policies. More precisely, the instrumental use of policy evaluation implies that evaluation recommendations inform decision-making and lead to an alteration in the object of evaluation (Ledermann, 2012[46]). In that sense, effective use of evaluations is key to embedding them in policy-making processes and to generating incentives for the dissemination of evaluation practices. It is a critical source of feedback for generating new policies and developing the rationale for government interventions.
Conversely, if evaluations are not used, gaps will remain between what the evidence suggests is effective and policy and decision-making in practice. Simply put, evaluations that are not used represent missed opportunities for learning and accountability (OECD, 2020[1]). Moreover, the weak link between evidence and policy-making is compounded by the fact that underuse of evaluations may jeopardise the legitimacy of the evaluative exercise in the first place. When decision-makers ignore the results of evaluations, the claim for further analysis is undermined (Leviton and Hughes, 1981[18]): unused evaluations may contribute to an impression of excess supply, whereby quality evidence gets lost in the shuffle. Underuse also represents a waste of public resources: policy evaluations, whether conducted internally or contracted out to external stakeholders, require significant human and financial resources, which are lost if the evaluations lead to no outcomes.
Nuevo León can promote the use of evaluations through the following mechanisms, which a large majority of countries have put in place:
conducting utilisation-focused evaluations;
promoting access to, and the uptake of, evaluation results;
embedding the use of evaluation results in the institutional set-up, within and outside of the executive, through, for instance, the discussion of evaluation results at the highest level of government and the creation of management response mechanisms;
increasing demand for evaluations through competency development.
As mentioned at the beginning of this chapter, the government of Nuevo León has put a management response mechanism in place within its performance evaluation system. According to OECD data, the use of formal management response and follow-up mechanisms is relatively infrequent. However, some countries, like Costa Rica and Mexico, do implement such mechanisms. Indeed, at the national level, Mexico has implemented a follow-up process for external evaluation recommendations, which defines the actors responsible for constructing the tools that will track the aspects of programmes and policies to be improved.
In line with the national level, Nuevo León’s final evaluation reports must identify areas for improvement (“Aspectos Susceptibles de Mejora”, ASM), which include the weaknesses, opportunities and threats affecting the programme’s efficiency. According to article 39 of the Strategic Planning Law regulations, the administrative entities must act on these recommendations by implementing a Management Improvement Action Plan (PAMGE). This action plan includes strategic actions aimed at improving the design, processes and implementation of the policy or programme evaluated. Each action is associated with a percentage of progress and a person in charge of its implementation. The use of findings in the case of the PAE is also facilitated by the fact that the final evaluation report, including the ASM and the PAMGE, must be publicly available on the state website (article 42 of the NL regulations).
Although the council has promoted utilisation-focused evaluations and makes evaluations available to the public, the use of evaluations is still a challenge in Nuevo León
Countries have developed mechanisms to ensure that evaluation processes are utilisation-focused, meaning that evaluations are conducted in a way that is fit for purpose and takes into account the needs of their primary users and the types of intended uses (Patton, 1978[21]). Empirical research (Johnson et al., 2009[47]) has found that user-focused evaluations share several features:
they are methodologically robust and credible (for a discussion of the determinants of credible evaluations, see the section on ‘Promoting the quality of evaluations’);
users and stakeholders are involved in the evaluation process;
the evaluation methodology is perceived as appropriate by users.
The Nuevo León council is, in and of itself, a body composed of a wide variety of stakeholders, including state representatives, academics, the private sector and civil society. The evaluations it conducts can therefore be considered utilisation-focused in the sense that stakeholders are engaged and can be consulted at any point during the evaluation process. This shows that the state government of Nuevo León, like other governments, is eager to engage a wide range of stakeholders in the decision-making process to generate a broader consensus and increase the legitimacy of public-policy decisions (OECD, 2016[11]). Similarly, OECD data shows that a majority of countries report engaging stakeholders in the evaluation of their policy priorities (see Figure 4.2).
In fact, evidence shows that policy-makers are more likely to seek and use evaluation results obtained from trusted, familiar individuals or organisations than from formal sources (Oliver et al., 2015[48]; Haynes et al., 2012[49]). Having state representatives in the council could therefore increase their trust in, and use of, the evaluations it produces. Likewise, communicating findings to stakeholders as the evaluation progresses, or involving stakeholders in the design of the evaluation, which the council can easily do with its members, can also favour their adherence to, and understanding of, the results (Fleischer and Christie, 2009[50]).
Within the council, even though the yearly “evaluation report” is in fact a monitoring exercise and, as mentioned before, an actual yearly evaluation of the Plan will not be feasible, some provisions concerning stakeholder engagement during this process are worth mentioning. During this review process, the document is presented to each commission in at least four instances: when the process is launched, when the first draft is finished, when the recommendations are written, and when the evaluation is published. Interestingly, commissions are invited to take part in the whole process and must provide their opinions and suggestions at all stages, as well as discuss the recommendations. The earlier and more actively users are involved in an evaluation process and in the dissemination of its results, the more likely they are to use those results (Patton, 1978[21]). The council should continue actively involving commissions in this manner.
However, although the council is composed of a wide variety of stakeholders, this does not mean that they are all equally involved in the evaluation process. Indeed, as mentioned in chapter 1, the council may be seen as over-representing the private sector and giving too little voice to representatives of civil society. Yet it can be argued that citizens, as the primary intended users of the policy being evaluated, are the most important stakeholders to include in the evaluation process (Kusters et al., 2011[22]). Moreover, civil society actors are certainly influential in the institutionalisation of policy evaluation and critical in facilitating demand for evaluation (OECD, 2020[1]). In order to balance the voices of each stakeholder across commissions, the council could consider adapting the composition of its commissions to ensure greater representation of civil society and citizens (see chapter 5 for a detailed discussion of the composition of the council).
Utilisation-focused evaluations also require that the evaluation’s set-up, understood as the planning, resources and communication channels involved in the creation and use of results, be tailored to policy-makers’ needs in order to facilitate use in practice (Patton, 1978[21]). The resources for evidence should match the demand of policy-makers in terms of timing and format. Finally, the evaluation questions foreseen by the evaluator should be set to match users’ needs (Patton, 1978[21]). In Nuevo León, public officials providing information requested by the council for the evaluation sometimes have little or no knowledge of the commissions’ work and the way it relates to their own.
In that case, the council could consider consulting the intended users (secretariats, policy-makers and practitioners) early on in the evaluation process in order to ensure that they know about the evaluation and will use it. Going further, the state public administration may wish to commission specific evaluations from the council, on topics that are of importance to the secretariats. The role of the centre of government would be critical for communicating and co-ordinating the demand for evaluations across the public administration. There should also be strong co-ordination with the Secretariat of Finance, given that it issues the annual evaluation plan (PAE) every year. The evaluations carried out by the council should be complementary to, or cover different subjects from, those co-ordinated by the Secretariat of Finance.
A key aspect of ensuring the use of evaluations is ensuring access to them. Indeed, policy-makers and stakeholders cannot use evidence and the results of evaluations if they do not know about them (Haynes et al., 2018[51]). The first step in promoting use is therefore making the results available to their intended users; simply put, results must be communicated and disseminated to stakeholders.
Nevertheless, while publication is a useful first step in promoting access to the evaluation report, it is not enough. Indeed, research suggests that publication alone does not significantly improve the uptake of evaluations in policy (Langer, Tripney and Gough, 2016[52]; Dobbins et al., 2009[53]; Haynes et al., 2018[51]). Rather, the presentation of evidence should be strategic and driven by the evaluation’s purpose and the information needs of intended users (Patton, 1978[21]). As such, evaluation results ought to be well synthesised and tailored for specific users in order to facilitate their use (Haynes et al., 2018[51]).
More generally, evaluation reports such as those published by the council may not always be easily understood or assimilated by the wider public and by policy-makers, who may not have the time to read lengthy reports. Recommendations can also be too general, making it difficult to identify clear policy actions. This is why the states of Baja California, Jalisco, Estado de México, Morelos, Oaxaca, Puebla, Querétaro and Tabasco include an executive summary in their evaluation reports (CONEVAL, 2017[2]).
In the future, the council could also consider including an executive summary in its evaluation reports, written in clear, concise and user-friendly language that enables citizens and policy-makers to acquire a rapid understanding of the evaluation and its results. Chapter 5 also provides recommendations on how the council could translate this evidence into more understandable language for policy-makers and larger audiences.
Overall, despite engaging stakeholders, publishing results and thereby promoting utilisation-focused evaluations to a certain extent, the council’s evaluation results may not systematically translate into better uptake of policy evaluation in decision-making. Indeed, the uptake of evaluation results is a complex phenomenon that demands broader and more systematic measures. As will be discussed in the following sections, one solution for increasing demand for evaluations is to create an evaluation marketplace by embedding the use of evaluations in the institutional set-up; another is to promote decision- and policy-makers’ skills for evidence use.
The council could consider developing a communication strategy to promote the uptake of evaluation evidence
The council also sends out a press release on the day its “evaluation report” (which is in fact a monitoring exercise) is released and delivered to congress and the Governor. As previously discussed, evidence should not only be accessible to the public and policy-makers, but should also be presented in a strategic way, driven by the evaluation’s purpose and the information needs of its intended users.
In order to tailor evaluation evidence to different publics, the council may wish to develop a communication strategy that adapts the way research findings are presented to their potential users. In particular, the council may wish to elaborate a communication strategy tailored to civil servants and decision-makers in the state public administration to ensure greater uptake of its evaluations within the administration. Such a strategy could include the use of infographics, tailored syntheses of research evidence (for example in the form of executive summaries, which are especially useful for decision-makers), and the dissemination of ‘information nuggets’ through social media and seminars to present research findings (OECD, 2016[43]; OECD, 2018[54]).
Such tailored communication and dissemination strategies, which increase access to clearly presented research findings, are very important for use. An interesting example is the UK What Works Network, which includes the Education Endowment Foundation, the Early Intervention Foundation and the What Works Centre for Local Economic Growth, and whose centres produce a range of policy briefs to disseminate key messages to their target audiences. Box 4.10 below presents CONEVAL’s use of infographics and storytelling to present evaluation results and their impact on citizens, another tool that the council could consider developing on its website.
Box 4.10. Communicating evaluation results: CONEVAL’s emblematic cases
Storytelling in local cases where evaluation is used to contribute to social development
CONEVAL’s website publicly shares five stories that demonstrate how the information generated by CONEVAL’s evaluations has been used to improve social development. These five evaluation cases are presented in a user-friendly, clear and concise way, through stories that involve characters, dialogues, and sometimes references to international newspaper articles. The stories also present the context in which the actual evaluation took place, the data that was used and generated through it, and how its results affected people.
For instance, the “Value of time” story is a monologue in which the protagonist describes the transformation of an elementary school as part of the Full-Time School Programme. The programme had been consolidated thanks to the recommendations made by CONEVAL through the monitoring and evaluation of its performance. The story reflects the reality of a programme evolving with verifiable results and impact.
Another story describes the experience of a state government that implements a social policy strategy based on the multi-dimensional measurement of poverty methodology issued by the National Council of Evaluation of Social Development Policy. The story explains how the three orders of government defined and focused their public policies to maximise the well-being of citizens through a participatory process.
The website also includes infographics that summarise, with brief texts and pictures, some evaluation initiatives undertaken by CONEVAL and their results on citizens.
Source: Adapted from CONEVAL (2019[55]).
Nuevo León may benefit from systematically embedding evaluation into the policy-making cycle
While individual competencies are important, formal organisations and institutional mechanisms lay the foundation for evidence-informed policy-making that can withstand leadership transitions (Results for America, 2017[56]). The use of evaluations is intimately linked to organisational structures and systems, insofar as these create fertile ground for the meeting of evaluation supply and demand.
Institutional or organisational mechanisms that enable the creation of an evaluation marketplace can be found either at the level of specific institutions, such as management response mechanisms, or within the wider policy cycle, such as through the incorporation of policy evaluation findings into the budget cycle or discussions of findings at the highest political level. As will be discussed in the following section, the state of Nuevo León could consider implementing or strengthening such mechanisms to promote the uptake of its evaluation results.
The incorporation of evaluation findings into the budgetary cycle is one of the most commonly used mechanisms for promoting the use of evaluations (OECD, 2020[1]). Indeed, the evidence resulting from evaluations can be used more or less systematically in the budgetary cycle depending on the model of performance budgeting adopted (presentational, performance-informed, managerial, or direct performance budgeting). In most OECD countries, performance evidence is included in the budget cycle according to one of the first three approaches. Some countries, such as Denmark, the Netherlands and Germany, conduct annual spending reviews to inform certain allocation decisions. At the Mexican state level, all federal entities, including Sinaloa, Coahuila and Tamaulipas, have legal provisions for using evaluation results in budgetary decisions (CONEVAL, 2017[2]).
In Nuevo León, the policy evaluations conducted by the council could also be used, as part of budgetary discussions in congress, to inform the state’s budget decisions. For instance, the council’s policy and programme evaluations could be included as an annex to the main budget document, when relevant. Greater tracking of budgetary programme spending by the state public administration would also allow Nuevo León to conduct spending reviews, which could be informed by evaluation results.
The state public administration and the council could also consider discussing evaluation results at the highest political level. In Korea, for instance, in the context of the “100 Policy Tasks” five-year plan, evaluation results are discussed in the Council of Ministers. Other countries have set up specific committees or councils, most often at the centre of government, to follow up on the implementation of policy evaluations and/or discuss their findings. The Brazilian Committee for Monitoring and Evaluation of Federal Public Policies is an example of such a committee, bringing together high-level representatives from the executive (Presidency of the Republic, Ministry of Finance, Ministry of Planning, and Ministry of Transparency) and from the Comptroller General of the Union (CGU).
Discussions concerning the implementation of evaluation findings could be held within the council, bringing together high-level representatives from the administration and a variety of sectors. Members of the council, including representatives from the executive, would be able to exchange views on the use of the evaluation results produced. The executive representatives could then present the conclusions of these discussions to the state administration by organising a second round of discussions with high-level political representatives. These discussions could focus on the implementation of recommendations from the council’s evaluations and the follow-up on this implementation. Such discussions could thus be organised on a systematic rather than an ad hoc basis.
Outside the executive, congress could also play a key role in promoting the use of evaluation evidence. In addition to ensuring state accountability, congress has the potential to develop a more structured and systematic approach to using evaluations (OECD, 2020[1]). Parliaments have been shown to be instrumental in increasing evaluation use by promoting the use of evaluative evidence in the budgetary cycle (for instance by requiring more performance data on government spending), introducing evaluation clauses into laws, and commissioning evaluations at the committee level in the context of hearings (Jacob, Speer and Furubo, 2015[57]). Indeed, OECD data shows that 21 member countries incorporate findings from evaluations into the budget cycle (OECD, 2020[1]). In Nuevo León, article 22 of the Strategic Planning Law stipulates that the evaluation report of the Strategic Plan is sent to congress and that the executive will communicate to this same body the actions it intends to take following the recommendations of the council. This provision constitutes an important step in promoting evidence-informed policy-making in the state of Nuevo León, and one that would be worth preserving should the Strategic Planning Law be modified or a new law be adopted. Nuevo León could also consider updating the Strategic Planning Law to stipulate that evaluations conducted at the request of the state public administration be shared with congress for information.
Recommendations
Building a sound institutional framework for the evaluation of the Strategic Plan
Adopt a comprehensive, whole-of-government definition of evaluation. This could entail updating the regulations for the Strategic Planning Law to specify that this definition applies to all public initiatives, including the Strategic Plan. The Strategic Planning Law itself could also be updated to include this definition.
Establish a clear schedule for the evaluation activities of the Strategic Plan, specifying how many and which programmes and policies are to be evaluated, the evaluator (what competences they must have, and whether they are internal or external to the council), and when and how the evaluation should be conducted.
Update the council’s mandate to evaluate the Plan:
Update Article 19 of the Strategic Planning Law by replacing the yearly timeline with one that is based on the evaluation types to be carried out and their corresponding stages in the Strategic Plan (design, implementation, results and impact).
Update Article 19 of the Strategic Planning Law to specify that the evaluation of the Plan should be carried out according to a specific methodology that facilitates learning and understanding of what works and why (see below).
Develop a policy document framing the evaluation activities for the council. The policy document could include:
(i) A description of the different types of evaluation (i.e. design, process and impact evaluations) to be carried out at different stages of the Plan, and of the type of policy initiative to be evaluated (policy, programme, etc.). This description could take account of the evaluations already planned in the annual evaluation plan of the Secretariat of Finance in order to avoid redundancy;
(ii) A timeline specifying when each of these types of evaluation should be carried out (i.e. during the design and implementation of the Plan, and after its implementation);
(iii) Whether the evaluation will be done at the request of the state public administration and will thus require formal follow-up on the recommendations from the administration within a specific time frame;
(iv) The resources (human and financial) dedicated to the evaluation, including whether the council will consider externalising it.
Promoting the quality of evaluations
Develop explicit and systematic quality assurance mechanisms within the council to ensure the credibility of the evaluation process, such as:
Developing quality standards for the evaluation process. These should build on the existing regulations for the Consolidation of a Results-Based Budget and the Performance Evaluation System and include competence requirements for evaluators. These standards can also spell out the specific methodologies for carrying out the different types of evaluation (i.e. data collection and evaluation methods).
Developing appropriate evaluation competencies, for instance through:
Relying on external evaluators’ competences, particularly those of universities and research centres, in the short to medium term, and identifying competence requirements for such evaluators;
Developing competencies to conduct in-house evaluations of the council by offering trainings (for example for the commissioners and technical secretariat staff) and hiring staff with the appropriate technical skills to conduct evaluations;
Fostering a network of evaluators, and considering the provision of trainings to the state public administration on evaluation as part of this network.
Develop explicit and systematic quality control mechanisms to ensure that the evaluation design, planning, delivery and reporting are properly conducted to meet pre-determined quality criteria, such as:
Submitting evaluations produced by the council to peer reviews by experts (e.g. academics) before they are published;
Conducting meta-evaluations;
Designing self-evaluation checklists for evaluators to control the quality of their work.
Review the composition of commissions to balance the voices of stakeholders, particularly to strengthen the voice of civil society relative to that of the private sector, since citizens, as the ultimate end-users of the Strategic Plan, are the most important stakeholders to involve in the evaluation.
Continue to strengthen the role of internal stakeholders (within the commissions) and external stakeholders throughout the whole evaluation process. From early on, invite them to take part in the evaluation launch. During the drafting of the evaluation report and the recommendations, invite them to provide their opinion and suggestions, which should subsequently be considered. Lastly, when the evaluation is published, send it directly to stakeholders and organise a discussion about it with them.
Continue publishing evaluation reports on the council’s website, while including an executive summary of the evaluation (including its objective, scope, methods, results, etc.) (see strategy below).
Promoting the use of evaluations
Elaborate a communication strategy that adapts the way research findings are presented to their potential users. Such a strategy could include the use of infographics, tailored syntheses of research evidence (for example in the form of executive summaries, which are especially useful for decision-makers), and the dissemination of ‘information nuggets’ through social media and seminars to present research findings (OECD, 2016[43]; OECD, 2018[54]). In particular, develop a communication strategy tailored to civil servants and decision-makers in the state public administration to ensure greater uptake of the council’s evaluations within the administration.
Incorporate evaluation results into the budgetary cycle through the implementation of impact and performance evaluations to inform budget decisions (and/or to inform the spending reviews used in the budget cycle).
Discuss evaluation results at the highest political level by systematically holding discussions within the state public administration after reception of the evaluation report, as well as within the council’s commissions.
Hold systematic discussions on evaluation results within congress once the report has been received.
References
[36] American Evaluation Association (2015), Core Evaluator Competencies, http://www.eval.org.
[14] Cámara de Diputados del H. Congreso de la Unión (2020), Constitución Política de los Estados Unidos Mexicanos, http://www.diputados.gob.mx/LeyesBiblio/pdf_mov/Constitucion_Politica.pdf (accessed on 25 November 2020).
[13] Cámara de Diputados del H. Congreso de la Unión (2018), Ley General de Desarrollo Social, https://www.coneval.org.mx/Evaluacion/NME/Documents/Ley_General_de_Desarrollo_Social.pdf (accessed on 25 November 2020).
[30] CONAC (2015), Norma para establecer el formato para la difusión de los resultados de las evaluaciones de los recursos federales ministrados a las entidades federativas, https://www.conac.gob.mx/work/models/CONAC/normatividad/NOR_01_14_011.pdf (accessed on 25 November 2020).
[33] CONAC (2013), LINEAMIENTOS para la construcción y diseño de indicadores de desempeño mediante la Metodología de Marco Lógico.
[55] CONEVAL (2019), Usos de la información del CONEVAL, http://www.coneval.org.mx/quienessomos/InvestigadoresAcademicos/Paginas/Investigadores-academicos.aspx.
[29] CONEVAL (2018), Informe de pobreza y evaluación, Quintana Roo, https://www.coneval.org.mx/coordinacion/entidades/Documents/Informes_de_pobreza_y_evaluacion_2018_Documentos/Informe_QuintanaRoo_2018.pdf (accessed on 25 November 2020).
[2] CONEVAL (2017), Diagnóstico del avance en monitoreo y evaluación en las entidades federativas.
[6] CONEVAL (2007), Lineamientos generales para la evaluación de los Programas Federales de la Administración Pública Federal, https://www.coneval.org.mx/rw/resource/coneval/eval_mon/361.pdf (accessed on 18 June 2019).
[40] Consejo Nuevo León (2019), Knowledge Network Nuevo León Council, https://red.conl.mx/ (accessed on 11 January 2020).
[8] Departamento Nacional de Planeación (2016), ¿Qué es una Evaluación?, https://sinergia.dnp.gov.co/Paginas/Internas/Evaluaciones/%C2%BFQu%C3%A9-es-Evaluaciones.aspx.
[53] Dobbins, M. et al. (2009), “A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies”, Implementation Science, Vol. 4/1, p. 61, http://dx.doi.org/10.1186/1748-5908-4-61.
[20] EVALUA Jalisco (2020), Normatividad en Monitoreo y Evaluación, https://seplan.app.jalisco.gob.mx/evalua/unidad/normatividad (accessed on 25 November 2020).
[17] EVALUA Jalisco (2017), Lineamientos Generales para el Monitoreo y Evaluación de los Programas Públicos del Gobierno de Jalisco, http://www.sepaf.jalisco.gob.mx (accessed on 25 November 2020).
[50] Fleischer, D. and C. Christie (2009), “Evaluation use: Results from a survey of U.S. American evaluation Association members”, American Journal of Evaluation, Vol. 30/2, pp. 158-175, http://dx.doi.org/10.1177/1098214008331009.
[34] France Stratégie (2016), How to evaluate the impact of public policies: a guide for the use of decision makers and practitioners (Comment évaluer l’impact des politiques publiques : un guide à l’usage des décideurs et des praticiens), https://www.strategie.gouv.fr/sites/strategie.gouv.fr/files/atoms/files/guide_methodologique_20160906web.pdf (accessed on 21 August 2019).
[3] Gaarder, M. and B. Briceño (2010), “Institutionalisation of government evaluation: balancing trade-offs”, Journal of Development Effectiveness, Vol. 2/3, pp. 289-309, http://dx.doi.org/10.1080/19439342.2010.505027.
[16] Gobierno de Nuevo León (2020), Programa Anual de Evaluación del Estado de Nuevo León (PAE 2020).
[49] Haynes, A. et al. (2012), “Identifying Trustworthy Experts: How Do Policymakers Find and Assess Public Health Researchers Worth Consulting or Collaborating With?”, PLoS ONE, Vol. 7/3, p. e32665, http://dx.doi.org/10.1371/journal.pone.0032665.
[51] Haynes, A. et al. (2018), “What can we learn from interventions that aim to increase policy-makers’ capacity to use research? A realist scoping review”, Health Research Policy and Systems, Vol. 16/1, p. 31, http://dx.doi.org/10.1186/s12961-018-0277-1.
[24] HM Treasury (2011), The Magenta Book: Guidance for evaluation, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/220542/magenta_book_combined.pdf (accessed on 18 June 2019).
[37] Independent Evaluation Office of UNDP (2019), UNDP Evaluation Guidelines.
[38] International Training Centre (2019), Impact evaluation of public policies, programmes and projects | ITCILO, https://www.itcilo.org/courses/impact-evaluation-public-policies-programmes-and-projects (accessed on 12 January 2020).
[57] Jacob, S., S. Speer and J. Furubo (2015), “The institutionalization of evaluation matters: Updating the International Atlas of Evaluation 10 years later”, Evaluation, Vol. 21/1, pp. 6-31, http://dx.doi.org/10.1177/1356389014564248.
[47] Johnson, K. et al. (2009), “Research on Evaluation Use: A Review of the Empirical Literature from 1986 to 2005”, http://dx.doi.org/10.1177/1098214009341660.
[22] Kusters, C. et al. (2011), “Making evaluations matter: a practical guide for evaluators”, Centre for Development Innovation, Wageningen University & Research centre., https://www.researchgate.net/publication/254840956.
[52] Langer, L., J. Tripney and D. Gough (2016), The science of using science: researching the use of research evidence in decision-making.
[46] Ledermann, S. (2012), “Exploring the Necessary Conditions for Evaluation Use in Program Change”, American Journal of Evaluation, Vol. 33/2, pp. 159-178, http://dx.doi.org/10.1177/1098214011411573.
[18] Leviton, L. and E. Hughes (1981), Research on the Utilization of Evaluations: A Review and Synthesis.
[7] Ministerio de Planificación Nacional y Política Económica (2018), Manual de Evaluación para Intervenciones Públicas, https://documentos.mideplan.go.cr/share/s/6eepeLCESrKkft6Mf5SToA (accessed on 5 August 2019).
[44] New South Wales Government (2020), Centre for Program Evaluation, https://www.treasury.nsw.gov.au/projects-initiatives/centre-program-evaluation (accessed on 25 November 2020).
[45] New South Wales Government (2020), How to use the Evaluation Toolkit - NSW Department of Premier & Cabinet, https://www.dpc.nsw.gov.au/tools-and-resources/evaluation-toolkit/how-to-use-the-evaluation-toolkit/ (accessed on 25 November 2020).
[5] Nuevo León Council (2017), Evaluación Anual 2016-2017.
[1] OECD (2020), Improving Governance with Policy Evaluation: Lessons From Country Experiences, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/89b1577d-en.
[58] OECD (2020), Policy Evaluation: Governance Insights from a Cross Country Study.
[41] OECD (2019), OECD Integrity Review of Mexico City.
[10] OECD (2019), Open Government in Biscay, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/e4e1a40c-en.
[54] OECD (2018), Building capacity for evidence informed policy making: A policy guide to support governments, OECD, Paris.
[4] OECD (2018), Draft Policy Framework on Sound Public Governance, http://www.oecd.org/gov/draft-policy-framework-on-sound-public-governance.pdf (accessed on 8 July 2019).
[43] OECD (2016), Evaluation Systems in Development Co-operation: 2016 Review, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264262065-en.
[11] OECD (2016), Open Government: The Global Context and the Way Forward, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264268104-en.
[26] OECD (2010), “DAC Guidelines and Reference Series: Quality Standards for Development Evaluation”, https://www.oecd.org/development/evaluation/qualitystandards.pdf (accessed on 9 July 2019).
[12] OECD-DAC (2009), “Guidelines for Project and Programme Evaluations”, https://www.entwicklung.at/fileadmin/user_upload/Dokumente/Projektabwicklung/Englisch/Guidelines_for_Project_and_Progamme_Evaluations.PDF (accessed on 20 September 2019).
[48] Oliver, K. et al. (2015), “Identifying public health policymakers’ sources of information: comparing survey and network analyses”, The European Journal of Public Health, Vol. 27/suppl_2, p. ckv083, http://dx.doi.org/10.1093/eurpub/ckv083.
[59] Patton, M. (2015), “Qualitative research & evaluation methods”, in Qualitative research & evaluation methods: integrating theory and practice, SAGE Publications, Inc.
[21] Patton, M. (1978), “Utilization-focused evaluation”.
[28] Queensland Government Statistician’s Office (2020), Queensland Government Program Evaluation, https://www.treasury.qld.gov.au/resource/queensland-government-program-evaluation-guidelines/.
[56] Results for America (2017), Government Mechanisms to Advance the Use of Data and Evidence in Policymaking: A Landscape Review.
[15] SHCP (2018), Documento relativo al cumplimiento de las disposiciones contenidas en el párrafo tercero del artículo 80 de la Ley General de Contabilidad Gubernamental.
[19] State Government of Jalisco (2019), Strategy Evaluate Jalisco | Rate Jalisco, https://seplan.app.jalisco.gob.mx/evalua/unidad/evalua (accessed on 19 December 2019).
[31] State Government of Yucatán (2019), Sistema de Evaluación del Desempeño, http://transparencia.yucatan.gob.mx/informes.php?id=evaluacion_desempeno (accessed on 25 November 2020).
[32] State Government of Yucatán (2016), Lineamientos generales del Sistema de Seguimiento y Evaluación del Desempeño, https://www.coneval.org.mx/sitios/RIEF/Documents/yucatan-mecanismoseguimiento-2016.pdf (accessed on 25 November 2020).
[9] State of Nuevo Leon (2017), General guidelines of the Executive Branch of the State of Nuevo León for the consolidation of Results-Based Budget and the Performance Evaluation System, http://sgi.nl.gob.mx/Transparencia_2015/Archivos/AC_0001_0007_00161230_000001.pdf (accessed on 6 November 2019).
[35] Stevahn, L. et al. (2005), “Establishing Essential Competencies for Program Evaluators”, American Journal of Evaluation, http://dx.doi.org/10.1177/1098214004273180.
[42] Stufflebeam, D. (2001), Evaluation Checklists: Practical Tools for Guiding and Judging Evaluations, http://www.wmich.edu/evalctr/checklists/.
[25] United Nations Evaluation Group (2016), Norms and Standards for Evaluation.
[39] University of Washington (2020), Data Collection for Program Evaluation | Northwest Center for Public Health Practice, http://www.nwcphp.org/training/data-collection-for-program-evaluation (accessed on 12 January 2020).
[23] Vaessen, J. (2018), New blogpost - Five ways to think about quality in evaluation, https://www.linkedin.com/pulse/new-blogpost-five-ways-think-quality-evaluation-jos-vaessen (accessed on 21 June 2019).
[27] World Bank et al. (2019), World Bank Group Evaluation Principles, http://www.worldbank.org.