In addition to detailing how the monitoring of results will be undertaken over the life span of an intervention, a results-based M&E system should specify how additional evaluative processes will complement the valuable information provided by the monitoring function (Markiewicz and Patrick, 2016[23]). While the monitoring system can give continuous information on the direction, pace, and even magnitude of the change generated by the policy under study, monitoring information does not provide evidence on why or how those changes are taking place (Kusek and Rist, 2004[9]). That is why evaluation evidence is a cornerstone of a results-based M&E system.
Evaluation is the systematic and objective assessment of a planned, ongoing, or completed intervention, its design, implementation, and results (OECD, 2023[12]). Evaluation is a distinct but complementary function to monitoring (OECD, 2002[10]). While the focus of monitoring is tracking the implementation and progress of an intervention (both in terms of actions delivered and results achieved), evaluation moves beyond this tracking role and is oriented primarily towards forming judgements about the performance of a programme or policy. Evaluation aims to obtain a deep and nuanced understanding of the changes triggered by the intervention, with the ultimate objective of informing policy and programme development.
There are several complementarities between the monitoring and evaluation functions (Kusek and Rist, 2004[9]). The first is sequential complementarity, in which monitoring information can generate questions to be subsequently answered by evaluation, and vice versa, with evaluation findings suggesting new domains that monitoring should cover. The second is information complementarity, in which monitoring and evaluation can use the same data for their analyses, albeit to answer different questions. In fact, the numerous analyses conducted as part of an evaluation exercise are usually based on the synthesis of a wide range of data, including but not limited to data generated by the monitoring function.
The purpose of this assessment is not to serve as a “how to” guide on designing and conducting evaluations, but to highlight the importance of explicitly planning for them when building a results-based M&E system. Evaluation findings can be put to multiple, highly valuable uses, so a results-based M&E system cannot be complete without them (Box 4).
Evaluations can be designed and implemented internally or externally. Different types of evaluations are appropriate for answering different sorts of evaluation questions, and there is no “one size fits all” evaluation template. What is important for policy makers is to have a clear understanding of what they want to learn from evaluations, and to communicate this clearly to external evaluators when commissioning such exercises.