Evaluation Systems in Development Co-operation 2023
Abstract
Drawing on the experiences of the members and observer organisations of the Development Assistance Committee Network on Development Evaluation (EvalNet), this study provides a snapshot of the core elements and ways of working within development evaluation systems. It offers insight to development co-operation organisations as they seek to establish or strengthen credible evaluation systems of their own, in support of learning and accountability. The report also explores the role of evaluation in development co-operation and humanitarian assistance, including the various policy and institutional arrangements that underpin evaluation systems. It then reviews the evaluation process, from deciding which evaluations to undertake to disseminating findings, and finally focuses on how evaluation findings are used to improve development co-operation efforts.
Executive Summary
Key findings
The study shows an increase in the number of centralised evaluations undertaken by participating organisations. By contrast, the number of joint evaluations has declined, highlighting room for increased collaboration. On average, multilateral organisations conduct more evaluations than bilateral organisations, in line with their respective mandates. Beyond this, however, there is no consistent relationship between the volume of development co-operation an organisation provides, its age, and the number of centralised evaluations it conducts.
There is strong consistency in the purposes and principles that guide evaluations across organisations, which are largely based on longstanding OECD policy guidance documents, demonstrating their continued relevance. The evaluation principles of independence and usefulness are the most frequently cited. This is in line with the findings of previous surveys, indicating consensus amongst the global evaluation community on the core elements of evaluation systems and on which principles are most vital for quality.
All organisations noted that their evaluation units are structurally and functionally independent, with many reporting to an independent oversight committee. There is also strong consistency in the institutional arrangements that govern evaluation, with nearly all organisations having evaluation policies and guidance. This underscores the central place of evaluation in the work of development co-operation providers.
The increased focus on the usefulness of evaluations reflects persistent challenges in meeting learning objectives. Participating organisations are taking concerted action to address this: considering usefulness earlier in the evaluation process, engaging more with end users, ensuring that evaluations reflect policy priorities, and timing findings to fit programme cycles.
While there are clear commonalities in the purposes, principles, and institutional arrangements that govern evaluation systems, there are differences in how participating organisations conduct evaluations, often linked to an organisation’s size and resources. For example, larger organisations report doing most evaluation work in-house, while smaller ones oversee external evaluators.
In recent years, the use of virtual data collection methods has increased. While this shift was underway before the COVID-19 pandemic, it was accelerated by the onset of the crisis, which demanded real-time evidence on the success of response and recovery options, and restricted travel by member organisations’ staff. Many organisations have noted that they will continue to use the innovations developed during the pandemic.
Finally, limited partner country engagement in evaluations is hindering full ownership of development co-operation. Most organisations note that partner country governments are asked to facilitate visits and data collection, but are not engaged meaningfully in the substantive planning or follow-up of evaluations.
Areas for future action
These findings point to five action points for EvalNet members and evaluators more broadly:
Work towards a more holistic understanding of data and evidence within an organisation to support learning and strategic planning. This requires working closely with other parts of the institution, such as statistics, results reporting, research and decentralised evaluation units. While there might be a need to maintain the distinct governance structures and roles of these functions, there may be value in exploring a common approach to collecting evidence across units, in support of more systematic and efficient data collection in pursuit of cross-cutting learning objectives.
Develop a common approach to assessing the resources spent on evaluation activities. Data collection and analysis for this study revealed that participating organisations calculate their evaluation spending in many different ways. The objective of collecting this information is to understand the share of spending on various elements (e.g. human resources, communications) and whether overall resourcing is sufficient for the evaluation function to fulfil its roles. Creating a cost benchmark by type of evaluation could also be useful.
Consolidate learning on virtual ways of working. Reports vary on the value of virtual engagement and data collection during the COVID-19 pandemic. While some view virtual methods as important for broadening stakeholder engagement, others perceive them as undermining quality. Despite these mixed reports, nearly all organisations plan to continue using virtual methods. In light of this, it may be useful to explore when virtual methods work best and when in-person evaluation work is necessary, and to develop guidance.
Track when and how evaluation findings are used and the overall state of evidence-informed decision making. A recurring theme in this report is the challenge of meeting learning objectives: many respondents are searching for new and more effective approaches to increase the use of findings. With new approaches to internal and external stakeholder engagement and to dissemination being tested, the time is right to measure which methods work best, with a view to starting each evaluation with a clear uptake pathway.
Revisit questions of ownership and its links to partner country engagement. Greater engagement with partner countries in evaluations will promote national ownership of development activities and bolster evaluation capacity. This will help meet development co-operation commitments to country ownership and localisation.