The study shows an increase in the number of centralised evaluations undertaken by participating organisations. By contrast, the number of joint evaluations has declined, highlighting room for increased collaboration. On average, multilateral organisations conduct more evaluations than bilateral organisations, in line with their respective mandates. Beyond this, however, there is no consistent, identifiable relationship between the volume of development co-operation an organisation provides, its age, and the number of centralised evaluations it conducts.
There is strong consistency in the purposes and principles that guide evaluations across organisations, which are largely based on longstanding OECD policy guidance documents, demonstrating their continued relevance. The evaluation principles of independence and usefulness are the most frequently cited. This is in line with the findings of previous surveys, indicating consensus amongst the global evaluation community on the core elements of evaluation systems and on which principles are most vital for quality.
All organisations noted that their evaluation units are structurally and functionally independent, with many reporting to an independent oversight committee. There is also strong consistency in the institutional arrangements that govern evaluation, with nearly all organisations having evaluation policies and guidance. This underscores the central place of evaluation in the work of development co-operation providers.
The increased focus on the usefulness of evaluations reflects persistent challenges in meeting learning objectives. Participating organisations are taking concerted action to address this: considering usefulness earlier in the evaluation process, engaging more with end-users, ensuring that evaluations reflect policy priorities, and timing findings to fit programme cycles.
While there are clear commonalities in the purposes, principles, and institutional arrangements that govern evaluation systems, there are differences in how participating organisations conduct evaluations, often linked to an organisation’s size and resources. For example, larger organisations report doing most evaluation work in-house, while smaller ones oversee external evaluators.
In recent years, the use of virtual data collection methods has increased. While this shift was underway before the COVID-19 pandemic, it was accelerated by the onset of the crisis, which demanded real-time evidence on the success of response and recovery options, and restricted travel by member organisations’ staff. Many organisations have noted that they will continue to use the innovations developed during the pandemic.
Finally, limited partner country engagement in evaluations is hindering full ownership of development co-operation. Most organisations note that partner country governments are asked to facilitate visits and data collection, but are not meaningfully engaged in the substantive planning or follow-up of evaluations.