This chapter addresses the monitoring and evaluation of gender equality results. It suggests monitoring and evaluation approaches, including those that allow DAC members to track transformative change for gender equality.
5. Results monitoring and evaluation
Abstract
Gender equality results need to be monitored and evaluated at whatever level they are developed. This chapter provides overall guidance, focusing primarily on the programme level, but the guidance is also largely applicable at other levels.
Measuring gender equality change, and especially gender-transformative change, requires working within existing frameworks and indicators, while providing flexibility and adaptation to reflect the nature and timescales of gender equality results. These are unlikely to be achieved within the timeline of a typical project. As with other complex social change, changes in gender relations can often be nonlinear and unpredictable. Changes that seem positive at first may quickly erode. A hard-won victory by community members for women’s land rights, for example, can provoke a backlash against activists or an increase in gender-based violence. Monitoring and evaluation of gender equality results needs flexibility to track progress and achievements, and to capture negative impacts, resistance, reaction, holding ground and unexpected outcomes (Batliwala and Pittman, 2010[1]).
Each Development Assistance Committee (DAC) member is at a different stage in the monitoring and evaluation approaches and infrastructure it has in place, and the resources and capacities for monitoring and evaluating gender equality initiatives. Some DAC members are experimenting with innovative evaluation methods, including feminist evaluation.1
The rise of the results agenda – and increased emphasis on monitoring and evaluation of development efforts – has increased the capacity of DAC members to define and track gender equality outcomes and to evaluate gender equality results related to their investments. The focus on results has helped anchor gender equality in DAC member systems and build momentum for commitments to gender equality and women’s rights (OECD, 2014[2]). Building a strong body of evidence showing the achievement of gender equality results, or the lack of them, can help build the political will to focus on investments in gender equality.
Meanwhile, the challenges associated with monitoring and evaluating gender equality results – particularly transformative change related to shifting power relations and changing norms – have become more apparent, as have debates about what counts as “evidence” of change. As tools and guidance on gender-sensitive or gender-responsive monitoring and evaluation have multiplied, DAC members increasingly acknowledge the need for indicators and methodologies better able to capture long-term change and transformational gender equality results. Given the long-term nature of transformative change, investment in and use of ex post or impact evaluations and meta-evaluations may increase DAC members’ capacity to evaluate gender equality results (USAID, 2021[3]). This is valid for official development assistance (ODA) funded programmes and equally for “other” types of investments, such as blended finance.
5.1. Monitoring gender equality results
Monitoring is “a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing […] intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds” (OECD, 2002[4]). The DAC gender equality policy marker and its scores – while not designed or intended as a monitoring tool – can be used strategically in this context as a framework for monitoring efforts. The marker score may also need to be adjusted based on the monitoring (Chapter 4).
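Used this way, marker scores can structure a simple portfolio overview. Below is a minimal sketch, assuming a plain list of activity records with illustrative field names (the scores 0, 1 and 2 follow the DAC marker scheme; everything else is invented for the example):

```python
from collections import Counter

# DAC gender equality policy marker scores:
#   2 = gender equality is a principal objective of the activity
#   1 = gender equality is a significant objective
#   0 = the activity does not target gender equality
# The activity records and field names below are illustrative only.
activities = [
    {"id": "A-001", "marker": 2, "reported_gender_results": True},
    {"id": "A-002", "marker": 1, "reported_gender_results": False},
    {"id": "A-003", "marker": 0, "reported_gender_results": False},
]

# Portfolio overview: how many activities carry each marker score.
print(Counter(a["marker"] for a in activities))

# Flag activities whose marker score may need adjustment in light of
# monitoring evidence (e.g. scored 1 or 2 but reporting no gender results).
flagged = [a["id"] for a in activities
           if a["marker"] > 0 and not a["reported_gender_results"]]
print("Review marker score for:", flagged)
```

A review list like this does not replace qualitative monitoring; it simply flags where marker scores and monitoring evidence diverge, prompting the score adjustment discussed above.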
DAC members should consider adapting performance measurement frameworks and assessment tools to account for the timelines and complex nature of gender equality results. This might include encouraging partners to report on unanticipated results, either positive or negative, without undue judgement on programme quality.
Undertaking a thorough risk assessment during the design stage to identify potential risks and mitigation strategies is required good practice, although it does not preclude unexpected negative results (Chapter 2). Results need to be defined, monitored and evaluated using frameworks that are both flexible and learning oriented, where both positive and negative results provide insights for policy or programme improvement and future design (see Box 3.3 on the Women’s Voice and Leadership programme).
Box 5.1. Measuring social norms change through storytelling: Finland
Traditional monitoring and evaluation frameworks across sectors are based on measuring performance against predetermined targets and visible change. They may thus not be adequate for measuring change in gender relations or gender discriminatory social norms. On the other hand, case-study-focused qualitative research, while powerfully explanatory, lacks the necessary population coverage to make robust causal inference claims about changing institutions, as expressed in behavioural norms.
In response to this challenge, and building on lessons from a previous project, Finland, UN Women Nepal and their partners are exploring an appropriate mix of tools to better measure social change at the impact level. The work is closely linked to the need for effective monitoring of SDG 5 impact indicators and is designed to help Finland understand its own contribution to changes in gender-discriminatory social norms and harmful cultural practices through a storytelling methodology. The initiative “Measuring social norms change through storytelling: Advancing the transformative shift towards gender equality by 2030” leverages the power of storytelling to measure and influence change in gendered power relations and social norms.
The aim of the research is to identify and understand pathways for change in social norms at the individual and community levels, to enable transformative programming for gender equality. The mass storytelling tool combines the interpretive depth of storytelling with the statistical power of aggregated data for tracking patterns and trends in social behaviour. The goal is to generate a “feedback loop” of evidence and learning for long-term programming to influence social norms and end harmful practices. The qualitative storytelling research process is designed to measure change in social norms in a way that lends itself to quantification. This could help identify social change with confidence, and it could also allow development partners to understand how they have or have not contributed to changes in complex social institutions, and to design, adapt and integrate their evidence-based strategies and programming. The initiative will use a storytelling tool such as SenseMaker to track and interpret programmatic contributions, linked to the SDG 5 indicators, to changes in social norms and gender equality. This qualitative storytelling research for measuring change in social norms is in use in four provinces in Nepal.
Results reporting
Results reporting can encourage political and financial support for policies, programmes and projects and help build a solid knowledge base. It can also introduce changes in the way institutions operate, leading to improved performance and accountability.
Development partners have for some time argued for more streamlined or simplified reporting, given capacity gaps.2 Multilaterals and larger civil society organisations (CSOs) have systems for meeting members’ reporting requirements, but small local organisations find it difficult to handle the reporting burden that comes with bilateral and multilateral funding, particularly quantitative data collection. Some DAC members are encouraging organisations to use alternative methods for integrating qualitative data in their reporting, such as embedding videos, music, case studies and vignettes to accompany data on quantitative indicators.
DAC members can also helpfully address specific gender equality objectives and results indicators with investors and private sector actors when engaging in “beyond aid” initiatives, such as blended finance (see Chapter 4).
DAC members should consider options for streamlining and simplifying reporting. The approach of using a narrower set of mandatory but adaptable indicators being taken by some DAC members is one example.
A few DAC members have also experimented with using a common reporting template where they are funding the same organisation, instead of requiring separate reports. Other options include less frequent reporting (e.g. moving from annual to biennial results reporting) and continuing to examine how to balance learning and accountability in institutional structures.
Box 5.2. Australia’s Investment Performance Reporting system
Australia’s Department of Foreign Affairs and Trade (DFAT) uses an Investment Performance Reporting system to assess the performance and collect the results of individual investments (projects) and their delivery partners during implementation and on completion. Investment Monitoring Reports (IMRs) are completed each year by investment managers for all DFAT investments with a total value of AUD 3 million or higher. The IMR assesses progress against quality criteria, one of which is gender equality. Evidence is gathered from implementing partner reports, monitoring visits, reviews and evaluations to provide an assessment of investment performance over the previous 12 months. The investment is rated from 1 to 6 on the following criteria, which are combined into an overall rating for gender equality performance in the reporting period:
Analysis of gender equality gaps and opportunities substantially informs the investment.
Risks to gender equality are identified and appropriately managed.
The investment is making progress as expected in effectively implementing strategies to promote gender equality and the empowerment of women and girls.
The monitoring and evaluation system collects sex-disaggregated data and includes indicators to measure gender equality outcomes.
Sufficient expertise and budget allocations are available to achieve gender equality-related outputs of the investment.
As a result of the investment, partners increasingly treat gender equality as a priority through their own policies and processes.
The gender equality ratings that are part of the IMR are crucial for ensuring that the twin-track approach to gender equality is fully implemented in all DFAT aid investments. For investments that do not include specific gender objectives at inception, the IMR becomes one of the few occasions when gender equality is discussed or considered, and in some cases, the IMR rating process itself is a mechanism for triggering consultations on gender equality. IMRs can be a way to motivate or negotiate with partners to try to improve gender equality in implementation of investments. In the final year of the project, a Final Investment Monitoring Report is completed, which assesses performance over the lifetime of the investment and provides lessons learned.
The IMR process gives DFAT an overall assessment of the effectiveness and achievements of the Australian development programme, feeding into policy dialogue, planning processes and capability development. Performance is tracked each year and reported publicly.
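As an illustration of how a rating scheme like the IMR's could be operationalised, here is a minimal sketch. The criterion identifiers, the validation logic and the use of a simple mean to aggregate the six scores are assumptions made for the example, not DFAT's actual method:

```python
# Illustrative sketch of an IMR-style gender equality rating (not DFAT's
# actual system): each investment is scored 1-6 against six criteria and
# the scores are summarised into an overall rating for the period.
CRITERIA = [
    "gender_analysis_informs_investment",
    "gender_risks_identified_and_managed",
    "progress_on_gender_strategies",
    "me_system_collects_sex_disaggregated_data",
    "expertise_and_budget_sufficient",
    "partners_prioritise_gender_equality",
]

def overall_gender_rating(scores: dict[str, int]) -> float:
    """Average the six criterion scores (each on a 1-6 scale).

    How DFAT actually aggregates criterion scores is not specified in the
    text above; a simple mean is used purely for illustration.
    """
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"Missing criterion scores: {missing}")
    if any(not 1 <= scores[c] <= 6 for c in CRITERIA):
        raise ValueError("Each criterion must be scored from 1 to 6")
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

example = dict(zip(CRITERIA, [5, 4, 4, 3, 4, 5]))
print(round(overall_gender_rating(example), 1))  # 4.2
```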
5.2. Evaluation of gender equality results
Evaluation is “the systematic and objective assessment of an ongoing or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, […] efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors” (OECD, 2002[4]).
The strategy of gender mainstreaming (see Chapter 3) should include a focus at the institutional level to address gender equality and the empowerment of women and girls through internal organisational changes, such as resource allocation, strategic planning, policies, culture, human resources, staff capacity, leadership, management, accountability and performance management. These efforts also need to be evaluated.
Box 5.3. Evaluation Criteria
The OECD DAC has defined six criteria to guide development partners:
relevance
coherence
effectiveness
efficiency
impact
sustainability
These criteria provide a normative framework to determine the merit or worth of an intervention (policy, strategy, programme, project or activity). They can be used when designing action plans or programmes, as well as for monitoring and evaluation. They serve as the basis for making evaluative judgements.
The OECD DAC guidance “Applying Evaluation Criteria Thoughtfully” includes a section on applying a gender lens to these criteria. The section notes that “Evaluators should work in ways that thoughtfully consider differential experiences and impacts by gender, and the way they interact with other forms of discrimination in a specific context (e.g. age, race and ethnicity, social status). Regardless of the intervention, evaluators should consider how power dynamics based on gender intersect and interact with other forms of discrimination to affect the intervention’s implementation and results. This may involve exploring how the political economy and socio-cultural context of efforts influence delivery and the achievement of objectives” (OECD, 2021[5]).
Source: OECD (n.d.[6]), Evaluation Criteria, https://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm.
Data collection for evaluations
Ethical considerations must be front and centre in evaluating gender equality efforts, particularly in assessing and selecting the approach and methods used in an evaluation. In many contexts where evaluations are undertaken, support for gender equality is limited. Data security and safeguarding, paramount in every context, are especially crucial here.
The DAC Quality Standards for Development Evaluation commit members to abide by relevant professional and ethical guidelines and codes of conduct and to undertake evaluations with integrity and honesty (OECD, 2010[7]). Specifically, evaluators should note how they plan to ensure that the evaluation process does not cause any harm, respects participants’ confidentiality and ensures the informed consent of all participants in the evaluation process. Issues such as who asks whom what types of questions, and what types of risks are involved in answering questions at the household, community or national level, must be taken into account, including the potential risks of digital evaluations. Some DAC members have developed ethical guidance for research, evaluation and monitoring, not necessarily specific to gender equality, which should be applied to all evaluation and research (Thorley and Henrion, 2019[8]).
Good practice includes considering the following questions in the design phase of an evaluation:
Have the evaluation design and data collection tools considered approaches to include full participation of different groups of women and girls?
Do the data collection tools, and in particular, surveys, avoid perpetuating negative gender norms and model positive gender norms in the way questions are formulated?
Are opportunities created for women and girls to collect data themselves through participatory data collection methods, engagement in analysis of data, strategic oversight of the evaluation process and the communication of findings?
Will the data collection methods allow unintended results – positive and negative – to emerge on the well-being, lived experiences and status of girls and women?
Does the evaluation team include local evaluator(s) with strong gender and intersectional analysis skills, and is it at a minimum gender-balanced?
Have protocols on safety, data security and privacy issues been followed?
Box 5.4. Canada’s approach to using a feminist methodology to capture data
As part of its work in the Middle East and Maghreb, Global Affairs Canada (GAC) worked with feminist researchers to design the Gender Equality and Empowerment Measurement (GEM) tool to collect project outcome data and to evaluate work on gender equality and the empowerment of women and girls. The GEM tool uses feminist methodology to capture qualitative and descriptive data on gender equality and empowerment outcomes of development programming. It employs an intersectional lens for participatory focus group discussions and interviews used to capture the voices and perspectives of project partners. The GEM tool allows researchers, evaluators or project officers to gather data on project participants’ experiences of empowerment, based on five empowerment categories: economic, psychological, physical, knowledge and social. The tool is also designed to gather information on the enabling environment, including cultural, legal and societal factors that may have contributed to these experiences of empowerment.
The GEM tool was piloted in Egypt, Jordan, Lebanon, Morocco, the West Bank and Gaza. It helped capture results on the ground, and participants expressed that they felt engaged in the focus group discussions. The tool has been peer-reviewed by feminist researchers, scholars, academics and experts from the Canadian non-governmental organisation (NGO) community.
Table 5.1. Techniques for data collection and analysis

| Approach/tool | Considerations for data collection and analysis |
|---|---|
| Focus groups | Focus groups can encourage women and girls to express their views more openly than through conventional survey methods. They also provide opportunities for dialogue on gender equality and enable evaluation processes to contribute to changes in attitudes about gender. The inclusion of local evaluation consultants and advice from evaluation reference groups are important strategies to adopt. |
| Interviews | Special attention should be paid to interviewing women and girls who may have been forgotten or left out of programme discussions and decision making, but who may have insights related to the context and the evaluation questions. |
| Surveys | Surveys are commonly used to collect information on the experiences of stakeholders in a programme or project. Feminist survey design experts advocate formulating survey questions so that they avoid perpetuating negative gender-related social norms and instead model positive norms. |
| Case studies | Case studies can be particularly helpful for highlighting the experiences of women and girls to understand the effects of a particular programme or intervention. Case studies are by definition context-specific and allow a detailed narrative to emerge about how a programme has been experienced by stakeholders. Combined with participatory analysis, they can be empowering, as they allow individual women or girls to understand and interpret their own situation. |
| Most Significant Change | The Most Significant Change (MSC) methodology is widely used for collecting stories of lived experiences and allowing the storytellers to select stories representative of the type of change being sought. Project stakeholders are involved in deciding the kind of change to be recorded and in analysing the data. |
| Outcome Mapping | As a planning, monitoring and evaluation approach, Outcome Mapping (OM) unpacks an initiative’s theory of change, provides a framework to collect data on the immediate, basic changes that lead to longer-term, more transformative change, and allows for a plausible assessment of the initiative’s contribution to results. |
| Outcome Harvesting | Outcome Harvesting (OH) is designed to collect evidence of change (the “outcomes”) and then work backwards to assess whether and how an organisation, programme or project contributed to that change. This contrasts with the more traditional way of carrying out monitoring and evaluation, which is to start with activities and then attempt to trace changes forward through output, outcome and impact levels. Women’s rights organisations are experimenting with OH as an approach consistent with feminist monitoring and evaluation. |
| Participatory mapping | Participatory mapping refers to a spectrum of data collection tools that can be used to collect information on women and girls’ spatial access to and knowledge of different resources, their freedom of movement, and how these are affected by different relations within communities. The use of interactive, fun and engaging techniques facilitates an exploration of sensitive issues around differences in access to and control over resources amongst different women, in a non-threatening manner. |
| Participatory visual storytelling | Participatory visual storytelling includes a variety of participatory tools aimed at transformative change embedded in action research. It empowers participants to tell their life stories, as well as other experiences, through photography or video, as a basis for stimulating social change. Examples include PhotoVoice and participatory video, which help make women and girls’ voices central in explaining empowerment and other processes of change from their perspective. |
| Evaluative rubrics | Evaluative rubrics set out criteria and standards for different levels of performance and describe what performance would look like at each level. These frameworks can be derived from the programme logic and developed in a participatory way by evaluators with programme stakeholders. Rubrics offer a process for making explicit the judgements in an evaluation and are used to judge the quality, value or importance of the service provided. |
Note: This table is adapted from guidance on human rights and gender equality data collection and evaluation approaches from the UN Evaluation Group (2014) and the sources below.
Source: Newton et al. (2019[9]), What do participatory approaches have to offer to the measurement of empowerment of women and girls?, https://www.kit.nl/wp-content/uploads/2019/03/KIT-Working-Paper_final.pdf; Oakden (2013[10]), Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence, https://www.betterevaluation.org/sites/default/files/Evaluation%20rubrics.pdf; Better Evaluation (2014[11]), Photo Voice, https://www.betterevaluation.org/en/evaluation-options/photovoice.
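To make the evaluative rubrics row concrete, here is a minimal sketch of a rubric as a data structure; the criterion and the level descriptors are invented for illustration, not drawn from any DAC source:

```python
# Minimal sketch of an evaluative rubric: criteria mapped to performance
# levels, each with an agreed descriptor of what that level looks like.
# All content here is hypothetical and for illustration only.
RUBRIC = {
    "women's participation in decision making": {
        "excellent": "Women routinely initiate and shape decisions.",
        "good": "Women participate and are heard in most decisions.",
        "adequate": "Women attend but rarely influence decisions.",
        "poor": "Women are largely absent from decision making.",
    },
}

def judge(criterion: str, evidence_level: str) -> str:
    """Return the agreed descriptor for a criterion at a given level,
    making the evaluative judgement explicit and transparent."""
    return RUBRIC[criterion][evidence_level]

print(judge("women's participation in decision making", "good"))
```

Encoding the descriptors in advance, ideally with programme stakeholders, is what makes the judgement transparent: the evaluator cites the level whose descriptor best matches the evidence, rather than assigning an unexplained score.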
Applying a gender equality analysis to the data available
Once necessary evaluation data, including gender data, have been gathered, the next step is to ensure that gender analysis is applied to that data. It is important to consider ways to engage women and girls in the analysis of data. Their participation in interpreting data may bring a unique perspective important in triangulating evaluation data. In addition to participatory, inclusive approaches to data analysis, the following may be helpful:
integrating contextual analysis, such as gender-related social norms and power dynamics as they affect different groups of individuals
comparing data with existing information at the community, country and other levels on women and girls’ rights and other social indicators, to confirm or refute trends and patterns already identified
disaggregating survey data (if used) along lines of sex, age, education, geographical location, poverty, ethnicity, indigeneity, disability, sexual orientation and gender identity, and paying attention to trends, patterns, common responses and differences (following up, if possible, with further qualitative methods and analysis; a stylised example of such disaggregation follows this list)
analysing how far the programme has addressed structural factors that contribute to inequalities experienced by women and girls, especially those experiencing multiple forms of exclusion
assessing the extent to which (different groups of) women and girls were included as participants in programme planning, design, implementation, decision making, and monitoring and accountability processes.
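As referenced in the disaggregation point above, here is a minimal sketch of disaggregating a survey item along intersecting lines, assuming responses sit in a pandas DataFrame; the column names and values are illustrative, not a prescribed schema:

```python
import pandas as pd

# Hypothetical survey responses; columns and values are invented for
# illustration, not a prescribed data schema.
responses = pd.DataFrame({
    "sex": ["female", "male", "female", "female", "male"],
    "age_group": ["15-24", "25-49", "25-49", "15-24", "25-49"],
    "disability": [False, False, True, False, True],
    "agrees_women_should_own_land": [1, 0, 1, 1, 1],
})

# Disaggregate a survey item along intersecting lines (sex x age group)
# and look for patterns and differences between groups.
by_group = responses.groupby(["sex", "age_group"])[
    "agrees_women_should_own_land"
].mean()
print(by_group)
```

Groups that stand out in such a breakdown are candidates for follow-up with the further qualitative methods mentioned above, rather than conclusions in themselves.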
DAC members should design or commission evaluations that use mixed-method approaches to answer evaluation questions and include participatory data collection and data analysis techniques that allow women’s voices and perspectives to be heard.
Feminist evaluation
Feminist evaluation is grounded in feminist theory and principles and can help make the link to the feminist foreign policies that some DAC members have implemented (Chapter 1). An initial impetus was recognition of the negative consequences of a lack of attention to gender and gender inequities in conceptualising, designing and conducting evaluations and in analysing data (Frey, 2018[12]). Beyond this, there are no prescribed methods or tools for feminist evaluation, or indeed any agreed-upon definition of feminist evaluation.3 Evaluators may explicitly use the term “feminist” to describe their approach or refer to a different term, while still using approaches based on feminist principles.
Gender-focused and feminist approaches to evaluation differ in several ways, including the kinds of questions posed, the design of the evaluation processes, and how, and by whom, data and evaluation reports are used. Feminist evaluation acknowledges from the outset the need for transformative change in gender and power relations – i.e. it is values-driven – and explores and challenges the root causes of gender inequalities. It emphasises the design of processes that are not only inclusive of diverse women and girls but engage them in ways that are empowering. This includes, for example, using participatory methods of data collection and data analysis that directly include project participants, who can give voice to and make meaning out of their own experiences. Crucially, feminist evaluation emphasises the position of the evaluators and encourages them to reflect on the assumptions and biases they bring to the evaluation. In other words, feminist evaluation holds that evaluations are not value-free.
Finally, feminist evaluation prioritises the use of knowledge generated in the evaluation process by those directly implicated in the evaluation. Evaluation findings should be accessible and barrier-free for all stakeholders. The most effective way to ensure this is to ask them what products will be most useful (social media, infographics, videos, briefings).
Box 5.5. Feminist evaluations in Canada
To carry out its feminist international assistance policy and promote gender-transformative change in its foreign policy operations, GAC has adopted a feminist approach in its practices. Feminist evaluations seek to encourage collaborative Global North-Global South partnerships, as well as participatory processes that place the voices of women and girls at the centre of the evaluation process.
This approach also enables those with lived experiences and contextual and cultural understandings of power dynamics and gender issues to guide evaluation practices. Feminist evaluation strategies facilitate reflection and dialogue, leaving room to adapt to evolving needs and information. This approach places effort and importance not only on the findings of an evaluation, but on the process, which helps promote empowerment and autonomy.
Learning and communication of monitoring and evaluation findings
Data generated in the monitoring of gender equality results, evaluations or performance assessment processes provide important information for DAC members on progress towards gender equality. The communication and dissemination of monitoring data and evaluation findings can potentially strengthen multiple levels of accountability on gender equality. Such data can be useful both to promote gender equality objectives externally and internally within the institution, in efforts to understand progress towards results and to course-correct where progress is not happening as anticipated.
A focus on results monitoring and reporting of DAC members’ internal institutional gender equality efforts (e.g. Gender Action Plans) is as important as tracking results on programme or policy efforts. The value of evaluating institutional gender equality initiatives includes helping understand the relevance, coherence, efficiency and effectiveness of institutional gender mainstreaming (including gender policies, gender parity strategies, gender markers, financial tracking systems, and gender analysis in programme and policy design); and building an evidence base of the correlations between institutional gender equality initiatives and development results. A strong evidence base showing the relationship between internal gender equality changes and programme outcomes can also build political will for investments in gender equality initiatives.
It is good practice to integrate learning-oriented approaches in the monitoring and evaluation of gender equality.
The development of a learning agenda is increasingly being used by DAC members and development partners in their gender equality work.4 Typically, a learning agenda includes: a set of questions addressing critical knowledge gaps on gender equality identified during implementation start-up; a set of associated activities to answer them; and knowledge products aimed at disseminating findings and designed with use of multiple stakeholders in mind. A theory of change approach lends itself well to the use of a learning agenda. Learning questions can be framed to test and explore assumptions and hypotheses throughout implementation and to generate new evidence for advocacy and future programme and policy development. A learning agenda can be set at different levels, and ideally should be developed during the design phase of a strategy, project or activity. It can provide a framework for performance management planning, using regular feedback loops related to key learning questions, and can also assist in evaluation design, to prioritise evaluation questions.
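As a minimal illustration of the structure just described (questions, associated activities and knowledge products), a learning agenda could be captured as simple structured data; all content below is invented for illustration:

```python
# Hypothetical learning agenda represented as structured data: learning
# questions addressing knowledge gaps, activities to answer them, and
# knowledge products for disseminating findings.
learning_agenda = {
    "learning_questions": [
        "Which programme strategies shift norms around women's land rights?",
        "How do backlash risks vary across provinces?",
    ],
    "activities": [
        "Annual qualitative study with women's rights organisations",
        "Mid-term outcome harvest across partner projects",
    ],
    "knowledge_products": [
        "Policy brief for programme managers",
        "Infographic series for community partners",
    ],
}

for question in learning_agenda["learning_questions"]:
    print("Learning question:", question)
```

Keeping the agenda in an explicit, reviewable form makes it easier to revisit the questions in regular feedback loops and to prioritise them when designing evaluations, as described above.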
Checklist on results monitoring and evaluation
DAC members can ask the following questions:
On monitoring:
Do monitoring or performance measurement frameworks provide enough flexibility and direction to account for the complexity of transformative change towards gender equality, including negative change or unintended outcomes, when this is the objective of the intervention?
Do the programmes’ accountability and reporting structures avoid unnecessary burdens on partners, by streamlining or reducing monitoring and reporting requirements? Do they aim to create an enabling environment for partners to engage in a way that does not require adapting or tailoring their systems?
On evaluation:
Are the evaluation approaches used ethical, inclusive and participatory, and in support of accountability to affected populations? Have locally based evaluators and/or researchers been involved? Are women’s voices and perspectives included and valued as a source of data?
Has consideration been given to developing and resourcing a learning agenda relating to gender equality at the institutional level?
Is monitoring and evaluation of gender equality at the institutional or organisational level linked with programme outcomes?
Is there a strategy in place to share new knowledge and evidence on gender equality results from monitoring, evaluation and learning activities, internally and/or with other stakeholders and partners?
References
[1] Batliwala, S. and A. Pittman (2010), Capturing Change in Women’s Realities: A Critical Overview of Current Monitoring and Evaluation Frameworks and Approaches, Association for Women’s Rights in Development (AWID), https://www.awid.org/sites/default/files/atoms/files/capturing_change_in_womens_realities.pdf.
[11] Better Evaluation (2014), Photo Voice, https://www.betterevaluation.org/en/evaluation-options/photovoice (accessed on 27 April 2022).
[12] Frey, B. (2018), Feminist Evaluation, The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation, https://doi.org/10.4135/9781506326139.n262.
[9] Newton, J., A. van Eerdewijk and F. Wong (2019), What do participatory approaches have to offer to the measurement of empowerment of women and girls?, KIT Royal Tropical Institute, https://www.kit.nl/wp-content/uploads/2019/03/KIT-Working-Paper_final.pdf.
[10] Oakden, J. (2013), “Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence”, Better Evaluation, https://www.betterevaluation.org/sites/default/files/Evaluation%20rubrics.pdf (accessed on 27 April 2022).
[5] OECD (2021), Applying Evaluation Criteria Thoughtfully, OECD Publishing, Paris, https://doi.org/10.1787/543e84ed-en.
[2] OECD (2014), From ambition to results: Delivering on gender equality in donor institutions, https://www.oecd.org/dac/gender-development/fromambitiontoresultsdeliveringongenderequalityindonorinstitutions.htm.
[7] OECD (2010), Quality Standards for Development Evaluation, OECD Publishing, https://www.oecd.org/development/evaluation/qualitystandards.pdf.
[4] OECD (2002), Glossary of Key Terms in Evaluation and Results Based Management, https://www.oecd.org/dac/evaluation/2754804.pdf.
[6] OECD (n.d.), Evaluation Criteria, https://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm (accessed on 22 April 2022).
[8] Thorley, L. and E. Henrion (2019), DFID ethical guidance for research, evaluation and monitoring activities, DfID, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/838106/DFID-Ethics-Guidance-Oct2019.pdf.
[3] USAID (2021), Discussion Note: Ex-Post Evaluations, https://usaidlearninglab.org/sites/default/files/resource/files/dn-ex-post_evaluation_final2021.pdf.
Annex 5.A. Additional resources on results monitoring and evaluation
Monitoring gender equality results
For insight and guidance on the value of participatory approaches, see the KIT Royal Tropical Institute working paper “What do participatory approaches have to offer the measurement of empowerment of women and girls”: https://www.kit.nl/wp-content/uploads/2019/03/KIT-Working-Paper_final.pdf.
For more information on monitoring whether projects and programmes are having their intended effect, and to make changes if they are not, see the Research and practice note “Changing Gender Norms: Monitoring and Evaluating Programmes and Projects”: https://odi.org/en/publications/changing-gender-norms-monitoring-and-evaluating-programmes-and-projects/.
For examples of development, monitoring and evaluation of gender equality results at the country and sector level and the programme and project level, see the Asian Development Bank and Australian Aid’s “Tool Kit on Gender Equality Results and Indicators”: https://www.oecd.org/derec/adb/tool-kit-gender-equality-results-indicators.pdf.
For guidelines to help those who work on results-based monitoring (RBM), see the “Guidelines on designing a gender-sensitive results-based monitoring (RBM) system” from the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ): https://www.oecd.org/dac/gender-development/GIZ-guidelines-gender-sensitive-monitoring.pdf.
For examples of good practice where results from gender-integrated and targeted gender equality interventions are presented in compelling results reports that mix quantitative data with case studies, vignettes and data analysis, see UNICEF’s “Gender Equality: Global Annual Results Report 2020”: https://www.unicef.org/media/102281/file/Global-annual-results-report-2020-gender-equality.pdf or UNICEF’s “Health Results 2020: Maternal, Newborn and Adolescent Health” report: https://www.unicef.org/media/102666/file/Health-Results-2020-Maternal-Newborn-Adolescent-Health.pdf.
Evaluation of gender equality results
For examples of gender equality evaluations, including evaluations of Gender Action Plans from DAC members and other development partners, see the UN Women evaluation portal: https://genderevaluation.unwomen.org/en/region/global?region=8c6edcca895649ef82dfce0b698ebf60&orgtype=c580545e97254263adfcaf86c894e45b.
For guidance on how to integrate an equity-focused and gender-responsive approach to national evaluation systems, see “Evaluating the Sustainable Development Goals: With a ’No One Left Behind’ lens through equity-focused and gender-responsive evaluations”: https://www2.unwomen.org/-/media/field%20office%20americas/imagenes/publicaciones/2017/06/eval-sdgs-web.pdf?la=en&vs=4007.
For guidance on how to integrate a gender lens in UNICEF evaluations, or evaluations more generally, see the “UNICEF Guidance on Gender Integration in Evaluation”: https://www.unicef.org/evaluation/documents/unicef-guidance-gender-integration-evaluation.
For a resource for development practitioners and evaluators who are seeking explanations and recommendations on how to include a focus on gender impact in commissioning or conducting evaluations, see the Methods Lab resource, “Addressing Gender in Impact Evaluation: What should be considered?”: https://internationalwim.org/wp-content/uploads/2020/10/Addressing-Gender-in-Impact-Evaluation-.pdf.
The United Nations Evaluation Group (UNEG) provides an evaluative framework for evaluations on institutional gender mainstreaming that could be adapted by DAC members in the practical guide “Guidance on Evaluating Institutional Gender Mainstreaming”: http://www.uneval.org/document/detail/2133.
In “The ‘Most Significant Change’ Technique – A Guide to Its Use”, Better Evaluation offers a practical tool for anyone seeking to use Most Significant Change (MSC): https://www.betterevaluation.org/resources/guides/most_significant_change.
For an accessible introduction to Most Significant Change, see: https://www.betterevaluation.org/en/plan/approach/most_significant_change.
For an example of survey design on women’s empowerment, see the Abdul Latif Jameel Poverty Action Lab’s “A Practical Guide to Measuring Women’s and Girls’ Empowerment in Impact Evaluations”: https://www.povertyactionlab.org/sites/default/files/research-resources/practical-guide-to-measuring-women-and-girls-empowerment-appendix1.pdf.
For information on Outcome Mapping (OM) and how it can be used to unpack an initiative’s theory of change and serve as a framework to collect data on immediate, basic changes, see Better Evaluation’s resource: https://www.betterevaluation.org/en/plan/approach/outcome_mapping.
For ethical guidance on data collection, see the World Health Organization’s “Putting Women First: Ethical and Safety Recommendation for Research on Domestic Violence Against Women” resource: https://www.who.int/gender/violence/womenfirtseng.pdf.
See also the subsequent report: https://www.who.int/reproductivehealth/publications/violence/intervention-research-vaw/en/.
For an accessible introduction to the basic concepts that underpin feminist evaluation, see Better Evaluation’s resource “Feminist evaluation”: https://www.betterevaluation.org/en/themes/feminist_evaluation.
For an overview and description of feminist evaluation and gender approaches, and of their differences, see the research paper, originally published in the Journal of Multidisciplinary Evaluation, “Feminist Evaluation and Gender Approaches: There’s a Difference?”: https://www.betterevaluation.org/en/resources/discussion_paper/feminist_eval_gender_approaches.
For an exploration of how quantitative impact evaluations and other technical choices and ethical considerations are changed by bringing a feminist intent to research into monitoring and evaluation processes, see Oxfam GB’s discussion paper, “Centring Gender and Power in Evaluation and Research: Sharing experiences from Oxfam GB’s quantitative impact evaluations”: https://policy-practice.oxfam.org/resources/centring-gender-and-power-in-evaluation-and-research-sharing-experiences-from-o-621204/.
Feminist evaluation can be used alongside, or combined with, other systems of monitoring, evaluation changes and learning for programmes, to help make sense of how social change occurs. For more information, see “Merging Developmental and Feminist Evaluation to Monitor and Evaluate Transformative Social Change”: https://journals.sagepub.com/doi/full/10.1177/1098214015578731.
For examples of concrete steps that can be taken on data security and safeguarding evaluation participants, see “ActionAid’s feminist research guidelines”: https://actionaid.org/publications/2020/feminist-research-guidelines.
Notes
← 1. Thirteen DAC members included gender equality in monitoring and evaluation frameworks for programming, and five members used additional annual quality checks. A few DAC members noted that they produce evaluations of gender equality as a cross-cutting issue, while others produced evaluation reports of their Gender Action Plans or other gender-specific programmes.
← 2. Thirteen DAC members identified the inclusion of results from gender equality programmes and initiatives within regularly scheduled reports to be an important component of their systems for monitoring and evaluation. Of these members, some used report writing at varying stages of the intervention as a system for the monitoring and evaluation of gender equality programmes (quarterly, annually, mid-term, or at the end of the programme).
← 3. The DAC Network on Development Evaluation is developing a Glossary of evaluation terms.
← 4. Nine DAC members incorporated a learning agenda devoted to improving their work on gender equality and the empowerment of women and girls within their development co-operation systems and processes (including monitoring and evaluation). The way these learning agendas are put into effect is extremely varied. Five members incorporated learning agendas in their programming, with dedicated work streams for knowledge management, or broader institutional learning systems. Examples of these agendas ranged from a gender unit being responsible for institutional learning and knowledge management, to help desks that give out rapid advice from external experts. Four members used systematic learning activities such as comprehensive and multi-year reports on the evolution and progress of approaches used to advance gender equality in development co-operation, including key lessons learned and recommendations for moving forward. Two members included their processes for evaluation and programme improvement as a component of their learning agenda, with learning questions integrated into evaluation questions when appropriate. One member noted that its learning agenda is carried out by a designated implementation team. Twelve DAC members indicated that they do not have learning agendas.