Evaluation Guidelines for Representative Deliberative Processes
1. Conducting an evaluation: Why, who, and how?
1.1 Why evaluate?
The OECD Good Practice Principles for Deliberative Processes for Public Decision Making (2020a) recommend evaluation of deliberative processes as a key element of a successful process.
Timely evaluation strengthens the trust of policy makers, the public, and stakeholders in any recommendations developed by a deliberative body, as it can demonstrate the quality and rigour involved in generating them. Each of these three groups, who were not part of the deliberative process, plays a role in implementing the deliberative body’s recommendations. Their confidence in the legitimacy of the process is crucial.
Evaluation can demonstrate the level of quality and neutrality of a deliberative process. When publicised, the results can increase trust in the deliberative process itself, as well as in the resulting outputs that are used to inform public decision making. By making a process subject to evaluation, the authorities commissioning it demonstrate a commitment to transparency and quality, earning them greater legitimacy. Any groups that oppose the final recommendations of the deliberative body will scrutinise how its members reached their conclusions. Evaluation permits a clear sense of whether such critiques are justified.
Evaluation also creates opportunities for learning by providing evidence and lessons for public authorities and practitioners about what went well and what did not. It gives a basis for iterative improvement.
Evaluation allows public authorities to identify which practitioners consistently deliver high-quality deliberative processes, enhancing the accountability feedback loop.
1.2 Evaluating representative deliberative processes
To date, evaluation of representative deliberative processes has been an emerging and fragmented practice. The 2020 OECD Catching the Deliberative Wave report found that the most common practice for evaluating representative deliberative processes (67%) has been self-reporting by members of a deliberative process. Two per cent were found to have reflections by process organisers, although qualitative research suggests that this number is likely to be much higher in reality. Seventeen per cent have had a research-oriented academic analysis, and only seven per cent have had an independent evaluation.
These guidelines recommend independent evaluation as the gold standard, but recognise that it may not be feasible or appropriate for smaller-scale, shorter deliberative processes due to time and budgetary constraints. In such cases, evaluation in the form of self-reporting by members and/or organisers of a deliberative process can also be a helpful practice for learning.
1.2.1 Independent evaluations
Independent evaluations are the most comprehensive and reliable way of evaluating a deliberative process. They are particularly valuable for deliberative processes that last a significant amount of time (e.g. four days or more).
Independent evaluators, ideally with training in evaluating deliberative processes, are best placed to provide an objective and fair assessment of a deliberative process. Independent evaluators can be external, in-house, or a mix of both. They are considered independent if they do not have any conflicts of interest regarding the policy issue, are not involved in designing or implementing the deliberative process, and are functionally independent from the people who are. Independent evaluators should have experience in evaluation methods, expertise in deliberative democracy, and an understanding of what a high-quality public deliberation entails.
Independent evaluations can use a range of methods. These often include observation of the process from start to finish, member surveys, interviews, and assessment of informational materials, whilst taking into account the reflections of the organising team and the facilitators. Please see section 2.2 (Measuring the Criteria) of this document for further information.
1.2.2 Self-reporting by members of a deliberative process
Most evaluations of deliberative processes include confidential feedback from the members who have been selected via civic lottery. Their perspective is valuable as they personally experienced the learning, deliberation, and decision making, and thus know what helped them complete their work, as well as which process features need improvement. However, as it is often the first deliberative process they have experienced, their assessment is best used as part of a broader evaluation by independent evaluators, who are better placed to introduce a comparative perspective. The evaluation questions included in Annex C of these guidelines can elicit members’ candid assessments of their deliberative process.
1.2.3 Hearing from organisers
In smaller and shorter deliberative processes, for example local-level processes that are one to three days long, evaluation and reflection often take the form of self-reporting by the organising team. Organisers are the people who implemented a deliberative process, as opposed to those who commissioned it. They will have gained insights into what worked as intended and what challenges arose. They can also share creative solutions that they devised to address unexpected problems. Such feedback can help to improve future processes.
Organiser self-reporting often happens as an open discussion among team members or through a survey (when the evaluation is conducted by independent evaluators). Annex D of these guidelines provides evaluation questions for process organisers’ surveys. Annex E provides evaluation questions for an open discussion among process organisers.
1.3 Five principles of evaluation
The following five principles have been developed by the OECD Advisory Group on Evaluating Representative Deliberative Processes. They can help guide an evaluation and ensure its quality and integrity.
1. Independent: For deliberative processes lasting a significant amount of time, evaluations should be impartial and thus independent. Independence entails being at arm’s length from the commissioning public authority and the organisation implementing the process. The evaluators should have no stake in the outcome of the process and should ideally have expertise in deliberation. For shorter, smaller-scale processes that are not evaluated by external evaluators, efforts should still be made to ensure the greatest possible degree of independence in the evaluation.
2. Transparent: The selection of the evaluators should be clear and transparent. The evaluation process and the final evaluation report of a deliberative process should be made accessible and open to a peer review process. The evidence on which the evaluation is based should be published at an aggregate level, to the extent that it does not impede candid assessments or compromise confidentiality.
3. Evidence-based: Evaluations should be based on valid and reliable data. Evidence can be collected through a variety of methods, such as surveys, interviews, observation, and a review of materials used during a deliberative process. The standard measures in these guidelines should be used (see Annex C, Annex D, and Annex E).
4. Accessible: Evaluators should have access to sufficient financial resources and all necessary information required to assess a deliberative process, including recordings and controlled access to small group discussions. There should also be dedicated time in the programme for the evaluation team to access the members of a deliberative process for the purpose of filling in the evaluation survey(s), while ensuring that members are not burdened by such tasks and with due respect for members’ privacy and the confidentiality of their identities.
5. Constructive: A useful evaluation allows organisers and commissioning authorities to learn good practices and identify shortcomings to inform future processes. The evaluation should focus on the quality and impact of a deliberative process.
1.4 Planning and designing for evaluation
1.4.1 Ensuring an independent evaluation
Independence of evaluations can be structural and functional (such as independence of the evaluation team with respect to the commissioners and organisers of the deliberative process) as well as behavioural (the integrity and impartiality of the evaluators themselves) (OECD, 2020a). Efforts should be made to ensure independence in all of these regards.
For large-scale processes, independence of the evaluation can be enhanced by setting up an evaluation oversight committee with external members, commitment to peer review, and declaration of absence of conflict of interest from the evaluators. Small-scale deliberative processes with very limited resources for evaluation should at a minimum use the standard measures and surveys in these guidelines – Annex C, Annex D, and Annex E.
Ethical conduct of evaluators should be ensured (such as ethical use of data, research results, protection of members’ privacy, confidentiality of responses).
A credible evaluation requires sufficient funding, enough distance from the commissioners and organisers to ensure that the evaluation is independent, and sufficient access to the process for the evaluators.
Funding for the evaluation can come from an independent, government-funded institution. For example, the Scottish Citizens’ Assembly evaluation was funded by Scottish Government Social Research. The process of awarding the evaluative research on the Irish Citizens’ Assembly was run by the Irish Research Council, with funding provided by the Department of An Taoiseach (Prime Minister).
Academic institutions are often interested in partnering and are well-placed to provide credible and independent evaluations.
1.4.2 Planning for the timing and efficiency of an evaluation process
Planning for evaluation should take place during the design stage of a deliberative process. Commissioning the evaluators early allows enough time to put the evaluation arrangements in place before the process begins. The evaluation plan should be discussed with the commissioning authority and implementing practitioners to identify any areas of particular interest and ensure that all needs are met. While maintaining their independence, evaluators should seek to maximise the relevance of their report for practitioners, commissioners, stakeholders, and the public.
Organisers must reserve time in the programme for the evaluation team to conduct surveys and explain the importance of evaluation to the members. This ensures that the evaluators’ access to the process is sufficient for them to do their job properly, whilst also yielding higher response rates than post-process evaluation surveys typically achieve. If there is no independent evaluation, it is still important to reserve this time so that organisers can both survey members and record their own assessments.
It is also helpful to agree on some fixed points in time after the deliberation when evaluators can revisit members of a deliberative process to obtain their views. Such follow-up contacts can help to capture long-term impacts and members’ assessments of the broader process, such as whether the commissioning public authority responded constructively to the deliberative body’s recommendations. Evaluators may also follow up with commissioning authorities to hear their perspectives on implementing the recommendations.
1.4.3 Resources needed for evaluation
Funding needed for evaluation should be considered from the start when commissioning a process.
1.5 Participatory evaluation
An evaluation can also include the members of a deliberative process and the broader public. Involving them creates an additional opportunity for public engagement, and potentially contributes to improving the quality of evaluations.
1.5.1 Members of a deliberative process
Members of a deliberative process can participate in evaluation process design, rather than only as subjects of the assessment. An evaluator can ask members what they perceive as the key criteria for a successful deliberative process. When done at the outset of an evaluation, such an exercise can help identify additional criteria for a comprehensive assessment.
An example of such evaluation is Healthy Democracy’s emerging practice of setting up a small committee of members with a mandate to design their own evaluation procedures. When given this opportunity, members have chosen to introduce daily feedback forms and to design an extensive survey asking their fellow members about aspects of their experience.
1.5.2 The broader public
Members of the broader public could also serve as independent evaluators. Such members could be selected by civic lottery to serve as evaluators, or they could volunteer to observe the process. Such initiatives would require dedicated resources to recruit people as evaluators, provide them with training, and ensure their independence. Expert evaluators or an academic partner would still be needed to help guide the process and ensure a comprehensive assessment. Ideally, an oversight committee with a mix of experts and members of the broader public could be set up to oversee the evaluation process and discuss the results.
1.6 Peer evaluation
Peer evaluation is an opportunity to invite expert observers, such as international deliberative democracy experts and practitioners or domestic peers (such as academics and NGO representatives), to observe a deliberative process. Peer observers can then provide an account of the process and be interviewed or surveyed to elicit relevant information for an evaluation. See section 2.4 (Measuring the Evaluation Criteria) of this document for further details.