Melissa Mouthaan
OECD
Mykolas Steponavičius
OECD
This chapter examines the notion of system-level co-ordination of education research production. The first section outlines the challenges that systems face in co-ordinating education research production and presents a range of policy mechanisms that can help build a high-quality knowledge base for policy makers and practitioners to draw on. The chapter then discusses different types of education research and their relevance to policy makers and practitioners. It subsequently examines the emerging optimism around collaborative research, in particular the need to incentivise knowledge mobilisation efforts and address stakeholders’ divergent knowledge needs. The chapter showcases examples of promising mechanisms that countries have implemented to co-ordinate education research production.
Critics have long been concerned about the state of education research, considering it to be of low quality, anecdotal, affected by political or ideological bias, or methodologically weak (Kaestle, 1993[1]; Winch, 2001[2]; Burkhardt and Schoenfeld, 2003[3]; Gorard, See and Siddiqui, 2020[4]; OECD, 2022[5]). According to such critiques, it is lacking in rigour and of limited use to both policy makers and practitioners. Nevertheless, there is broad acknowledgement of some positive developments in recent years. The earlier claim that education research has an “awful reputation” (Kaestle, 1993[1]) now appears unwarranted: critics have begun to acknowledge the notable increase in the quantity of studies in education and their higher quality, notably with a turn towards more empirical research designs; and, crucially, there has been more extensive collaboration at the international level (van Damme, 2022[6]).
Previous academic work on education research has dedicated considerable attention to these quality issues, such as the relative benefits and appropriateness of different research designs, most notably randomised control trials (Oliver and Boaz, 2019[7]). The issue of research quality in education remains undoubtedly important. This chapter, however, takes a step back from questions around quality and explores how system-level co-ordination of education research production is an important enabling factor for a strong research engagement culture. We examine two dimensions in particular: 1) the development of a robust knowledge base as a prerequisite to having such a culture; and 2) the question of co-ordination, with the aim of increasing stakeholder involvement in research production. There are strong indications that improving these two elements will help to foster a research engagement culture. Yet, our analysis also notes some of the challenges, uncertainties and knowledge gaps that arise when considering stronger co‑ordination in these areas.
A culture of thoughtful engagement with evidence necessitates having a solid, high-quality knowledge base to draw on. Conceptually, this chapter, and the report more generally, considers that building such a knowledge base is not a neat and linear process where new knowledge is systematically added to existing knowledge. A robust knowledge base is also built through falsification, where previous evidence claims are thrown out if disproven. Working to maintain this knowledge base is part of what can be described as a system-level culture (see Chapter 1). Yet, not enough is done in education to systematically synthesise research, identify research gaps, and review and revise the knowledge base – including rejecting earlier evidence when appropriate (Education.org, 2021[8]; Burkhardt and Schoenfeld, 2003[3]). Which tools and strategies can countries use to develop and support a fit‑for‑purpose knowledge base – by which we mean a solid evidence base that can drive teaching, school practice and policy making and improve our understanding of the education system as a whole?
While there needs to be more extensive engagement with existing research, and such research needs to be of a sufficient quantity and quality, the question of relevance of research to different groups is central to such an evidence system.
In examining these topics, this chapter addresses the following questions:
What role is there for co-ordination mechanisms in building a robust knowledge base in education and supporting engagement with this knowledge base?
What types of education research, evidence and knowledge do policy makers and practitioners most need – and how can we improve their access to these?
What forms of research production can improve the relevance of research to different communities – and ensure better engagement with research?
This chapter addresses these questions through a review of secondary research and international data collected in the Strengthening the Impact of Education Research policy survey (see Chapter 1 for details).
There are both optimistic and critical accounts of countries’ efforts to co-ordinate education research production. The culture of research production in education in countries such as the United Kingdom and the United States is still considered by some educationalists to function “largely for its own purposes”, where critics note that education research has so far failed to develop a solid body of agreed research results and detailed evidence of what works (Burkhardt and Schoenfeld, 2021[9]). At the same time, in these countries, as well as others such as Belgium, Chile, the Netherlands, Norway and the Philippines, co‑ordination mechanisms have been introduced that aim to build efficient education systems that draw on the best evidence available (Vilalta and Comas, 2021[10]). This chapter refers to “co‑ordination of research production” as any policy process deployed to steer research production to achieve a specific goal, such as facilitating research engagement in policy and practice or systematically addressing research needs and gaps. As such, our analysis is limited to education research production that has the perceived purpose of being relevant to policy and practice, which is certainly not the totality of education research. Incentivising education researchers to address these particular needs of policy and practice requires funding programmes for research that are strategic, adequately financed and targeted. In addition to funding mechanisms, systems use various mechanisms to co-ordinate and support education research production. Some of these mechanisms are explained in detail below.
Countries’ spending on education research affects the quantity and quality of the knowledge base in education. However, data on education research spending are generally lacking – a scarcity that itself suggests education research remains a relatively low priority among policy makers in many systems. The available data (see Figure 2.1), nevertheless, indicate that countries allocate only a small fraction of their total research and development (R&D) funding to education. In addition, the data compare this spending with spending on health research and suggest that countries typically spend far less on education research than on health research.
Looking more closely at spending on research in these sectors, in 2014, all countries for which there are data available spent at least twice as much on health as on educational R&D, with Estonia, Korea and Hungary recording the largest difference in spending between the two sectors (see Figure 2.1). These differences are in part due to the presence of commercial incentives for investing in research in the health sector (Education.org, 2021[8]), and the fact that the health sector typically involves extensive R&D with a focus on medical research conducted by actors such as pharmaceutical companies.
This mirrors to some degree the differential in overall spending between the health and education sectors (OECD, 2022[5]). As a percentage of gross domestic product (GDP), many countries typically spend more on healthcare than on education; for instance, healthcare expenditure in Germany and France in 2020 was 12.8% and 12.2% respectively (Eurostat, 2020[11]). In comparison, France spent 5.2% of its GDP per full‑time equivalent student in primary to tertiary educational institutions while Germany spent 4.3% (OECD, 2022[12]). This difference can be partly explained by the fact that maintaining and expanding healthcare infrastructure requires significant resources. Nevertheless, the differences are much more marked for R&D spending. For example, Estonia spends 7.8% of its GDP on healthcare (Eurostat, 2020[11]) and 4.7% on education (OECD, 2022[12]), whereas its spending on health R&D is almost 20 times higher than that on education R&D (Figure 2.1). The contexts of R&D investment in the health and education sectors are somewhat different, and comparisons between the two can only be limited or superficial. However, these differences might also be indicative of societal and cultural perceptions that investing in healthcare improvements is worthwhile, while investing in education is not.
The Strengthening the Impact of Education Research policy survey data show that most responding systems reported co-ordinating education research production by means of one or more mechanisms. Funding mechanisms and consultations with policy makers on their needs are the most common ways of co-ordinating research production. However, fewer than half of systems reported having mechanisms such as a public research organisation or regular consultations with practitioners in place (Figure 2.2); this group includes England, Estonia, Belgium (Flemish Community), New Zealand, South Africa and Spain. Overall, just two in five systems consult both practitioners and policy makers on their needs. This indicates an untapped potential for policy action to stimulate research that addresses the needs and interests of all stakeholders. Systems that consult both groups include Canada (Quebec), Iceland, Latvia, the Netherlands, Slovenia and Switzerland (the canton of Appenzell Ausserrhoden). In addition, fewer than half of the surveyed OECD systems have a long-term strategy for research production. Systems that reported having such a strategy include Chile, Finland, Latvia, the Netherlands, Norway (see Chapter 7) and Slovenia.
In practice, several mechanisms are often used together to co-ordinate research production (Figure 2.3).
Strong co-ordination can sometimes have unintended consequences, such as when policy actors restrict education research to a narrow range of methodologies and questions, thereby potentially stifling other promising types of research or methodologies. Researchers have, for instance, criticised the strong tendency towards randomised control trials, modelled on practices in healthcare evaluation, noting that experiments are not always the most appropriate or useful form of investigation (McKnight and Morgan, 2019[14]). Well-designed co-ordination mechanisms could, however, help to address key issues of research production and engagement, such as accessibility.
There is still scope to better understand how countries that perceive education research to be accessible use co-ordination mechanisms to ensure accessibility. The Strengthening the Impact of Education Research policy survey understood accessibility in terms of user-friendly formats, language and price (research that is behind paywalls). Figure 2.3 shows that there is no direct relationship between the number of co-ordination mechanisms and the perceived accessibility of research. Some systems, such as Costa Rica and New Zealand, reported having either no or very few mechanisms, and relatively low accessibility of research. England, Estonia, Flanders (Belgium), South Africa, Spain and Portugal also belong to this group, although with slightly better average accessibility. In contrast, the canton of Nidwalden (Switzerland) reported the presence of only two mechanisms (a public research organisation and a long-term strategy for education research) yet reported education research to be “quite accessible”. Quebec (Canada) is a similar exception, with few mechanisms but reasonable accessibility. Generally, countries that reported having a high number of mechanisms to co-ordinate education research (four or more) were more likely to view it as being fairly accessible. However, this is not the case in Iceland and Latvia.
There are several points to consider when comparing co-ordination mechanisms and accessibility. First, there can be different reasons for low accessibility. Setting up appropriate co-ordination mechanisms requires first digging deep to determine the exact cause of low accessibility: is it that there is no locally produced and context-relevant research? Is it that international research is not translated? Or is it simply not available in briefs or toolkits that potential users could more easily access? The second point concerns the effectiveness of existing co‑ordination mechanisms: are they targeting the actual research gaps and needs? Are they appropriately incentivising research that is of high quality, relevant and accessible? Finally, effectiveness lies not only in aligning mechanisms with perceived needs, but also in the quality of the co-ordination mechanisms themselves, and their alignment. Notably, building a robust knowledge base in education necessitates a sound, long-term strategy for research that is regularly evaluated and revised (see Chapter 7).
Several systems also face particular capacity challenges when it comes to co-ordinating and incentivising education research production. Systems serving smaller populations, such as Flanders (Belgium) and the Netherlands, report challenges such as:
A limited number of researchers working in the field of education research and educational organisation.
Infrequent cross-institutional or international research collaborations.
A declining number of Master’s and PhD students conducting research in fields relevant to education policy (Watterreus and Sipma, 2023[15]; see also Chapter 5). This is particularly challenging because, in the Dutch case, the amount of direct government funding for education research is tied to the number of students in the educational sciences (Watterreus and Sipma, 2023[15]).
These capacity limitations are largely related to an insufficient overall share of funding. However, in some cases, even when there is targeted funding available for research that is relevant to education policy, there may only be a limited number of researchers applying for the funding (see Chapter 5).
Smaller systems can also be particularly affected by issues of transferability of research produced in other contexts. While education policy makers call for context-specific research that is relevant to local contexts, such research – when it exists – is often small scale. In systems such as New Zealand, limited funding for education research and development has affected the country’s ability to fund research at scale (Box 2.1). Education research initiatives in New Zealand are also comparatively more dependent on government funding, given the absence of foundations funding such research as in the United States (NZCER, 2022, p. 5[16]). In comparison, international research can be considered valuable by virtue of often being large‑scale, but it does not lend itself to easy translation to specific or local contexts. With many education systems having unique characteristics, it is not straightforward to apply research findings and policy instruments from other systems (Watterreus and Sipma, 2023[15]).
The New Zealand Council for Educational Research (NZCER) is a research and development organisation established in 1934. The NZCER Act 1972 provides the organisation with a mandate to carry out and disseminate education research. The organisation’s revenue derives from a government grant, research contracts, and sales of products and services related to education, such as curriculum and assessment tools. The NZCER’s principal tasks are to carry out and disseminate education research and provide independent information, advice and assistance. Much of its research work is conducted on contract for clients, and NZCER has fostered links with many strategic partners and stakeholders in New Zealand.
The organisation’s strategic priorities include improving equity for learners, decolonising education and influencing the future of education. The organisation supports building Māori research capability and Māori education research as a fundamental component of governing and delivering research in the country, including through strategies of co-leadership and co-design in research. The NZCER advocates for a system that gives stability to research and development through dedicated funding. It raises awareness of the need for a significant increase in government investment in education research, and to secure base funding to enable more significant research programmes.
Sources: NZCER (2022[16]), Te Ara Paerangi Future Pathways Green Paper, www.mbie.govt.nz/dmsdocument/20739-nz-council-for-educational-research-te-ara-paerangi-future-pathways-green-paper-submission-pdf; NZCER (2022[17]), “Is educational research in Aotearoa in good shape?”, https://doi.org/10.18296/rep.0023.
The transferability paradox has an important implication for policy. There is a clear need for effectiveness and scale-up research that helps to understand “how to make programmes work under a range of circumstances, and for all groups” (Gutiérrez and Penuel, 2014, p. 22[18]). Currently, however, a significant number of the OECD systems surveyed consider both effectiveness and scale-up research to be inaccessible (see Figure 2.4).
Building more effective bridges between researchers and diverse audiences is key to promoting research engagement. Yet in many systems, there are too few incentives for researchers to promote their work among practitioners and policy makers. It is certainly true that in some countries and contexts, researchers are increasingly expected to demonstrate research impact beyond traditional academic measures, in the form of a new “impact agenda” (Boswell and Smith, 2017[19]). This shift has occurred amid policy makers’ calls for more research evidence to inform policy, and universities’ “renewed focus on their civic mission” (Durrant and MacKillop, 2022[20]). Initiatives such as the research excellence frameworks developed in Australia and the United Kingdom (Upton, Vallance and Goddard, 2014[21]; Smith and Stewart, 2017[22]) are indicative of this agenda. However, in many cases, universities and academic research institutions still fail to adequately incentivise their staff to engage broad audiences with their research and expertise. The following incentive-related issues have been noted in studies of Anglo-Saxon academic cultures, where the “impact agenda” has become part of the core mission of many institutions of higher education (Boswell, Smith and Davies, 2022[23]):
Publishing in high-impact education journals is favoured over publishing in the grey literature, given the importance of successful academic journal publishing in tenure and promotion decisions (Lupia and Aldrich, 2015[24]). Researchers may feel they have to choose between actions that enable pay raises, promotions and tenure, and increasing their public engagement.
Similarly, impact-related activities may not be adequately taken into account in workloads. A study on incentives for public engagement in nine British universities found that the most reported obstacle to knowledge exchange was time – respondents wanted to engage further, but could only do so by making a significant out-of-hours commitment (Upton, Vallance and Goddard, 2014, p. 359[21]). In Smith and Stewart’s (2017[22]) study involving public health researchers in the United Kingdom, the most common complaint related to the time-consuming nature of impact work and the lack of specifically allocated time.
The current impact architecture encourages academics to promote the findings of individual studies, rather than improve available evidence more broadly, such as through research syntheses and knowledge brokerage (Smith and Stewart, 2017[22]; Boswell, Smith and Davies, 2022[23]).
The above points highlight two problems. First, the under-resourcing of impact-related activities presents a clear obstacle to aligning academic culture and incentives to the impact agenda. This is strongly related to the precarity of academic research careers in many countries and systems, whereby researchers often hold fixed-term positions without permanent or continuous employment prospects and face strong pressures to publish in academic journals to further their careers (OECD, 2021[25]; OECD, 2021[26]). Second, more effective incentivisation may require a shift to rewarding processes over outcomes. This can include rewarding knowledge exchange when measuring impact, such as collaborative endeavours that build incrementally on a wider body of work, or building long-term relationships with non-academic audiences (Spaapen and van Drooge, 2011[27]; Upton, Vallance and Goddard, 2014[21]; Boswell and Smith, 2017[19]). Overall, there is much work to do here: better aligning incentives will require significant effort and co-ordination at a system level.
It is important to first state that not all education research has the goal to be relevant to policy or practice. However, this section explores how improving both the quality and accessibility of certain types of education research can be a core initial building block of the evidence system.
The Strengthening the Impact of Education Research policy survey collected policy makers’ perceptions of the relevance and accessibility of different types of education research. Many systems report shortages of research that is considered relevant to policy and practice across different research types. What is particularly concerning is that policy makers perceived some types of research to be quite or highly relevant, but not very accessible (see Figure 2.4). For instance, 85% of systems considered scale‑up research to be quite or highly relevant while, in comparison, only 10% considered it to be highly accessible. This could indicate that systems lack studies that test effectiveness across a wide range of populations or contexts (see Box 2.2). However, the gap between relevance and accessibility appears to be narrower for secondary research, possibly suggesting that evidence syntheses are among the types of research that policy makers are more likely to access (see below).
The Strengthening the Impact of Education Research policy survey data do not allow us to disentangle specific factors of accessibility, or to understand if these factors vary by type of research or by country. A first explanation for low perceived accessibility can be that there simply is little research available in a given area, with respondents reporting this as low accessibility. It is thus likely that the survey findings indicating limited access to certain types of research simply reflect a general lack of research in the respective areas. This seems especially plausible given the relatively low level of funding discussed above. A second explanation is that the research exists but, for various reasons, is not accessible. The literature suggests that low accessibility of research is at least partly due to limited progress in open access publishing. On the one hand, university policies and open access mandates provided by funders have contributed to the growing share of open access scholarship (Piwowar et al., 2018[28]; Roehrig et al., 2020[29]). However, the ability to publish open access is dependent on the support pledged by individual institutions, and most high-impact education journals retain a paywall policy (SCImago, 2023[30]). It may also be the case that certain types of research are inaccessible in terms of their format or language. This is supported by the Strengthening the Impact of Education Research policy survey data, according to which low accessibility of research in appropriate formats is the third most common barrier in education practice and policy. In addition, fewer than two in three systems synthesise and disseminate research findings through user-friendly tools to practitioners and policy makers (OECD, 2022[5]).
At the same time, policy makers’ perceptions of which types of research are relevant can and perhaps sometimes need to be shaped. This can be done by supporting ministries and other education stakeholders to develop their understanding of how different types of research are relevant to their work. In practice, stakeholders disagree on fundamental topics such as the purpose of education research, the issues it should deal with and the methods it should employ. This is reflected in the tendency for policy makers to consider applied research to be more relevant: effectiveness research was considered highly relevant by more than half of the surveyed systems, whereas foundational and exploratory research were seen as highly relevant by 40% and 25% of the systems, respectively (Figure 2.4). While it is intuitive to think of applied research as more relevant for policy makers or practitioners, there is no clear binary distinction, with much of education research falling on a spectrum between basic and applied research. Arguably, many forms of research can potentially address practical concerns in education and provide a space for critical reflection and holistic understanding of practice and policy.
Design and development research develops solutions to achieve a goal related to education or learning; it draws on existing theory or evidence to develop interventions.
Early-stage or exploratory research examines relationships among important constructs in education and learning to establish logical connections and inform the design of future interventions.
Effectiveness research examines the effectiveness of an intervention under typical circumstances.
Efficacy research allows testing a strategy or intervention under “ideal” circumstances (e.g. with a higher level of support than one would find in normal circumstances).
Foundational research is research intended to contribute to fundamental knowledge that may improve education outcomes; it includes studies that test, develop or refine theories of teaching or learning.
Scale-up research examines the effectiveness of an intervention in a wide range of populations and contexts.
Secondary research consists of syntheses or reviews of existing research on specific topics.
Source: Adapted from Institute of Education Sciences and the National Science Foundation (2013[31]), Common Guidelines for Education Research and Development, www.nsf.gov/pubs/2013/nsf13126/nsf13126.pdf.
A lack of time to access research remains one of the key barriers to engaging with research in policy and practice (OECD, 2022[5]). Coupled with the ever-expanding body of research, this generates demand for evidence synthesis.
Evidence synthesis can be defined as “a rigorous approach to cumulate evidence” (Polanin, Maynard and Dell, 2017, p. 172[32]). In particular, a key benefit is the possibility of drawing evidence from a comprehensive body of literature instead of relying on a single study which can be misleading (Gough, Maidment and Sharples, 2018[33]). Box 2.3 lists different types of evidence synthesis.
Systematic review is a type of evidence synthesis which employs rigorous methods to outline “what is known and how is it known and what more do we need to know” (Gough, Maidment and Sharples, 2018, p. 66[33]).
A thematic narrative review assesses the state of knowledge on a given question through a theoretical lens (Rother, 2007[34]).
Rapid review is an accelerated evidence synthesis approach intended to meet the timely needs of decision makers (Kelly, Moher and Clifford, 2016[35]).
Meta-analysis is a statistical technique for combining findings from a set of studies that address common research hypotheses (Denson and Seltzer, 2010[36]).
Syntheses of reviews integrate the results of multiple existing reviews of qualitative, quantitative or mixed-methods empirical studies. Such syntheses may include systematic reviews, thematic narrative reviews and meta‑analyses (British Educational Research Association, 2023[37]).
Evidence synthesis offers numerous benefits. First, it enables educators and policy makers to understand which programmes or practices have been proven to work in other contexts (Slavin, 2019[38]) and can help guide funding decisions (Gough, Oliver and Thomas, 2017[39]). Second, it can shed light on the cause of disagreement on a given question. For instance, one reason why disagreement exists regarding the impact of reducing class size on student learning may be its heterogenous effects, namely the benefits being mostly limited to disadvantaged pupils in primary schools (EEF, 2023[40]). The health sector is an example of a sector that has been highly active in creating a robust knowledge base through evidence synthesis (Box 2.4).
The health sector produces 26 times more syntheses than the education sector (Education.org, 2021[8]). In many countries, this volume gap can be partly explained by the higher funding of health research in absolute terms and as a percentage of total research and development (R&D) funding (OECD, 2019[13]). Nevertheless, the R&D spending gap in these sectors is significantly narrower than the volume gap in research syntheses. This indicates that some key synthesis work may be missing for certain topics and areas of education research. For instance, evidence of what does not work is often lacking (Education.org, 2021[8]).
Evidence syntheses in education have sometimes been found to be of questionable quality: they are often incomplete, sporadic, outdated and lacking in actionable guidance (Education.org, 2021[8]). Meta-analyses in particular often fail to produce “credible and generalisable meta-analytic findings that can be transformed to educational practices” (Ahn, Ames and Myers, 2012, p. 436[41]). In addition, one of the main reasons for the low reproducibility and replicability of systematic reviews is the apparent lack of transparency in reporting data, analytical procedures and methods (Page et al., 2021[42]). The same holds for syntheses of reviews (Polanin, Maynard and Dell, 2017[32]).
Knowledge intermediaries have done important work to address these concerns about the quality and accessibility of evidence syntheses in education (Torres and Steponavičius, 2022[43]) (Box 2.5). Meanwhile, specialised departments and units within Ministries of Education are commissioning external research and identifying significant knowledge gaps, as well as producing research syntheses and green papers.
A growing number of knowledge intermediaries are contributing to evidence synthesis in education. This includes the 3ie Development Evidence Portal, an expansive repository of rigorous evidence of “what works” in international development interventions, including in education. The Evidence for Policy and Practice Information Centre at University College London conducts systematic reviews in sectors such as health and education and works closely with diverse stakeholders to understand their needs.
The best-known evidence synthesis initiative for education practice is the Education Endowment Foundation’s (EEF) online Teaching and Learning Toolkit, which synthesises and routinely updates evidence about school-based interventions.
Increasingly, knowledge intermediaries are also active beyond Anglo-Saxon countries. In two such examples described below, intermediaries have integrated the EEF toolkit into their work.
SUMMA (Laboratorio de Investigación e Innovación en Educación para América Latina y el Caribe) is a knowledge intermediary active in the Latin America and Caribbean region. SUMMA hosts the Effective Education Practices Platform – an online toolkit that synthesises global and regional high-quality evidence and research on school-level education interventions. The platform provides synthesised information of different interventions, including a description of the intervention or strategy; the impact that can be expected based on evidence of effectiveness; contextualisation of evidence detailing what the research findings are for the region; how secure the evidence is in terms of the quality, quantity and consistency of available international evidence; an estimation of costs needed to implement a given strategy; and factors to consider before implementing a specific strategy or intervention. The platform is available in Spanish, Portuguese and English.
eBASE is a knowledge intermediary based in Cameroon whose work has focused on the Lake Chad basin (including countries such as Cameroon, Chad, Niger and Nigeria). eBASE has developed a teaching and learning toolkit that is available in French and English for decision makers, teachers and learners. The toolkit integrates 27 teaching strands that present synthesised evidence on each theme.
Source: eBASE (2023[44]), “Teaching and Learning Toolkit”, https://ebaselearning.org/teaching-learning-toolkit; SUMMA (2023[45]), “Effective Education Practices Platform”, https://practicas.summaedu.org/.
Improving the quality and quantity of evidence syntheses and their accessibility is not a panacea for educational issues. Dissemination alone is often ineffective at enhancing research impact (Oliver et al., 2022[46]). High-quality and accessible evidence syntheses should instead be seen as a core building block of the evidence system around which other initiatives aimed at fostering a culture of research engagement can be developed.
A common complaint among educationalists is that research is not directly relevant to policy makers and practitioners, which discourages their engagement with research (OECD, 2022[5]).
Traditional models of research dissemination have been shown to perform poorly when it comes to connecting research evidence with practitioners or policy makers (DuMont, 2019[47]). Research on knowledge brokerage has called for more attention to the “social side” of research engagement; in other words, to better understand the role of relationships in knowledge mobilisation processes (see also Chapter 3) (DuMont, 2019[47]). This has led to calls for collaborative research design, and in particular co‑production – a form of stakeholder involvement in the production of research, following specific principles – to be more highly valued.
Advocates of collaborative research note that involving stakeholders in the research process can strengthen the quality, relevance and availability of research to inform policy or practice (Gough et al., 2011[48]; Boaz, 2021[49]). For example, they argue that engaging practitioners in setting the research agenda can help to ensure more actionable findings and more efficient data collection (Meyer et al., 2022[50]). Extending collaboration to more stages of the research production process can also foster a better mutual understanding between policy makers, practitioners and researchers of their respective work. It can enhance public trust in research – and empower communities that have been traditionally marginalised (Meyer et al., 2022[50]; Davies, Powell and Nutley, 2015[51]; Boswell, Smith and Davies, 2022[23]; OECD, 2022[5]).
At the same time, proposals for research where the explicit goal is to increase relevance to policy and practice, such as collaborative research, have been met with scepticism in some arenas. Critics have raised concerns that the resulting research methods risk introducing more political ideology and bias into education research [Van Damme in Bangs et al. (2022[52])]. These criticisms are predominantly concerned with the risk that adopting such approaches erodes the methodological rigour of education and other social science research, and consider this too high a cost despite the potential benefits of increasing research engagement. Others are more optimistic regarding the transformative potential of collaborative research, but have also advocated a cautious approach: the ability to negotiate ethical-political dimensions of research methods should be a key competency when conducting this type of research (West and Schill, 2022[53]; Oliver, Kothari and Mays, 2019[54]).
The Strengthening the Impact of Education Research policy survey asked about ministries’ perception of different stakeholders’ involvement in seven stages of research production and about incentives for such involvement. Academic researchers were considered to receive the most intrinsic and extrinsic incentives and were reported as the most involved in all stages of research production (Figure 2.5). For both policy makers and practitioners, systems reported on average more than two intrinsic incentives for participation, such as informal recognition from peers and a sense of participation in national debate. However, almost none of the systems reported extrinsic incentives such as research production being part of the job description and performance evaluation criteria, explicit time allocation, or a salary supplement for these groups. Practitioners were perceived to be the group the least involved in research production. Policy makers were reported to be somewhat involved, on average, in research production despite the apparent lack of extrinsic incentives (although their participation is heavily concentrated on the early stage of formulating research questions; see Figure 2.6).
Some countries have a stronger tradition of collaborative research. In Sweden, an increase in research collaborations with practitioners emerged as key steps were taken to embed an evidence- and experience‑based approach to education. In particular, the government’s revision of the Education Act in 2010 made it compulsory for education in Sweden to be based on rigorous research and scientific knowledge and subsequently spurred collaboration with researchers and practitioners (Box 2.6).
Sweden ratified a new Education Act in 2010 which replaced the previous act of 1985. The 2010 act states that education in Sweden must be based on scientific knowledge and proven experience. The Swedish National Agency for Education considers scientific knowledge to be based on “theoretical rooting, elaboration and development, as well as an empirical basis” (Swedish National Agency for Education, 2014, p. 11[55]), and defines “proven experience” as teacher knowledge that is tested between peers and documented.
Since the ratification of the 2010 act, there has been an emphasis on practice-near school research in Sweden, which is research that focuses on practitioners’ needs and has practice improvement as a central purpose. Researchers have suggested that the wording of the Education Act has fuelled interest in this type of research. The Swedish National Agency for Education has also played an important role in endorsing the implementation of the Education Act and the growth of practice-near school research, which has seen collaboration between researchers and practitioners increase (see, for example, the Education, Learning, Research initiative below). Other developments to promote evidence-informed policy and practice also followed the introduction of the Education Act, such as the founding of the Swedish Institute for Educational Research in 2015.
The Education, Learning, Research initiative is a Swedish research-practice partnership that strengthens the scientific basis for teacher training. The pilot project tested collaboration models between different stakeholders on practice-based research, bringing together universities, municipalities and schools.
Sources: Magnusson and Malmström (2022[56]), “Practice-near school research in Sweden”, https://doi.org/10.1080/20004508.2022.2028440; Swedish National Agency for Education (2014[55]), Research for Classrooms: Scientific Knowledge and Proven Experience in Practice, http://effect.tka.hu/documents/Research_for_Classrooms_benne_collaborative_learning.pdf.
Despite optimism about stakeholder engagement in research production, it is important to note that the evidence that collaborative research increases engagement with research evidence or has a positive effect on student outcomes is mixed, with studies varying widely in their conclusions (Coburn and Penuel, 2016[57]; Boaz et al., 2018[58]). Studies have reported on the effectiveness and impact of collaborative research, including in particular research-practice partnerships (RPPs) as one such form. RPPs focus on problems of practice (challenges that practitioners face) and employ intentional strategies to foster partnerships, including carefully designed rules to structure interactions between researchers and practitioners (Coburn and Penuel, 2016[57]).
On teachers’ engagement, research shows positive outcomes but also challenges:
Participation in an RPP may lead to organisational change that enables greater access to research (Coburn and Penuel, 2016[57]).
Certain conditions in RPPs can be conducive to the development of mindsets and behaviours among teachers associated with evidence-based decision making, while the trust and communication that is built as part of RPPs can bolster research use (Wentworth, Mazzeo and Connolly, 2017[59]).
The most frequently reported challenge is that teachers lack time and external support to engage with research (Cooper, Klinger and McAdie, 2017[60]; Bell et al., 2010[61]).
On student outcomes, evidence is also mixed:
The evidence that collaborative research improves student outcomes could be stronger and more conclusive (Gorard, See and Siddiqui, 2020[4]).
RPPs have been shown to produce reasonable evidence of promising impacts on student learning (Coburn and Penuel, 2016[57]). However, it remains unclear whether their impact on student learning is due to the nature of the partnerships themselves, and whether results would have been different outside the context of an RPP.
Other studies have found little evidence of the impact of RPPs on student outcomes (Blazar and Kraft, 2019[62]; Cannata, Redding and Nguyen, 2019[63]).
Clearly, more research is needed to ascertain the benefits of collaborative research for research engagement and student outcomes. In particular, two important research gaps remain. First, many collaborative research studies have not been evaluated. The case of RPPs illustrates this: evidence of their effectiveness remains limited to individual case studies, and RPPs are themselves quite diverse (Desimone, Wolford and Hill, 2016[64]; Welsh, 2021[65]). However, it is worth noting that partnership work such as that done through RPPs and other collaborative research aims to fundamentally change the culture of research engagement, and this impact may only become visible over time (Welsh, 2021[65]). Second, studies on collaborative research, and RPPs in particular, are concentrated in North American contexts such as the United States (Coburn and Penuel, 2016[57]) and Canada (Cooper, Shewchuk and MacGregor, 2020[66]), although research is emerging in Germany, Norway and Sweden (Hartmann and Decristan, 2018[67]; Sjölund et al., 2022[68]; Fjørtoft and Sandvik, 2021[69]; OECD, 2022[5]). Finally, as we noted earlier, whether collaborative research poses other risks, such as affecting studies’ methodological rigour, and whether these risks are acceptable, is still being debated. Ascertaining the impact of such research approaches on research engagement and other outcomes forms a relevant part of this debate.
A further challenge is that genuine partnerships in research production often remain more of an aspiration than a reality. Figure 2.6 shows that policy makers are more widely involved in research production than teachers. In most respondent systems, they are involved in formulating research questions and communicating and evaluating results. Teachers’ involvement is concentrated on formulating research questions and data collection (although each in fewer than half of the systems). When teachers are only involved in data collection (Quebec [Canada], Costa Rica), this may indicate a passive role whereby they simply fill in surveys or respond to interviews. A solely passive role, however, does not bring much benefit to schools [Nagy in Bangs et al. (2022[52])]. A few countries such as Latvia, Norway and Spain appear to have a stronger collaborative research culture that involves policy makers and practitioners more extensively.
Researchers, too, often appear unprepared for genuine co-production (Boaz, 2021[49]). Some of the key challenges include power relations, incentives for actors to be involved and institutional support for academics in their engagement.
First, genuine collaboration requires managing power asymmetry between stakeholders, which often arises from differences in professional backgrounds and lived experiences (Meyer et al., 2022[70]). Often researchers hold the power position in seemingly collaborative approaches, with their points of view and knowledge dominating the process. Too often co-production is seen as an add-on or a stage of research rather than an entirely different epistemological approach to doing research (Boaz, 2021[49]). What starts out as co-production can revert to a traditional research design characterised by extractive relationships between stakeholders and researchers. For example, teachers’ involvement may be limited to data generation and mere consultation, particularly when researchers are subject to time pressures to complete a project and publish their results in a journal. The process of developing a collaborative research agenda can also be contentious, as the goals and priorities of partners may vary. In general, the challenge of running a partnership in a way that shares power equally, where academic researchers have previously considered this “their turf”, has been underestimated (Prøitz and Rye, 2023, p. 3[71]). To address the issue of unequal power dynamics, studies on collaborative work have recommended employing intentional strategies to guide partnerships, including rules for a structured interaction (Hartmann and Decristan, 2018[67]; Jones et al., 2016[72]).
Second, academic researchers are often incentivised to focus on research that is appealing to journals, while practitioners are inclined towards the application of knowledge in specific settings, and research approaches that minimise classroom disruption (Meyer et al., 2022[50]).
Third, in terms of institutional support, academics still lack support and training on how to work together with practitioners or policy makers. While such training programmes and policy engagement centres have been springing up within universities in recent years, these remain underexamined in the literature and their contribution to evidence engagement in policy is still poorly conceptualised (Durrant and MacKillop, 2022[20]). The same is often true for practitioners and policy makers (see Chapter 3).
In sum, involving stakeholders in research production is considered by many to be a promising way of increasing research relevance and engagement. However, important questions remain to be explored. This chapter highlighted that collaborative research approaches have also been contested, with critics expressing concerns about methodological risks. Practically, it is also unlikely to be feasible for every teacher in every school to collaborate with researchers constantly or even regularly. How could promising initiatives be scaled – and what do we mean by scaling in the context of collaborative research? Systems, schools, universities and intermediary organisations have been experimenting with different models of collaborative research. These include RPPs, research school networks (EEF, 2023[73]) and teacher researchers (Halász, 2022[74]), among others (see also Chapter 8). However, we still know little about which models are effective and under what conditions. How can they increase evidence use among teachers and policy makers? What is their impact on teachers’ and policy makers’ beliefs, competences and practice?
This chapter presented a number of mechanisms, practices and processes that policy organisations and other actors use to co-ordinate, steer or influence the production of education research. The analysis thus focused on the co-ordination of producing research that is deemed relevant to policy and practice; yet, as we noted, this is not the totality of research production. Indeed, while an important question that arises is how much of research production should be geared towards producing knowledge that is relevant or useful for policy and practice, this question goes beyond the scope of this chapter. While the presented mechanisms have mostly not been evaluated, they serve as examples of policy levers and other organisational practices that are employed to strengthen the knowledge base and nurture the culture of research engagement in education.
This chapter has addressed three main questions: 1) the role of co-ordination mechanisms in building a robust knowledge base and supporting research engagement; 2) the different types of education research that stakeholders need and ways to improve access; and 3) the main considerations when exploring the potential of collaborative research processes for a better research engagement.
Three key messages emerge from the analysis of these questions.
First, our overall understanding of which co-ordination mechanisms work in different contexts, and how well they work to build a robust knowledge base, could be improved. The existence of a mechanism alone says little about how well it works, or the quality of its design. More research and evaluation are needed to understand how policy mechanisms can help to address key issues of research production and engagement, such as the accessibility of relevant research. Second, system-level incentives are needed to support research engagement, including additional funding for knowledge mobilisation activities, and eventually also incentives to support diverse actors’ involvement in research production. Addressing this requires substantial, system-level cultural change if systems are to move towards a culture of research engagement. Third, funding is a key co-ordination mechanism in different systems. Ensuring stable funding for education research production is key and can include exploring options such as base funding for research organisations to enable significant research programmes and grow and retain research capabilities.
Stakeholders need better access to education research, particularly to certain types of research. Accessibility, availability of research and funding are also intricately interlinked. A more fine-grained analysis is needed to make sense of the low accessibility of education research. The relatively low level of funding dedicated to education research suggests that in some cases there may simply be a shortage of research in certain areas. This, in turn, calls for changes in the collection and reporting of data on education research and development spending, which is largely lacking at the moment.
In light of increasingly complex informational environments, evidence synthesis can help co-ordinate research efforts and help policy makers and practitioners access relevant research. Yet, there is considerable scope for high-quality evidence synthesis to be done more systematically.
Finally, based on their specific needs and context, systems need to consider how much to invest in their capacity both to produce locally relevant research and to conduct scale-up and effectiveness research as a way of translating findings to their context. While drawing on findings from different contexts implies cost savings, it is not straightforward to generalise findings and apply these to a different context. Conversely, addressing all knowledge needs with research produced within a given system may not be feasible due to limited capacity, especially in small education systems.
Collaborative research shows promise when considered purely from the perspective of increasing practitioners’ and policy makers’ engagement with research. However, the evidence on the impact of collaborative research is inconclusive due to the high variety of approaches and contexts in which they are implemented. To better understand the potential benefits – and indeed potential risks or drawbacks – of drawing diverse stakeholders into education research production processes, it is essential to prioritise evaluating and piloting of collaborative research.
Finally, it is at best impractical for all teachers and schools to be in regular collaboration with researchers. How, then, can promising initiatives be scaled? And what does scaling look like in the context of collaborative research? The question of scalability of collaborative research initiatives will need to be resolved.
[41] Ahn, S., A. Ames and N. Myers (2012), “A review of meta-analyses in education: Methodological strengths and weaknesses”, Review of Educational Research, Vol. 82/4, pp. 436-476, https://doi.org/10.3102/0034654312458162.
[52] Bangs, J. et al. (2022), “Perspectives on education research”, in Révai, N. (ed.), Who Cares about Using Education Research in Policy and Practice?: Strengthening Research Engagement, OECD Publishing, Paris, https://doi.org/10.1787/e83abf22-en.
[61] Bell, M. et al. (2010), Report of Professional Practitioner Use of Research Review: Practitioner Engagement in and/or With Research, CUREE, GTCE, LSIS & NTRP, Coventry, http://www.curee.co.uk/node/2303.
[62] Blazar, D. and M. Kraft (2019), “Balancing rigor, replication, and relevance: A case for multiple-cohort, longitudinal experiments”, AERA Open, Vol. 5/3, https://doi.org/10.1177/2332858419876252.
[49] Boaz, A. (2021), “Lost in co-production: To enable true collaboration we need to nurture different academic identities”, LSE Impact Blog, https://blogs.lse.ac.uk/impactofsocialsciences/2021/06/25/lost-in-co-production-to-enable-true-collaboration-we-need-to-nurture-different-academic-identities.
[58] Boaz, A. et al. (2018), “How to engage stakeholders in research: Design principles to support improvement”, Health Research Policy and Systems, Vol. 16/1, https://doi.org/10.1186/s12961-018-0337-6.
[19] Boswell, C. and K. Smith (2017), “Rethinking policy ‘impact’: Four models of research-policy relations”, Palgrave Communications, Vol. 3/1, pp. 1-10, https://doi.org/10.1057/s41599-017-0042-z.
[23] Boswell, C., K. Smith and C. Davies (2022), Promoting Ethical and Effective Policy Engagement in the Higher Education Sector, The Royal Society of Edinburgh, Edinburgh, https://rse.org.uk/.
[37] British Educational Research Association (2023), Author guidelines, https://bera-journals.onlinelibrary.wiley.com/hub/journal/20496613/forauthors.html (accessed on 20 January 2023).
[9] Burkhardt, H. and A. Schoenfeld (2021), “Not just “implementation”: The synergy of research and practice in an engineering research approach to educational design and development”, ZDM – Mathematics Education, Vol. 53/5, pp. 991-1005, https://doi.org/10.1007/s11858-020-01208-z.
[3] Burkhardt, H. and A. Schoenfeld (2003), “Improving educational research: Toward a more useful, more influential, and better-funded enterprise”, Educational Researcher, Vol. 32/9, pp. 3-14, https://doi.org/10.3102/0013189X032009003.
[63] Cannata, M., C. Redding and T. Nguyen (2019), “Building student ownership and responsibility: Examining student outcomes from a research-practice partnership”, Journal of Research on Educational Effectiveness, Vol. 12/3, pp. 333-362, https://doi.org/10.1080/19345747.2019.1615157.
[57] Coburn, C. and W. Penuel (2016), “Research-practice partnerships in education”, Educational Researcher, Vol. 45/1, pp. 48-54, https://doi.org/10.3102/0013189X16631750.
[60] Cooper, A., D. Klinger and P. McAdie (2017), “What do teachers need? An exploration of evidence-informed practice for classroom assessment in Ontario”, Educational Research, Vol. 59/2, pp. 190-208, https://doi.org/10.1080/00131881.2017.1310392.
[66] Cooper, A., S. Shewchuk and S. MacGregor (2020), “A developmental evaluation of research-practice-partnerships and their impacts”, International Journal of Education Policy and Leadership, Vol. 16/9, https://doi.org/10.22230/ijepl.2020v16n9a967.
[51] Davies, H., A. Powell and S. Nutley (2015), “Mobilising knowledge to improve UK health care: Learning from other countries and other sectors – A multimethod mapping study”, Health Services and Delivery Research, Vol. 3/27, pp. 1-190, https://doi.org/10.3310/hsdr03270.
[36] Denson, N. and M. Seltzer (2010), “Meta-analysis in higher education: An illustrative example using hierarchical linear modeling”, Research in Higher Education, Vol. 52/3, pp. 215-244, https://doi.org/10.1007/s11162-010-9196-x.
[64] Desimone, L., T. Wolford and K. Hill (2016), “Research-practice: A practical conceptual framework”, AERA Open, Vol. 2/4, https://doi.org/10.1177/2332858416679599.
[47] DuMont, K. (2019), “Reframing evidence-based policy to align with the evidence”, The Digest, Issue 4, William T. Grant Foundation, https://wtgrantfoundation.org/digest/reframing-evidence-based-policy-to-align-with-the-evidence.
[20] Durrant, H. and E. MacKillop (2022), “University policy engagement bodies in the UK and the variable meanings of and approaches to impact”, Research Evaluation, Vol. 31/3, pp. 372-384, https://doi.org/10.1093/RESEVAL/RVAC015.
[44] eBASE (2023), Teaching and Learning Toolkit, https://ebaselearning.org/teaching-learning-toolkit (accessed on 24 January 2023).
[8] Education.org (2021), Calling for an Education Knowledge Bridge: A White Paper to Advance Evidence Use in Education, Education.org, https://whitepaper.education.org/download/white_paper.pdf.
[73] EEF (2023), Research Schools Network, https://educationendowmentfoundation.org.uk/support-for-schools/research-schools-network (accessed on 24 January 2023).
[40] EEF (2023), Teaching and Learning Toolkit, https://educationendowmentfoundation.org.uk/education-evidence/teaching-learning-toolkit (accessed on 24 January 2023).
[11] Eurostat (2020), Health care expenditure by financing scheme (database), https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Healthcare_expenditure_statistics#.
[69] Fjørtoft, H. and L. Sandvik (2021), “Leveraging situated strategies in research-practice partnerships: Participatory dialogue in a Norwegian school”, Studies in Educational Evaluation, Vol. 70, p. 101063, https://doi.org/10.1016/j.stueduc.2021.101063.
[4] Gorard, S., B. See and N. Siddiqui (2020), “What is the evidence on the best way to get evidence into use in education?”, Review of Education, Vol. 8/2, pp. 570-610, https://doi.org/10.1002/REV3.3200.
[33] Gough, D., C. Maidment and J. Sharples (2018), UK What Works Centres: Aims, Methods and Contexts, University College London, https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3731.
[39] Gough, D., S. Oliver and J. Thomas (2017), An Introduction to Systematic Reviews, Sage.
[48] Gough, D. et al. (2011), Evidence Informed Policymaking in Education in Europe: EIPEE Final Project Report, EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, London.
[18] Gutiérrez, K. and W. Penuel (2014), “Relevance to practice as a criterion for rigor”, Educational Researcher, Vol. 43/1, pp. 19-23, http://www.jstor.org.ezp.lib.cam.ac.uk/stable/24571243.
[74] Halász, G. (2022), “Communication, collaboration and co-production in research: Challenges and benefits”, in Who Cares about Using Education Research in Policy and Practice?: Strengthening Research Engagement, OECD Publishing, Paris, https://doi.org/10.1787/2d9d7988-en.
[67] Hartmann, U. and J. Decristan (2018), “Brokering activities and learning mechanisms at the boundary of educational research and school practice”, Teaching and Teacher Education, Vol. 74, pp. 114-124, https://doi.org/10.1016/J.TATE.2018.04.016.
[31] Institute of Education Sciences and the National Science Foundation (2013), Common Guidelines for Education Research and Development, https://www.nsf.gov/pubs/2013/nsf13126/nsf13126.pdf.
[72] Jones, M. et al. (2016), “Successful university-school partnerships: An interpretive framework to inform partnership practice”, Teaching and Teacher Education, Vol. 60, pp. 108-120, https://doi.org/10.1016/j.tate.2016.08.006.
[1] Kaestle, C. (1993), “The awful reputation of education research”, Educational Researcher, Vol. 22/1, p. 23, https://doi.org/10.2307/1177303.
[35] Kelly, S., D. Moher and T. Clifford (2016), “Quality of conduct and reporting in rapid reviews: An exploration of compliance with PRISMA and AMSTAR guidelines”, Systematic Reviews, Vol. 5/79, https://doi.org/10.1186/s13643-016-0258-9.
[24] Lupia, A. and J. Aldrich (2015), “How political science can better communicate its value: 12 recommendations from the APSA Task Force”, Political Science & Politics, Vol. 48/S1, pp. 1-19, https://doi.org/10.1017/S1049096515000335.
[56] Magnusson, P. and M. Malmström (2022), “Practice-near school research in Sweden: Tendencies and teachers’ roles”, Education Inquiry, pp. 1-22, https://doi.org/10.1080/20004508.2022.2028440.
[14] McKnight, L. and A. Morgan (2019), “A broken paradigm? What education needs to learn from evidence-based medicine”, Journal of Education Policy, Vol. 35/5, pp. 648-664, https://doi.org/10.1080/02680939.2019.1578902.
[50] Meyer, J. et al. (2022), “Whose agenda is it? Navigating the politics of setting the research agenda in education research-practice partnerships”, Educational Policy, Vol. 37/1, pp. 122-146, https://doi.org/10.1177/08959048221131567.
[70] Meyer, J. et al. (2022), “Whose agenda is it? Navigating the politics of setting the research agenda in education research-practice partnerships”, Educational Policy, Vol. 37/1, pp. 122-146, https://doi.org/10.1177/08959048221131567.
[17] NZCER (2022), “Is educational research in Aotearoa in good shape?”, an NZCER occasional paper, New Zealand Council for Educational Research, https://doi.org/10.18296/rep.0023.
[16] NZCER (2022), Te Ara Paerangi Future Pathways Green Paper, New Zealand Council for Educational Research, https://www.mbie.govt.nz/dmsdocument/20739-nz-council-for-educational-research-te-ara-paerangi-future-pathways-green-paper-submission-pdf.
[12] OECD (2022), Education at a Glance 2022: OECD Indicators, OECD Publishing, Paris, https://doi.org/10.1787/3197152b-en.
[5] OECD (2022), Who Cares about Using Education Research in Policy and Practice? Strengthening Research Engagement, Educational Research and Innovation, OECD Publishing, Paris, https://doi.org/10.1787/d7ff793d-en.
[26] OECD (2021), “Challenges and new demands on the academic research workforce”, in OECD Science, Technology and Innovation Outlook 2021: Times of Crisis and Opportunity, OECD Publishing, Paris, https://doi.org/10.1787/72f6f879-en.
[25] OECD (2021), “Reducing the precarity of academic research careers”, OECD Science, Technology and Industry Policy Papers, No. 113, OECD Publishing, Paris, https://doi.org/10.1787/0f8bd468-en.
[13] OECD (2019), “Research and development statistics: Gross domestic expenditure on R-D by sector of performance and socio-economic objective (Edition 2018)”, OECD Science, Technology and R&D Statistics (database), https://doi.org/10.1787/5993e7f1-en.
[7] Oliver, K. and A. Boaz (2019), “Transforming evidence for policy and practice: Creating space for new conversations”, Palgrave Communications, Vol. 5/60, https://doi.org/10.1057/s41599-019-0266-1.
[46] Oliver, K. et al. (2022), “What works to promote research-policy engagement?”, Evidence & Policy, Vol. 18/4, pp. 691-713, https://doi.org/10.1332/174426421X16420918447616.
[54] Oliver, K., A. Kothari and N. Mays (2019), “The dark side of coproduction: do the costs outweigh the benefits for health research?”, Health Research Policy and Systems, Vol. 17/1, https://doi.org/10.1186/s12961-019-0432-3.
[42] Page, M. et al. (2021), “The REPRISE project: Protocol for an evaluation of REProducibility and Replicability In Syntheses of Evidence”, Systematic Reviews, Vol. 10, pp. 1-13, https://doi.org/10.1186/s13643-021-01670-0.
[28] Piwowar, H. et al. (2018), “The state of OA: A large-scale analysis of the prevalence and impact of Open Access articles”, PeerJ, Vol. 6, https://doi.org/10.7717/peerj.4375.
[32] Polanin, J., B. Maynard and N. Dell (2017), “Overviews in education research: A systematic review and analysis”, Review of Educational Research, Vol. 87/1, pp. 172-203, https://doi.org/10.3102/0034654316631117.
[71] Prøitz, T., P. Aasen and W. Wermke (eds.) (2023), “Actor roles in research-practice relationships: Equality in policy-practice nexuses”, From Education Policy to Education Practice: Unpacking the Nexus, Springer Nature (forthcoming).
[29] Roehrig, A. et al. (2020), “Changing the default to support open access to education research”, Educational Researcher, Vol. 47/7, pp. 465-473.
[34] Rother, E. (2007), “Systematic literature review X narrative review”, Acta Paulista de Enfermagem, Vol. 20/2, https://doi.org/10.1590/S0103-21002007000200001.
[30] SCImago (2023), SCImago Journal & Country Rank, website, http://www.scimagojr.com (accessed on 14 April 2023).
[68] Sjölund, S. et al. (2022), “Using research to inform practice through research‐practice partnerships: A systematic literature review”, Review of Education, Vol. 10/1, https://doi.org/10.1002/rev3.3337.
[38] Slavin, R. (2019), “How evidence-based reform will transform research and practice in education”, Educational Psychologist, Vol. 55/1, pp. 21-31, https://doi.org/10.1080/00461520.2019.1611432.
[22] Smith, K. and E. Stewart (2017), “We need to talk about impact: Why social policy academics need to engage with the UK’s research impact agenda”, Journal of Social Policy, Vol. 46/1, pp. 109-127, https://doi.org/10.1017/S0047279416000283.
[27] Spaapen, J. and L. van Drooge (2011), “Introducing ‘productive interactions’ in social impact assessment”, Research Evaluation, Vol. 20/3, pp. 211-218, https://doi.org/10.3152/095820211X12941371876742.
[45] SUMMA (2023), Effective Education Practices Platform, https://practicas.summaedu.org/en/what-is-it-platform/what-is-it-main-objectives/ (accessed on 14 April 2023).
[55] Swedish National Agency for Education (2014), Research for Classrooms: Scientific Knowledge and Proven Experience in Practice, Swedish National Agency for Education, http://effect.tka.hu/documents/Research_for_Classrooms_benne_collaborative_learning.pdf.
[43] Torres, J. and M. Steponavičius (2022), “More than just a go-between: The role of intermediaries in knowledge mobilisation”, OECD Education Working Papers, No. 285, OECD Publishing, Paris, https://doi.org/10.1787/aa29cfd3-en.
[21] Upton, S., P. Vallance and J. Goddard (2014), “From outcomes to process: Evidence for a new approach to research impact assessment”, Research Evaluation, Vol. 23/4, pp. 352-365, https://doi.org/10.1093/reseval/rvu021.
[6] van Damme, D. (2022), The Power of Proofs: (Much) Beyond RCTs, Centre for Curriculum Redesign, https://curriculumredesign.org/wp-content/uploads/The-Power-of-Proofs-CCR.pdf.
[10] Vilalta, J. and N. Comas (2021), Research in the Service of an Evidence-informed Education Policy, Fundació Bofill, Barcelona, https://fundaciobofill.cat/uploads/docs/x/y/f/zc7-e8s-researchserviceevidenceinformed_280721.pdf.
[15] Waterreus, I. and L. Sipma (2023), Kennisingrediënten voor onderwijsbeleid [Knowledge Ingredients for Education Policy], Netherlands Initiative for Education Research, Den Haag, https://www.nro.nl/sites/nro/files/media-files/Rapport%20Kennisingredie%CC%88nten%20voor%20onderwijsbeleid.pdf.
[65] Welsh, R. (2021), “Assessing the quality of education research through its relevance to practice: An integrative review of research-practice partnerships”, Review of Research in Education, Vol. 45/1, pp. 170-194, https://doi.org/10.3102/0091732x20985082.
[59] Wentworth, L., C. Mazzeo and F. Connolly (2017), “Research practice partnerships: A strategy for promoting evidence-based decision-making in education”, Educational Research, Vol. 59/2, pp. 241-255, https://doi.org/10.1080/00131881.2017.1314108.
[53] West, S. and C. Schill (2022), “Negotiating the ethical-political dimensions of research methods: a key competency in mixed methods, inter- and transdisciplinary, and co-production research”, Humanities and Social Sciences Communications, Vol. 9/1, https://doi.org/10.1057/s41599-022-01297-z.
[2] Winch, C. (2001), “Accountability and relevance in educational research”, Journal of Philosophy of Education, Vol. 35/3, pp. 443-459, https://doi.org/10.1111/1467-9752.00237.