Philippe Larrue
Dominique Guellec
Frédéric Sgard
Public research is expected to fulfil a widening set of objectives, from scientific excellence and economic relevance to contributing to a variety of societal challenges (inclusiveness, gender diversity, sustainability, etc.). Policy makers in ministries and funding agencies have broadened their portfolio of funding instruments and design variants to respond to this demand. However, little is known about the potential effects of the various funding instruments on research outcomes. This chapter aims to provide policy makers with analytical tools to help them decide what types of funding mechanisms and instruments should finance what types of research, and to what effect. It examines recent changes in the modes of allocation of research funding that have blurred the formerly well-established boundaries between competitive and non-competitive funding instruments. It then proposes a simple conceptual framework that presents the portfolio of research-funding instruments available to policy makers along multiple and continuous – rather than unique and binary – dimensions. The chapter then analyses the “purpose fit” of this growing set of funding instruments – i.e. their ability to fulfil different policy objectives – to help policy makers design and use them in a way that best corresponds to the expected impacts of public research. The chapter concludes with a forward-looking discussion that draws implications for future analytical work and considers how emerging long-term trends (e.g. digitalisation and societal challenges) might influence the volume and types of research funding.
What types of funding mechanisms and instruments should finance what type of research and for what effects? Despite progress in understanding the underlying dynamics, research funding is still the subject of lively discussions in academic and policy arenas.1
The various positions in these debates, often revolving around the two models of competitive and non-competitive funding, are entrenched in different conceptual views on how new knowledge is generated and used in the innovation process. They also reflect various communities’ vested interests since the responses given to this question influence the allocation of funds to different actors. Finally, they are strongly related to the national institutional set-ups in which the funding systems are embedded, adding a further layer of complexity to the debate.
These policy debates have become more intricate as the boundaries between the formerly two well-established modes of research funding – competitive and non-competitive – have become increasingly blurred and porous. On the one hand, competitive funding can be allocated to certain institutions – particularly centres of excellence – for a period of several years; on the other hand, institutional funding increasingly integrates performance-based components, introducing a degree of competition into these funding mechanisms.
Reflecting changes in the policy arena, an extensive academic and grey literature has progressively moved away from the usual dichotomy between competitive and non-competitive funding instruments, introducing more nuanced measurement and comparison of national funding patterns. Scholars and experts also scrutinise the operational/technical aspects of the different funding instruments (e.g. the components of the funding formula for institutional funding, and the criteria and selection modes for competitive funding). This body of work now offers a richer understanding of the funding landscape, more closely related to the reality experienced by policymakers.
However, little is yet known about the effects of funding instruments. What are the merits of the various instruments (and their multiple design variants) in achieving certain policy objectives, including supporting research excellence, steering research in certain directions or triggering breakthroughs? Although they do not provide systematic responses to this question, various country reviews, evaluations of schemes and programmes supporting research, and research works provide some useful insights on this matter. Together, they help shed light on the “purpose fit” of instruments, i.e. how certain instruments are more or less adapted to specific policy objectives. They also provide a significant – though scattered – evidence base on the various factors influencing the desired effects at the different stages of the funding process, from high-level strategic orientation to research implementation in higher education institutions (HEIs) and public research institutes (PRIs).
Connecting the technical (“how to fund?”) and political (“for what desired effects?”) aspects of research funding is essential, to help policy makers design and use funding instruments in a way that best corresponds to their objectives. This chapter builds on recent progress in the academic and empirical literature, analysing the policy objectives and desired effects underlying the different types of government research funding. The OECD has recently resumed work in this field (OECD, forthcoming a) and future OECD work on research funding is planned for the 2019-20 biennium.
The chapter takes stock of recent changes in the allocation modes of research funding. It examines the increasingly complex set of funding instruments designed to convey a widening set of policy objectives, and proposes a simple analytical framework of the mix of these funding instruments as a continuum. Regarding the purpose fit of funding instruments, the chapter pays particular attention to performance-based institutional funding instruments, which have undergone recent reforms in many countries and offer new policy levers to accommodate a wide set of policy objectives. It concludes with a forward-looking view, drawing implications for future analytical work and discussing how emerging long-term trends (e.g. digitalisation and societal challenges) might influence the volume and types of research funding.
Innovation, particularly at the knowledge frontier and in emerging sectors, depends heavily on scientific progress (OECD, 2015a). HEIs and PRIs – which in 2016 represented just under 18% (HEIs) and 11% (PRIs) of gross domestic expenditure on research and development (GERD) in OECD member countries, far below business (69%) – perform more than three-quarters of total basic research.
HEIs play a growing role in research and development (R&D), surpassing PRIs, whose importance has decreased in many countries. In addition to providing higher education, universities are strongly engaged in the production of longer-term and higher-risk scientific knowledge, and increasingly in applied research, knowledge transfer and innovation activities.
Despite considerable country differences, government sources finance the bulk of academic research activities: in 2015, public funds supported 67% of academic research by HEIs and 92% of research by PRIs (OECD, 2017a). Budgetary restrictions in the aftermath of the 2008 global financial crisis negatively affected R&D funding (Box 8.1). However, research will remain an important component of public budgets, as the level of knowledge embedded in products and services keeps increasing, and the number of global challenges calling for radically new technological and social innovation also keeps rising.
The share of government funding in the budget of PRIs has remained relatively stable since the 1980s. However, it has decreased steadily for HEIs, which have successfully sought third-party funding. A closer look at the more recent period (Figure 8.1, Panel A) reveals a significant increase in public research funding (as well as sharp increases in business R&D funding) immediately after the onset of the financial crisis, as many countries used research and innovation programmes in their stimulus packages (e.g. the 2010 Investments for the Future Programme [Programme d’investissements d’avenir] in France and the 2009 American Recovery and Reinvestment Act in the United States). However, this increase was short-lived: as early as 2010-11, increases in public R&D budgets slowed or reversed. Government spending on research in HEIs and PRIs slightly decreased, both in real terms and as a percentage of gross domestic product (GDP), as economic growth resumed without an increase in government funding of public research (Figure 8.1, Panel B).
This decrease cannot be attributed only to budgetary pressures: public funding for research also decreased as a share of total government expenditures. This is consistent with anecdotal evidence suggesting a certain “frustration”, owing to the absence of sufficient tangible innovation results from past funds allocated to research. In such a context, advocates of science, technology and innovation (STI) activities are less well positioned to defend their budgets when negotiating with finance ministries and representatives of other policy areas.
Research funding is allocated in very diverse ways, reflecting the institutional settings of national research systems. The earliest and simplest typology distinguishes between competitive and non-competitive funding mechanisms:
Competitive project funding encompasses the programmes or instruments of funding agencies, research councils or ministries that allocate resources for a research activity limited in scope, budget and time, based on formal contests or competitions, in which applicants apply for funding. Financial awards can be of variable size and length, and may be allocated to individuals, projects or centres (OECD, forthcoming a).
Non-competitive institutional funding includes institutional core or block funding, i.e. the general funding of research-performing institutions, without direct selection of R&D projects or programmes. It is generally allocated as a yearly government contribution to HEIs or PRIs (not to a specific sub-component or research group) to fund their day-to-day operations, such as staff salaries, infrastructure and maintenance related to education or research activities. While institutional funding was earmarked in the past for specific activities, it is now mostly allocated as a lump sum (block grant) that research institutions can spend as they see fit (OECD, 2015b; Jongbloed and Lepori, 2015).
Over the last two decades, changes in the way most governments allocate research funding have increasingly blurred the formerly well-established boundaries between the two major funding mechanisms. First, the gradual spread of new public management (NPM) thinking in many public administrations (including HEIs and PRIs in the 1980s and 1990s), and the growing pressure on budgets, led public authorities to increase the share of research funds distributed through competitive project funding (Hicks, 2010). NPM reforms not only increased project-based funding further; they also introduced performance-based variables and different conditions for institutional funding allocated to HEIs and PRIs (Lepori, Geuna and Mira, 2007). In some countries (e.g. Sweden) and institutions (e.g. PRIs in Norway), attempts have been made to include strategic components in institutional funding, in order to better align research activities with national priorities while preserving institutional autonomy. As a result of these changes, institutional funding (which still often retains a strong historical component) can no longer be considered non-competitive and non-oriented.
An even more recent trend has also challenged the previously binary typology of funding mechanisms. Governments increasingly use competitions to allocate multi-year funding to institutions (or part of them) through different types of research excellence initiatives (REIs). These initiatives aim to encourage outstanding research by allocating large-scale, long-term funding directly to designated research units; hence, they feature elements of both institutional and project funding. In 2014, over two-thirds of OECD countries were operating such schemes, mostly established within the past decade (OECD, 2014a). The 2017 edition of the European Commission EC-OECD science, technology and innovation policy (STIP) survey showed similar results: 31 countries (i.e. 61% of a total of 51 countries)2 reported 84 initiatives using these funding instruments (EC/OECD, 2017).
The evolution of the funding landscape has challenged the boundaries between competitive and non-competitive funding instruments, requiring “nuanced” conceptual frameworks. Several initiatives – mainly commissioned by the European Commission and the OECD since the early 2000s – have attempted to clarify the definitions of instruments in this moving landscape and reflect these observations in precise statistics (Box 8.2).
One of the first significant attempts at measuring national patterns of research funding was conducted by the PRIME European network (2004-08), with support from the European Commission (Lepori, Geuna and Mira, 2007). PRIME developed a conceptual framework and definitions, which were then applied to existing data on a subset of six European countries. Using these results as a stepping-stone, the OECD Working Party on Science and Technology Indicators (NESTI) made a first attempt in 2012 to collect data differentiating different modes of funding (van Steen, 2012). Building on this seminal work, EUROSTAT started to collect voluntary information from European countries on the share of project and institutional funding. The European Commission sponsored another research consortium, Public Funding of Research (PREF), which collected new data and obtained results that are “broadly consistent” with EUROSTAT (Jonkers and Zacharewicz, 2016).
These projects yielded the following main results:
The studies show a wide diversity of national configurations and evolution patterns of research-funding flows. One key indicator, the share of project funding of R&D in total domestic R&D, ranged from above two-thirds (in New Zealand, South Korea and Ireland) to less than one-third (in Austria, Switzerland and the Netherlands) in the NESTI study. EUROSTAT data suggest that the relative importance of project funding typically ranges between 25% and 50% in the European countries analysed, with some noticeable exceptions (e.g. above 65% in Ireland).
The studies show a general trend of increasing project funding from 1970 to 2000, in real terms and relative to GDP. EUROSTAT data show a relative stability since the mid-2000s (again with exceptions, e.g. a strong increase in Greece since 2010 and a decline in Iceland until 2014). Although most of these studies focus on HEIs, funding sources for PRIs have also moved towards higher competitive and contractual funding in most countries. Despite the growth of project funding, institutional funding is still the main instrument for financing public research.
Considerable progress has therefore been made, but the measurement agenda is still open. Ongoing work provides a more granular and less binary analysis than one based on the divide between institutional (or organisational) funding and project funding. To that end, PREF authors defined “mixed models” of the two. They have also recently developed synthetic indicators of the degree of competition and “proximity to performance”, rather than considering these notions as absolute features of funding instruments. For instance, the performance-based indicator for institutional funding ranges from 0 if funding is allocated historically to 1 if it depends entirely on past research outputs. Of the 14 European countries scrutinised in this research, three (Poland, Portugal and the United Kingdom) appear to have a strong orientation towards performance-based allocation of block funding, while eight – including Austria, Denmark, France, Germany, Italy, Sweden and Switzerland – have a lower dependence on performance (Reale, 2017). Despite increasingly introducing performance criteria in block-funding allocation formulae, most countries still award institutional funding mainly on a historical basis, with scaling parameters (often related to higher education activities, such as the number of students or teaching staff).
Taking a broader perspective that considers both institutional and competitive funding, Lepori, Reale and Spinello (2018) developed a synthetic indicator of the performance orientation of public research funding, tracking its evolution across reforms of the funding system. One key result is that although the wide variations in countries’ performance orientation stem from the relative shares of project funding, the recent significant increases in performance orientation in some countries (Finland, Norway, Poland, Portugal and Sweden) followed the introduction or reform of institutional funding instruments.
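To make the logic of such synthetic indicators concrete, the sketch below combines the two elements just discussed – the share of project funding and the performance-based share of institutional funding – into a single 0-1 score. The additive weighting rule is an illustrative assumption for exposition only; it does not reproduce the actual PREF or Lepori, Reale and Spinello (2018) methodology.

```python
# Illustrative synthetic indicator of performance orientation (0 = purely
# historical block funding, 1 = fully competitive/performance-based).
# The weighting rule is an expository assumption, not the PREF formula.

def performance_orientation(project_share: float,
                            perf_based_institutional_share: float) -> float:
    """Combine the share of project funding in total public research funding
    with the performance-based share of institutional funding (both in 0-1)."""
    institutional_share = 1.0 - project_share
    # Project funding counts as fully competitive; institutional funding
    # contributes in proportion to its performance-based component.
    return project_share + institutional_share * perf_based_institutional_share

# Example: 40% project funding, 20% of block funding tied to past outputs.
print(round(performance_orientation(0.40, 0.20), 2))  # 0.52
```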
No such systematic initiatives are known to have been undertaken outside of Europe recently. Despite some measurement issues (notably breaking down the block grant between research and education activities), general university funds (GUF),3 considered as “a proportion of university block grants” (OECD, 2017a), provide some insights about the level of institutional funding on a broad international scale, while not reflecting the many variants of this type of funding. This indicator confirms wide dispersion in national use of institutional funding, from 0% in the United States (at the federal level; institutional funding is allocated at the US state level) to above 50% of government support for civil R&D in several countries, such as Iceland, the Netherlands and Austria (Figure 8.2). It also confirms that in many countries, the decrease in institutional funding was particularly significant in the 1990s and has plateaued since then (e.g. in Japan).4
Considering these changes and looking more closely at the diverse variants of funding instruments requires reconsidering the dichotomy between non-competitive and competitive funding as a continuum (Dialogic and Empirica, 2014). Based on progress made over the last decade in understanding research funding, this chapter proposes a simple analytical framework to present the portfolio of research-funding instruments available to policy makers along multiple and continuous – rather than unique and binary – dimensions.
These dimensions, as well as the main parameters influencing the positioning of the different funding instruments along them, are discussed below:
Competition intensity: competition is more intense when the number of applicants is large relative to a given total available budget. Since funders themselves often have little margin to augment the overall budget dedicated to a given funding stream, the size of the targeted population is the main lever in their hands to manage competition intensity. Hence, the scope of the calls for proposals in project funding and the eligibility rules for institutional funding (e.g. targeting only research universities), together with factors affecting the selection rate and the concentration of the distributed funds, are the key determinants of competition intensity (a stylised calculation follows this list).
Granularity: the selection/allocation unit can be an entire institution, part of an institution (e.g. a faculty), or a project or programme of different sizes and scope. This has important implications in terms of the scope and flexibility of the allocation, its stability, the level of fragmentation of the funding, etc.
Level: competition can also involve different levels within a single organisation, depending on the elementary units of allocation and assessment. These two units may not coincide, e.g. in the case of institutional funding, where the assessment is performed at the level of departments or research groups, with funding allocated to the organisation as a whole. Depending on internal allocation rules, competition between institutions can translate internally into rivalry between and within parts of these institutions.
Type of assessment and selection criteria: competition can be based on a wide array of criteria, using different timeframes for assessment. Selection/allocation criteria range from publications and citations to third-party funding and expected social impact. Simplistically, a distinction can be made between input- and output-related performance criteria. These criteria can be considered within timeframes of different durations (number of years) and directions (ex ante and/or ex post).
Orientation/directionality: funding allocation can be open, or targeted towards priority areas or issues (e.g. scientific disciplines, economic or societal problems). The more granular and ex ante the allocation, the easier it is for policymakers to steer funding in selected directions.
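As a stylised illustration of the competition-intensity dimension referenced in the first bullet above, the sketch below derives the success rate implied by a call’s budget, the average grant size and the number of applicants; all figures are hypothetical.

```python
# Back-of-the-envelope competition intensity: the success rate implied by a
# call's budget, average grant size and applicant pool (hypothetical figures).

def success_rate(call_budget: float, avg_grant: float, applicants: int) -> float:
    fundable_projects = call_budget / avg_grant   # grants the budget can cover
    return min(1.0, fundable_projects / applicants)

# A EUR 50 million call with EUR 1 million average grants and 400 applicants
# funds one applicant in eight.
print(f"{success_rate(50e6, 1e6, 400):.1%}")  # 12.5%
```

Narrowing eligibility (fewer applicants) or reducing the average grant size both raise the success rate, which is why call scope and eligibility rules are such central levers for funders.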
Figure 8.3 schematises the mix of funding instruments as a continuum along three of the above dimensions. Although not all countries have implemented the full range of instruments, many of these instruments overlap or are layered on top of one another. For instance, performance-based funding is almost always provided on top of historical block funding, to allow some stability in funding allocation over time. Similarly, performance contracts are most often coupled with an (ex post) performance-based component. Therefore, the relative weights of the different funding components (e.g. the research performance-based component in Norway only affects 15% of the total block funding), and their possible synergistic effects, are an important variable when defining a national funding portfolio.
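To illustrate how such component weights operate, the sketch below splits a block grant into a historical base and a performance-based slice, using the 15% weight cited above for Norway. The allocation rule (scaling the performance slice by an institution’s score relative to the system mean) and the remaining figures are simplifying assumptions, not any country’s actual formula.

```python
# Hypothetical block-grant decomposition: historical base plus a
# performance-based component. Only the 15% weight echoes the Norwegian
# example in the text; the allocation rule itself is an assumption.

def block_grant(total: float, perf_weight: float,
                perf_score: float, mean_perf_score: float) -> dict:
    """Scale the performance-based slice by the institution's score
    relative to the system-wide mean."""
    historical = total * (1.0 - perf_weight)
    performance = total * perf_weight * (perf_score / mean_perf_score)
    return {"historical": round(historical, 1),
            "performance": round(performance, 1),
            "total": round(historical + performance, 1)}

# An institution scoring 10% above the mean on the performance indicators:
print(block_grant(total=100.0, perf_weight=0.15,
                  perf_score=1.1, mean_perf_score=1.0))
# {'historical': 85.0, 'performance': 16.5, 'total': 101.5}
```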
What are the different funding instruments, with their multiple design variants, “good at”? As previously shown (Box 8.2), considerable conceptual, data-collection and case-study work has generated important progress in characterising and measuring research-funding trends over the last two decades. However, knowledge and evidence on the effects of research-funding mechanisms is much scarcer. A key preliminary step in assessing the effects of funding instruments consists in analysing their purpose fit, i.e. determining what policy instruments fit what objectives. This also reconnects the knowledge gained on instruments with the challenges facing policymakers as they attempt to respond to the mounting societal expectations of public research, far beyond a sole focus on scientific excellence.
Each instrument conveys an ever-widening range of policy objectives as new social needs arise, with more programmes stating multiple goals. A recent OECD project identified the desired effects most frequently stated in a dedicated questionnaire covering 75 competitive funding programmes from 21 countries (OECD, forthcoming a). The study distinguished between two sets of “internal” and “external” desired effects (Figure 8.4). Although they were not covered in the study, a similar array of objectives would probably apply to institutional funding instruments, albeit in different proportions.
This trend toward more programmes stating multiple goals results in more complex policy-instrument designs to accommodate these various objectives (Jongbloed and Lepori, 2015). For instance, new “mixed” or “hybrid” funding instruments have been introduced, either by adding competition and performance requirements to formerly “fixed” instruments, or by adding more strategic and longer-term components to competitive schemes (e.g. REIs).
The increasingly complex design of instruments also offers many levers to make them more “amenable” to fulfilling different policy objectives. Table 8.1 describes how the design features of three main “families” of instruments can be fine-tuned to accommodate different policy objectives.
|  | Enhancing research excellence | Steering research towards specific priorities | Creating the conditions for research breakthroughs |
|---|---|---|---|
| Project-based funding |  |  |  |
| Institutional funding |  |  |  |
| Mixed models: research excellence initiatives |  |  |  |
The table focuses on the three most frequent and comprehensive types of desired effects: enhancing research excellence; steering research towards specific priorities; and creating the conditions for breakthroughs.
The section below briefly reviews the main elements of institutional funding, project-based funding and funding of REIs against these three types of desired effects.
Institutional funding focuses on maintaining a stable research infrastructure and underpinning longer-term “excellent” research. As “formal” selection is generally absent from this type of allocation, and academic institutions are entitled to use the funding as they see fit (serving the principle of academic freedom), it is generally not considered amenable to steering research towards specific national priorities. However, some of the initiatives presented in Box 8.3 show that institutional funding can create the appropriate conditions and incentives for researchers to engage in targeted research, provided that the necessary strategic capabilities are present at the top level of the beneficiary institutions. They illustrate three main ways to steer research activities through institutional funding: “top-slicing” block grants to target specific priorities; providing additional earmarked institutional funding (either through direct negotiation or competitive awards) for large multi-year projects aligned with national priorities; and using performance contracts to help research institutions build up their profile in fields of national interest. If these initiatives are designed appropriately, and specific conditions are in place to promote co-operation between institutions (as with the Swedish Strategic research areas [SFO] programme), they could also serve the objective of creating breakthrough research.
Project funding consists of allocating funds to groups or individuals to perform specific R&D activities, mostly based on a project proposal subjected to a competitive process. Project funding is considered a better policy tool to steer research, particularly with a view to producing higher-quality research and (to a lesser extent) research that is more relevant to socio-economic objectives (Hicks, 2010). By contrast, many studies have highlighted that an increasing reliance on competitive funding can result in shorter-term, lower-risk projects, rather than longer-term, higher-risk research, although the evidence for this is mixed.5 Moreover, the resource and time burdens of applying for and reviewing competitive grants can deter some of the best researchers from participating. Finally, project funding hinders the ability of researchers and institutions to engage in long-term planning, because future funding is uncertain. This is especially true for project-based funding with low success rates. Policymakers have experimented with a few alternatives, such as “lotteries” and “sandpits” (OECD, forthcoming a).
REIs provide the selected centres with relatively long-term resources, thereby allowing them in principle to carry out (as their name suggests) excellent research. REIs often include researchers and infrastructures from different institutions, hence promoting the interdisciplinary and co-operative context necessary for high-impact, high-risk “breakthrough” research (OECD, 2014a).
While the performance-based component of institutional funding has been widely documented, the strategic steering component remains understudied, primarily because it is used less frequently. However, mission-oriented research is attracting renewed interest in the academic and policy arenas. A few countries provide interesting examples of this trend (Mazzucato, 2018).
Norway maintains a dual-tier institutional funding system comprising a fixed amount and a performance-related amount, which is complemented by separate funding for relatively large multi-year projects. These “strategic institutional initiatives” (SIS) are negotiated between the institutes, the ministries and the research council, and their budget is added to the envelope of the block grant. SIS aim to develop long-term expertise in the institutes’ research fields that are deemed to be of high national interest, but are difficult to realise through competitive funding. SIS represented about 40% of the institutional funding of “environmental” research institutes and 30% of the “primary” research institutes’ institutional funding in 2016 (overall institutional funding itself represented about 15% of these two types of institutes’ total revenues) (OECD, 2017b).
Sweden launched the SFO programme to increase the share of institutional research funding in universities’ funding mix and strengthen co-operative university research in areas of national strategic relevance. SFO grants were allocated on a competitive basis for five years, based on proposals from university partnerships in priority areas. Once awarded, the selected universities could add the funds to their institutional funding and use them with total freedom in the priority areas, which the Swedish Government determined according to the proposals’ relevance to Swedish industry, their capacity to reach the highest international quality levels, and their potential to address important societal needs. The three selected areas were medicine and the life sciences, technology, and climate change (OECD, 2016).
In the Netherlands, TO2 Applied Research Institutes have seen a triple evolution in their block funding since the mid-2000s, with significant cuts in direct government funding, a greater share tied to performance, and stronger conditions for using the funding to better align research with the national priorities formalised as “top sectors”. This change has been implemented in multi-year performance contracts, connected to specific public-private partnerships in the priority areas (OECD, 2014b).
In Austria, performance agreements determine around 95% of the 22 universities’ block funding for research (compared to 7% in the Netherlands, 10% in Ireland and 100% in Finland). First implemented in 2007, the agreements define a concrete set of measures and services to be fulfilled over three years, based on development plans individually negotiated between each university and the Federal Ministry of Education, Science and Research (BMBWF). These development plans are informed by the National Development Plan for Higher Education, which is formulated by the BMBWF and sets national objectives for a period of six years. The University Act (2002) also sets priorities to be addressed in institutional plans (OECD, forthcoming b).
Among the different policy objectives, the issue of how the different funding instruments support breakthrough research is attracting growing attention, particularly in light of rising concerns about the seemingly decreasing productivity of research (Bloom et al., 2017). As previously mentioned, the research community has expressed concerns that competitive funding mechanisms could disadvantage risky, potentially transformative or transdisciplinary research proposals in favour of applied, incremental or disciplinary proposals. Indeed, reconciling the desire for more efficient and transparent research funding with the need to support more innovative (but also riskier) projects poses a real challenge.
Studies on this topic provide recommendations on how to design instruments to fund breakthrough research (e.g. Laudel and Gläser, 2014; Wang, Lee and Walsh, 2018). Some studies recommend tailoring funding mechanisms to the need for creativity in science, rather than simply adding criteria to existing project-funding schemes. Others claim that competitive funding can support breakthrough research, providing it is specifically adapted to this strategic objective (Heinze, 2008; Goldstein and Narayanamurti, 2018). The Japanese Government, for instance, announced that the number of selection panels in the main competitive instrument (the Grants-in-Aid for Scientific Research programme, “kakenhi”) will drop from close to 500 to around 375, to foster research originality and creativity (Hornyak, 2017). The increase in competitive funding has been blamed for a markedly increased concentration of basic-research funding in the hands of a small number of Japanese institutions; this loss of diversity is detrimental to novelty and alternative scientific ideas (Matsuo, 2018).
Considerable conceptual, data-collection and case-study work has generated important progress in characterising and measuring research-funding trends over the last two decades. The increasing diversity of design variants for funding instruments offers policymakers new levers to accommodate a widening set of policy needs. However, knowledge on the effects of research-funding mechanisms is far scarcer, notably owing to many methodological problems (Butler, 2010). This chapter has proposed a conceptual framework to represent the new research-funding landscape and analyse which policy instruments (and their design variants) can theoretically fulfil different policy objectives. However, this analysis of the “purpose fit” of funding instruments is still in its infancy and will be the subject of further work in the near future, to assess how policy makers can best fund research to realise their priorities.
Pushing this research agenda further will require going beyond an “instrument-by-instrument” analysis, to examine the instruments’ combined effects and interactions with the institutional environment:
Competitive and non-competitive funding interact in several ways, exhibiting both positive complementarities and tensions. For instance, a project grant generally covers only part of the costs of the research activities and requires matching funds, which are often found in the block funding for university research (frequently in the form of research staff time); a stylised cost breakdown follows this list. Implementing the project also requires services and equipment financed through past and present institutional funding. Typically, institutional funding provides money to build and maintain basic capacity (i.e. skills and the work environment) and finance day-to-day operations, whereas project funding supports more targeted research (Lepori, Geuna and Mira, 2017). However, this traditional model is becoming blurred, as rules (not least concerning overheads and eligible expenses) are changing and vary among countries. As a result, it is even more difficult to draw a clear-cut distinction between the respective contributions of longer-term institutional funding and competitive project funding to the steering of research.
The institutional environment is essential to understanding the funding landscape. Some important parameters to consider are the existence, size and scope of funding agencies, and their type of relationships with ministries; the existence of “umbrella organisations”, to which government can delegate some programming and funding roles (e.g. the National Centre for Scientific Research [CNRS] in France); and universities’ internal organisation (e.g. the internal funding-allocation mechanisms) and strategic management capabilities.
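As flagged in the first bullet above, the stylised breakdown below shows how a project grant that covers only part of full costs leaves a residual to be matched from institutional funding. The overhead rate and coverage share are hypothetical.

```python
# Stylised full-cost arithmetic for a project grant (hypothetical figures):
# the funder reimburses only part of full costs, and the institution matches
# the remainder from its block funding.

direct_costs = 800_000    # personnel, equipment, travel, etc.
overhead_rate = 0.25      # indirect costs as a share of direct costs
grant_coverage = 0.70     # share of full costs reimbursed by the funder

full_cost = direct_costs * (1 + overhead_rate)
grant = full_cost * grant_coverage
matched_from_block_funding = full_cost - grant

print(full_cost, grant, matched_from_block_funding)
# 1000000.0 700000.0 300000.0
```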
Research funding is a complex, staged and multifaceted issue, which calls for a systemic view in order to understand its dynamics and assess its effects. The “In my view” box below provides some guidelines to pursue a holistic analysis of research funding.
Erik Arnold, Chairman of Technopolis Group and Adjunct Professor in Research Policy, Royal Institute of Technology (KTH), Stockholm.
Funding research involves a range of actors, influences and policies, each with limited reach, which tend to be managed separately. If we look at the whole picture, it becomes clear that a range of policy levers exist to improve system performance (not all of which are accessible to all policy actors), and that a co-ordinated approach provides an opportunity to steer the whole system in a way that helps it develop, and supports the implementation of national research and innovation policy.
Fundamentally, funding instruments serve specific policy intentions and should be considered in the context of the overall system of rewards (and punishments) policy offers to research performers, such as universities.
Figure 8.5 provides a bird’s-eye view of that system. The central box focuses on research funding. Traditionally, education ministries provided universities with institutional funding in the form of block funding – lump sums they could use to produce teaching and research. Some countries provided a detailed budget to indicate the intended uses of the block fund, but the principle of university autonomy meant (and still means) that there was a distance between what the education ministry could decide, and what the universities would actually do. More recently, education ministries have started not only to distinguish between institutional funding for education and institutional funding for research, but also to base parts of this funding on performance, introducing an unprecedented element of inter-university competition for institutional funding. Performance-based research funding systems have received increasing policy attention in recent years as policymakers try to manage national research systems more effectively. These systems can be contentious (academics hate them, university research managers love them), and a growing literature studies the role of performance assessment in their operation.
Recognising the difficulty of assuring the quality of research by autonomous universities, education ministries also tend to fund research councils offering competitive “external” (i.e. non-block) funding on a project basis. This is normally “bottom-up” and investigator-initiated research lacking any predetermined societal relevance. This “excellence” funding is expected to assure quality, as well as increase the volume of research. However, with academics controlling the research councils and the committees prioritising projects, it is the academics – not the rest of society – who are firmly in charge of the nature and quality of research.
Backed by “sector” or “mission” ministries, innovation agencies and sector funders (e.g. covering health, transport and the environment) offer other funding incentives for the research system to address societal problems.
However, the direct operation of these incentives is far from the only policy influence on the development of the research system. The overall amount or growth of research funding is one positive factor (for example, Denmark’s dramatic surge in scientific performance in recent years builds on substantially increased funding). Internationalisation raises quality in lagging countries (international co-publications are more highly cited than single-author or national ones). University governance and management also have a big impact; it is widely believed that the competition involved in having a high share of external money in universities’ research income drives up quality. Finally, there is increasing faith in performance-based research funding systems, as well as significant disagreement about whether they should govern a high proportion of institutional funding for research (there is evidence that both high and low proportions affect researcher behaviour).
Statistically, it is very difficult to connect observed patterns in national performance to most of these policy levers. Multiple policies are at play. Their effects are hard to untangle; contextual factors, such as history and culture, are also important. Often, good performance seems to result from changes in one or more of the “levers” discussed above, rather than from the presence of particular ratios among funding streams. As with much else in innovation systems, policymakers need to adopt a systemic perspective of their specific national situation when analysing needs and using policy instruments. Ultimately, a single ministry cannot do this – a higher power, such as a research and innovation council or the government itself, needs to co-ordinate the different components of a research and innovation system.
Emerging or ongoing trends are already changing funding practices and landscapes; the future evolution of research funding is therefore uncertain. With the growing importance of innovation in all human activities, the pressure will grow for research to deliver workable solutions to real-world problems. A likely scenario is that research will continue to evolve as a demand-driven activity, favouring mechanisms in which research users – rather than researchers alone – increasingly shape the research agenda. Such an evolution could promote not only competitive mechanisms, but also different forms of institutional funding that steer research. It could also result in a multiplication of the expected objectives underlying any research activity, as shown in the growing list of project-evaluation criteria and the expanding formulae for performance-based institutional funding. This trend could jeopardise the ability of a given research project to excel in a specific dimension, e.g. scientific excellence, high-risk research or economic/social relevance. The modes of research support will most likely continue to evolve to deal with this issue, either by segmenting funding according to types of objectives or by creating new modes of “customised” project evaluation.
The growing recognition of the Sustainable Development Goals (SDGs) as challenges to be addressed in research and innovation is a salient trend (Chapter 4 on the SDGs). The literature has widely documented that research relevant to SDGs will need to be transformational, hence ambitious, interdisciplinary and performed with a mid- to long-term horizon. While this does not in principle imply project-based funding, the pressure for greater accountability and cost efficiency will clearly favour competitive-funding approaches. Designing new instruments and programmes (such as different forms of mission-oriented programmes) will be key to juggling the competing requirements of strategic steering, competitive allocation and risk-taking.
The articulation between instrument design and policy objectives is also changing as digitalisation transforms the research and innovation enterprise (as evidenced in this Outlook). Digitalisation is improving the ability of policymakers and funders to monitor research: more up-to-date information is available, and it can be analysed in more depth, facilitating evaluation (see Chapter 12 on digital science and innovation policy). Information useful for resource allocation could be accessed directly through data processing, reducing the need for costly competitions. At the same time, digitalisation can lower the cost of competitive funding (project-preparation work can be versioned and re-used, and panels can be organised online), which could enhance its appeal.
Needless to say, research to address the SDGs and/or reap the opportunities of digitalisation will require ever-increasing financial inputs, in the context of the rising costs of research and budget pressure in indebted states. Tensions over budgetary negotiations will undoubtedly grow between policy fields. Research – which is both an increasingly costly policy field and a key enabler of the transformational agenda – will be at the heart of these debates.
Bloom, N. et al. (2017), “Are Ideas Getting Harder to Find?”, NBER Working Paper, No. 23782, National Bureau of Economic Research, Cambridge, MA, http://www.nber.org/papers/w23782.
Butler, L. (2010), “Impacts of performance-based research funding systems: A review of the concerns and the evidence”, in Performance-Based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings, OECD Publishing, Paris, pp. 127-165, https://doi.org/10.1787/9789264094611-en.
Dialogic and Empirica (2014), “The effectiveness of national research funding systems”, Dialogic and Empirica, Utrecht/Bonn, https://www.dialogic.nl/wp-content/uploads/2016/12/2013.109-1422.pdf.
EC/OECD (2017), STIP Compass: International database on STIP policies (database), April 2018 version, https://stip.oecd.org.
Goldstein, A.P. and V. Narayanamurti (2018), “Simultaneous pursuit of discovery and invention in the US Department of Energy”, Research Policy, Vol. 47, pp. 1505-1512, Elsevier, Amsterdam, https://doi.org/10.1016/j.respol.2018.05.005.
Heinze, T. (2008), “How to sponsor ground-breaking research: A comparison of funding schemes”, Science and Public Policy, Vol. 35/5, pp. 302-318, Oxford Academic Press, Oxford, https://doi.org/10.3152/030234208X317151.
Hicks, D. (2010), “Overview of models of performance-based research funding systems”, in Performance-Based Funding of Public Research in Tertiary Education Institutions, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264094611-en.
Hornyak, T. (2017), “Japan shakes up research funding system”, Nature Index, 1 August 2017, Springer Nature, https://www.natureindex.com/news-blog/japan-shakes-up-research-funding-system.
Jongbloed, B. and B. Lepori (2015), “The Funding of Research in Higher Education: Mixed Models and Mixed Results”, in Huisman, J. et al., The Palgrave International Handbook of Higher Education Policy and Governance, pp. 439-462, Palgrave Macmillan, London.
Jonkers, K. and T. Zacharewicz (2016), Research Performance Based Funding Systems: a Comparative Assessment, European Commission, Publications Office of the European Union, Luxembourg, http://publications.jrc.ec.europa.eu/repository/bitstream/JRC101043/kj1a27837enn.pdf.
Laudel, G. and J. Gläser (2014), “Beyond breakthrough research: Epistemic properties of research and their consequences for research funding”, Research Policy, Vol. 43, pp. 1204-1216, Elsevier, Amsterdam, https://doi.org/10.1016/j.respol.2014.02.006.
Lepori, B., E. Reale and A. Orazio Spinello (2018), “Conceptualizing and measuring performance orientation of research funding systems”, Research Evaluation, Vol. 1/13, Oxford University Press, Oxford, https://doi.org/10.1093/reseval/rvy007.
Lepori, B., A. Geuna and A. Mira (2017), “Money matters, but why? Distribution of resources and scaling properties in the US and European higher education”, Presentation at Leiden University, June 2017.
Lepori, B. et al. (2007), “Comparing the evolution of national research policies: What patterns of change?”, Science and Public Policy, Vol. 34/6, pp. 372-388, Oxford University Press, Oxford, https://doi.org/10.3152/030234207X234578.
Mazzucato, M. (2018), Mission-Oriented Research & Innovation in the European Union – A problem-solving approach to fuel innovation-led growth, Directorate-General for Research and Innovation, European Commission, Publications Office of the European Union, Luxembourg, https://ec.europa.eu/info/sites/info/files/mazzucato_report_2018.pdf.
Matsuo, K. (2018), “The structure and issues in Japan’s STI funding”, Presentation at the Euroscience Open Forum (ESOF) Conference, Toulouse, 11 July 2018.
OECD (forthcoming a), Effective Operation of Competitive Research Funding Systems, OECD Publishing, Paris.
OECD (forthcoming b), OECD Reviews of Innovation Policy: Austria 2018, OECD Publishing, Paris.
OECD (2018a), Main Science and Technology Indicators (database), https://www.oecd.org/sti/msti.htm (accessed on 25 June 2018).
OECD (2018b), Research and Development Statistics (database), http://www.oecd.org/innovation/inno/researchanddevelopmentstatisticsrds.htm (accessed on 25 June 2018).
OECD (2017a), OECD Science, Technology and Industry Scoreboard 2017: The digital transformation, OECD Publishing, Paris, https://doi.org/10.1787/9789264268821-en.
OECD (2017b), OECD Reviews of Innovation Policy: Norway 2017, OECD Publishing, Paris. http://dx.doi.org/10.1787/9789264277960-en.
OECD (2016), OECD Reviews of Innovation Policy: Sweden 2016, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264249998-en.
OECD (2015a), The Innovation Imperative: Contributing to Productivity, Growth and Well-Being, OECD Publishing, Paris, https://doi.org/10.1787/9789264239814-en.
OECD (2015b), Frascati Manual 2015: Guidelines for Collecting and Reporting Data on Research and Experimental Development, The Measurement of Scientific, Technological and Innovation Activities, OECD Publishing, Paris, https://doi.org/10.1787/9789264239012-en.
OECD (2014a), Promoting Research Excellence: New Approaches to Funding, OECD Publishing, Paris, https://doi.org/10.1787/9789264207462-en.
OECD (2014b), OECD Reviews of Innovation Policy: Netherlands 2014, OECD Reviews of Innovation Policy, OECD Publishing, Paris, https://doi.org/10.1787/9789264213159-en.
OECD (2011), Public Research Institutions: Mapping Sector Trends, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264119505-en.
Reale, E. (2017), “Analysis of National Public Research Funding (PREF) – Final Report”, JRC Technical report, European Commission, Publications Office of the European Union, http://publications.jrc.ec.europa.eu/repository/bitstream/JRC107599/kj0117978enn.pdf.
van Steen, J. (2012), “Modes of Public Funding of Research and Development: Towards Internationally Comparable Indicators”, OECD Science, Technology and Industry Working Papers, No. 2012/04, OECD Publishing, Paris, http://dx.doi.org/10.1787/5k98ssns1gzs-en.
Wang J., Y.-N. Lee and J.P. Walsh (2018), “Funding model and creativity in science: Competitive versus block funding and status contingency effects”, Research Policy, Vol. 47, pp. 1070-1083, Elsevier, Amsterdam, https://doi.org/10.1016/j.respol.2018.03.014.
Zdravkovic, M. and B. Lepori (2018), “Mapping European Public Research Funding Studies: Selected results and some open questions”, Presentation at the EU-SPRI conference, 7 June, ESIEE, Marne-la-Vallée.
← 1. As shown, for instance, in the analysis of responses to the questions on the main public-research policy debates in the 2017 edition of the EC-OECD STIP survey, covering more than 50 countries (EC/OECD, 2017). See also Zdravkovic and Lepori (2018) for an analysis of the academic literature.
← 2. Including 21 of the 36 OECD member countries.
← 3. The OECD Frascati Manual defines GUF as the share of R&D funding from the general grants universities receive from the central government (federal) ministry of education or the corresponding provincial (state) or local (municipal) authorities to support their overall research/teaching activities (OECD, 2015b).
← 4. Part of the country differences relate to the relative weights of research activities performed in HEIs and PRIs.
← 5. Similar criticisms can also be directed at some forms of performance-based institutional funding.