Chapter 2. Evaluating publicly supported credit guarantee programmes for SMEs: Selected results from an OECD/EC survey

Financing SMEs and Entrepreneurs 2018

Abstract

This chapter surveys practices to assess costs and benefits of financial guarantee programmes for SMEs, based on the "OECD/EC Survey on Evaluating Publicly Supported Financial Guarantee Programmes for SMEs". It highlights the wide range of evaluation approaches across countries, and offers guidance on which specific characteristics of evaluation methodologies are considered particularly helpful.

Note: This chapter was prepared by Sebastian Schich, Principal Economist of the Directorate for Financial and Enterprise Affairs, Financial Markets, Insurance and Pensions Division (DAF/FIN). It is based on OECD (2017), "Evaluating Publicly Supported Credit Guarantee Programmes for SMEs", available at http://www.oecd.org/finance/financial-markets/Evaluating-Publicly-Supported-Credit-Guarantee-Programmes-for-SMEs.pdf, by Sebastian Schich, OECD, and Jessica Cariboni, Anna Naszodi and Sara Maccaferri, Scientific Officers at the European Commission Joint Research Centre, prepared for (and having benefitted from inputs and comments from members of) the OECD Committee on Financial Markets. Box 2.2 was drafted by Asad Ghani, British Business Bank.
Introduction and objectives
As underscored in the trends chapter of the Scoreboard (Chapter 1), credit guarantee schemes (CGS) remain the most widespread instrument to support SME access to finance, and 2016 guaranteed loan volumes remain well above pre-crisis levels in most countries. CGS typically provide a partial guarantee on a bank loan to an SME, which is triggered in the event of debtor default. Over the past decades, there has been a proliferation of such schemes worldwide; more recently, in response to the effects of the global financial and economic crisis, CGS were used as a counter-cyclical policy tool.
The need to evaluate the performance and cost-effectiveness of credit guarantee schemes has been widely recognised, including in the recently developed G20/OECD High-Level Principles on SME Financing (G20/OECD, 2015; see Box 2.1) and in the World Bank/FIRST Initiative principles for public credit guarantee arrangements (The World Bank and FIRST Initiative, 2015). Given the public expenditure that publicly supported CGS may entail, it is essential to provide accountability, and to monitor and evaluate the effects of CGS and the extent to which they meet their stated objectives. The expansion of CGS since the financial crisis has further increased the demand on the part of policy makers for monitoring and evaluating publicly supported financial support arrangements for SMEs.
Box 2.1. High-level principles related to SME financing and public support programmes for SMEs
The G20/OECD High-Level Principles on SME Financing developed in 2015 emphasise the need for public SME support programmes to be assessed in order to ensure their additionality and cost effectiveness. The principles recognise that CGS can play a positive role and help improve SME access to bank credit. They also suggest that there is a need to complement SME bank financing with a broad range of non-traditional financing instruments, although they do not explore to what extent there might be interactions between traditional and alternative sources of SME funding (i.e. complementarity or substitutability). The principles call for monitoring and regular evaluation of public programmes against their specific target objective(s), with the results feeding back into the policy-making process.
In addition, the World Bank, in collaboration with the FIRST Initiative, developed high-level principles for the design, implementation, and evaluation of public CGS for SMEs in 2015. The principles ask for systematic and regular evaluations to be conducted and published, in particular on the additionality and sustainability of CGS. In addition, the principles suggest the need to collect relevant data and information and to adopt a transparent methodology. No recommendation is made about the choice of any specific evaluation method.
Evaluating the performance of these different arrangements is not straightforward, but it is important, as the design of many CGSs has been revised and might need to be further adapted to meet the challenges of an evolving environment and enable CGS to achieve their objectives effectively. As the country profiles in this publication suggest (Chapter 3), some schemes provide more support than others to SMEs in terms of amounts guaranteed and other features. Other CGS have started to offer new types of guarantees, or have changed the distribution channels for their guarantees. An earlier survey highlighted that CGS differ in their objectives, ownership structures, legal and regulatory frameworks, operational characteristics, eligibility criteria and credit risk management (OECD, 2013a). Evaluations are essential to assess whether and to what extent these design changes have been effective in allowing the CGS to achieve their intended effects.
Despite the agreement among policy makers on the importance of performance assessment, it is not always clear whether national authorities undertake rigorous evaluations of CGS activities and use their findings to improve the functioning of the arrangements. There is no internationally agreed set of good practices on methods to evaluate the performance and cost-effectiveness of CGS. To find out more about national approaches in this regard, an "OECD/EC Survey on Evaluating Publicly Supported Financial Guarantee Programmes for SMEs" was circulated to OECD member and partner country authorities in 2016. The goal was to enable participants to learn what approaches others are using and what specific characteristics of evaluation methodologies are considered particularly helpful. The results are described in Schich et al. (2017), on which the present chapter draws.
This work adds to the body of OECD analysis on instruments to foster SME access to finance, including a 2012 study on the design elements of credit guarantee schemes and mutual guarantee societies (OECD, 2013b). The present work places a sharper focus on how public authorities assess the performance of publicly supported arrangements, so as to allow them to adjust design elements; it is based on responses received from member countries through a survey among public authorities.
The rationale for credit guarantee schemes
Public intervention in lending to SMEs aims to overcome the effects of a diagnosed market failure. SMEs in general, or certain segments of the SME population such as firms with high growth potential, are sometimes seen as receiving fewer funds than they request and could productively use. Such a situation might arise for both large and small firms, but problems of information asymmetry are likely to be more relevant in the case of small firms (Kraemer-Eis et al., 2017). This reflects the disproportion between the cost of assessing a small company’s creditworthiness on the one hand, and the potential financial return on the other. As the cost of conducting a credit assessment does not scale linearly with the size of the firm or its need for debt, small enterprises run a greater risk of being credit-constrained. Moreover, financial regulation that adds to these costs can have a disproportionate effect on the supply of credit to SMEs.
The potential market failure created by the existence of non-negligible fixed costs associated with SME lending can be further complicated by a lack of collateral, limited credit history and lack of expertise to produce financial statements on the part of SMEs. As a result, a difference may arise between the demand for finance and the supply of funds to SMEs, which is often considered a structural market failure and is generally referred to as the “financing gap for SMEs”. Of particular concern is the financing gap for those SMEs that have a high growth potential. These firms are typically risky and lack a track record and standardised information on past performance and growth prospects. A common reaction on the part of banks to such a situation is to charge higher interest rates and to demand collateral to cover losses in the event of default on the SME loan. SMEs, especially young ones, typically lack not only a track record but also collateral, and they can thus find themselves rationed out of the credit market.
Publicly supported credit guarantee schemes for lending to SMEs are one answer to this situation, as they perform part of the functions of collateral and limit the losses of the creditor in the case of SME insolvency. Such guarantees address the diagnosed market failure. It should be noted in this context that there are also other means of addressing market failure, such as improving transparency, creating and disseminating additional information (e.g. through databases that allow better assessment of growth prospects and risks), and providing education and training that help SMEs present their information in more standardised formats. Whatever the specific type of intervention, it is acknowledged that the various types of support need to be part of a coherent approach and that there needs to be an important element of coordination across different programmes.
Similar to any other type of policy intervention, publicly supported credit guarantee arrangements for SMEs can generate both benefits and costs. Thus, the economic and social benefits in terms of maintaining or creating employment, increasing investment, enhancing productivity, etc. need to be carefully compared with the costs (Schich et al., 2016). Costs include both operational costs and the opportunity costs of public funds. In addition, these schemes can have unintended consequences. For example, they might channel funds to companies that cannot make productive use of them; keep alive companies that would otherwise exit the market; reduce the incentives to explore and develop alternative financing sources; create deviations from the level playing field between companies that benefit from credit guarantees and those that do not; and create contingent fiscal liabilities.
Selected considerations regarding the evaluation of the performance of public intervention
The OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes was developed in 2007 and provides guidance to policy makers in this area (OECD, 2007). In recent years, considerable advances have been made in both the techniques and the availability of data for SME and entrepreneurship policy evaluation. Nonetheless, high-quality evaluations remain relatively rare in the field of SME and entrepreneurship policy. As regards developments in both policy inputs (e.g. amount of loan guarantees) and intermediate outcomes (e.g. number of firms having received loan guarantees), national systems to monitor CGS have improved considerably. This is due in part to international efforts, including the OECD Scoreboard and complementary efforts at the World Bank, the European Investment Bank Group (EIF and EIB) and the European Commission. Cross-border reviews of key characteristics of CGS, including their functioning, funding and some elements of performance, include OECD (2013b) and Chatzouz (2017).
Performance is typically assessed based on intermediate outcomes (e.g. new loans generated as a result of loan guarantees), as evaluation of policy outcomes (e.g. new employment created as a result of loan guarantees) continues to be challenging. The key challenge consists of robustly assessing the causal impact of policy interventions. Establishing causality between policy inputs and outcomes requires the construction of a valid counterfactual. In other words, what would have happened to SMEs benefitting from support if they had not received that support? One method that provides an answer to this question relies on an experiment in which the guarantee is granted to a sample of randomly selected SMEs. If selection is independent of SME characteristics, then the difference between the outcomes for the “treated group” (enterprises benefitting “by chance” from the support measure) and the “control group” (enterprises not benefitting from the programme) can, in principle, be attributed to the treatment, and not to pre-existing differences between the two groups.
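To illustrate the logic of such an experiment, the following minimal sketch simulates a setting in which guarantees are allocated purely at random; the number of firms, effect size and outcome distributions are assumptions chosen for illustration only, not survey data. Under random assignment, the simple difference in mean outcomes between treated and control firms recovers the programme effect.

```python
# Minimal sketch (illustrative assumptions only): with random assignment,
# the difference in mean outcomes between treated and control firms
# estimates the causal effect of the guarantee.
import numpy as np

rng = np.random.default_rng(0)
n_firms = 10_000
true_effect = 0.05                      # assumed effect on employment growth

# Baseline outcomes are independent of treatment status by construction.
baseline_growth = rng.normal(loc=0.02, scale=0.10, size=n_firms)

# Random assignment: every firm has the same chance of receiving the guarantee.
treated = rng.random(n_firms) < 0.5
outcome = baseline_growth + true_effect * treated

ate_estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated effect: {ate_estimate:.4f} (true effect: {true_effect})")
```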
In reality, guarantees are not assigned randomly. First, only those SMEs that apply for a loan guarantee have the possibility of obtaining it; second, applicants have to meet certain criteria to be selected for the guarantee programme. As better managed SMEs, with higher growth potential, are in general more likely to get the guarantee, any detected difference between the outcomes for the “treated group” and the “control group” cannot be attributed to the programme only, but should be attributed also in part to intrinsic differences between the groups. If these differences are not controlled for, then the estimated effect of the programme is subject to the so-called selection-into-treatment bias. In the absence of randomised selection to the programme (which, however, would be an “ideal” setup from the programme evaluator’s point of view), analysing the counterfactual requires more sophisticated statistical methods than the simple comparison of the outcomes in the two groups.
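The selection-into-treatment bias described above can be made concrete with a small simulation. In the sketch below, which uses assumed values rather than actual programme data, better-managed firms are more likely to obtain a guarantee, so the raw difference in outcomes between beneficiaries and non-beneficiaries overstates the effect of the programme.

```python
# Illustrative sketch of selection-into-treatment bias (assumed values only).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_effect = 0.02

management_quality = rng.normal(0, 1, n)     # unobserved firm quality
# Selection: higher-quality firms are more likely to apply for and obtain a guarantee.
treated = rng.random(n) < 1 / (1 + np.exp(-management_quality))
# Outcomes depend on firm quality as well as on the guarantee itself.
growth = 0.03 + 0.02 * management_quality + true_effect * treated + rng.normal(0, 0.05, n)

# The naive comparison attributes pre-existing quality differences to the programme.
naive_estimate = growth[treated].mean() - growth[~treated].mean()
print(f"Naive estimate: {naive_estimate:.4f} vs true effect: {true_effect}")
```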
Perhaps even more importantly, comparing pre-intervention and post-intervention levels of a target variable (such as employment, turnover, or a measure of gender inequality or regional income inequality) for the group of SMEs receiving guaranteed loans does not provide information about the value added of the programme, as a change in performance can be driven either by the policy intervention or by other factors. Without a proper counterfactual, evaluation studies that exploit data covering treated firms only can test whether the performance of the SMEs improved after receiving the guaranteed loans, but not whether that improvement is due to the policy intervention.
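The following sketch, again based on assumed values rather than actual programme data, contrasts a simple before/after comparison for beneficiaries with a difference-in-differences estimate that uses a comparable control group: the former attributes an economy-wide shock to the programme, while the latter nets it out.

```python
# Illustrative sketch: pre/post comparison vs. difference-in-differences (assumed values).
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
true_effect = 0.03      # assumed programme effect
common_shock = 0.04     # assumed economy-wide change affecting all firms

pre_treated = rng.normal(0.02, 0.05, n)
pre_control = rng.normal(0.02, 0.05, n)
post_treated = pre_treated + common_shock + true_effect + rng.normal(0, 0.02, n)
post_control = pre_control + common_shock + rng.normal(0, 0.02, n)

naive = post_treated.mean() - pre_treated.mean()            # programme effect + shock
did = (post_treated.mean() - pre_treated.mean()) - (
    post_control.mean() - pre_control.mean())               # shock is netted out

print(f"Naive pre/post estimate:            {naive:.4f}")
print(f"Difference-in-differences estimate: {did:.4f} (true effect: {true_effect})")
```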
OECD/EC Survey on Evaluating Publicly Supported Financial Guarantee Programmes for SMEs
The OECD/EC Survey describes practices adopted to assess costs and benefits of financial guarantee programmes for SMEs, based on responses from public authorities. The goal of the survey was to enable authorities to learn what approaches others are using and what specific characteristics of evaluation methodologies are considered particularly helpful. To assess whether credit guarantee programmes achieve their objectives effectively, periodic evaluations are important, and they are essential for policy makers seeking to improve the design elements of these programmes.
Coverage of the survey
A questionnaire was circulated to collect information on how OECD, EU member and partner countries evaluate the performance of their domestic CGS. Altogether, 33 responses were received from 24 countries. Responses were invited from countries with or without a CGS, although Iceland was the only country without a CGS to respond to the questionnaire. The remaining 32 responses, of which 31 were completed questionnaires, came from the 23 countries with a CGS (Table 2.1).
Table 2.1. Responses received to the OECD/EC survey
| Country name | Name of credit guarantee arrangement |
|---|---|
| Austria | Austrian Wirtschaftsservice (AWS) |
| Belgium | Participatie Maatschappij Vlaanderen NV (PMV NV) |
| Canada | Canada Small Business Financing Program (CSBFP) |
| Canada | Export Guarantee Program (EGP) |
| Chile | Corporación de Fomento de la Producción de Chile (CORFO), Banco Estado |
| Czech Republic | Czech-Moravian Guarantee and Development Bank |
| Estonia | KredEx Credit Insurance (KredEx) |
| Finland | Finnvera |
| France | Bpifrance |
| Germany | German Guarantee Banks |
| Greece | Entrepreneurship Fund - Guarantee Fund (ETEAN) |
| Greece | Working Capital Program (ETEAN) |
| Greece | Raw Material Guarantee Program (ETEAN) |
| Greece | Tax and Insurance Guarantee Program (ETEAN) |
| Greece | Guarantee Program for Issuance of Letters of Guarantee (ETEAN) |
| Hungary | Garantiqa, Agrár-Vállalkozási Hitelgarancia Alapítvány (AVHGA) |
| Italy | Central Guarantee Fund (CGF) for SMEs |
| Italy | Confidi |
| Italy | Istituto di servizi per il mercato agricolo alimentare (ISMEA) |
| Japan | Credit Guarantee Corporation |
| Korea | Korea Credit Guarantee Fund (KODIT) |
| Lithuania | Investiciju ir verslo garantijos (INVEGA) |
| Mexico | Nacional Financiera (NAFISA) |
| Portugal | SNGM (Sistema Nacional de Garantia Mútua) - assessment commissioned by the CGS, henceforth ‘Portugal 1’ |
| Portugal | SNGM (Sistema Nacional de Garantia Mútua) - assessment commissioned and conducted by researchers, henceforth ‘Portugal 2’ |
| Romania | National Credit Guarantee Fund for SME (FNGCIMM S.A.-IFN) |
| Spain | Sociedades de Garantía Recíproca (SGR) |
| Switzerland | Gewerbeorientiertes Bürgschaftswesen |
| Turkey | Kredi Garanti Fonu |
| United Kingdom | Enterprise Finance Guarantee - assessment in 2009, henceforth ‘UK (2009)’ |
| United Kingdom | Enterprise Finance Guarantee - assessment in 2013, henceforth ‘UK (2013)’ |
| United States | Small Business Administration (SBA) |
Note: Multiple responses from individual countries were invited, where relevant. Altogether 32 responses were obtained from 23 countries. Iceland provided a response but is not listed in the table as no CGS exists in the country. The United States is listed in the table although it provided only general information and did not answer specific survey questions.
Selected lessons from the survey
Independent evaluations versus self-evaluations
Responses from national authorities to the OECD/EC survey regarding the overall outcome of the evaluation range between “positive” and “positive/mixed”. Table 2.2 links the overall outcomes of the evaluations covered by respondents with the identity of the entities undertaking them. Only five evaluations are self-assessments, and the majority of evaluations are performed by independent research institutions. The table shows that none of the evaluations identifies negative (or negative/mixed) effects. It also fails to show any clear and systematic link between the identity of the entity conducting the evaluation and the overall outcome. For example, the row for evaluations conducted by the CGS itself shows that self-evaluations result in either positive or positive/mixed outcomes. In this regard, the self-evaluations submitted to the OECD/EC survey do not differ from other types of evaluations; that said, literature reviews suggest that self-assessments tend to identify more favourable outcomes than other types of studies (e.g. Schich, Maccaferri and Cariboni, 2016; Venetoklis, 2000).
In any case, it is useful to “pre-emptively” consider employing practices that can help minimise any potential bias toward positive outcomes in self-assessments. The involvement of independent researchers in the evaluation can help to limit the existence of such bias. This practice has already been adopted by many respondents to this survey, and is also consistent with the commentaries of the explanatory notes to the World Bank/FIRST Initiative Principles.
Table 2.2. Outcome of the study and entity undertaking the evaluation
| Institution conducting the evaluation | Negative | Negative / mixed | Positive / mixed | Positive | Number of observations |
|---|---|---|---|---|---|
| Research institution/university | | | Belgium, UK (2013) | Chile, Finland, Germany, Japan, Portugal 1, Portugal 2, Switzerland, UK (2009) | 10 |
| Research institution/university with CGS | | | | Austria | 1 |
| Research institution/university with CGS and public authority | | | | France | 1 |
| Public authority | | | Estonia, Korea, Canada (CSBFP), Italy (CGF) | Italy (Confidi) | 5 |
| Public authority with CGS | | | | Turkey | 1 |
| CGS | | | Canada (EGP), Hungary, Romania | Lithuania, Mexico | 5 |
| TOTAL | 0 | 0 | 9 | 14 | 23 |
Note: Based on the responses to the OECD/EC survey. ‘Portugal 1’ and ‘Portugal 2’ refer to evaluations of SNGM (Sistema Nacional de Garantia Mútua) undertaken by two different evaluators; ‘UK (2009)’ and ‘UK (2013)’ refer to assessments of the Enterprise Finance Guarantee in 2009 and 2013, respectively.
Frequency of evaluations
Concerning assessment frequency, the survey results suggest that evaluations are often, but not always, undertaken regularly. In some cases, only one-off evaluations are performed and, in a few cases, no evaluations are available. According to the two sets of high-level principles, evaluations should be undertaken regularly (G20/OECD principles) or at least periodically (World Bank/FIRST Initiative principles). Thus, there is scope in several countries to increase the frequency of evaluations.
Objectives against which to conduct the evaluation
The G20/OECD High-Level Principles on SME Financing suggest that evaluations should be performed against “clearly defined, rigorous and measurable policy objectives” (Principle 11, “Monitor and evaluate public programmes to enhance SME finance”). When asked what specific weaknesses were targeted by the CGS, almost all respondents referred to a lack of sufficient collateral on the part of SMEs, suggesting that the guarantee substitutes for a diagnosed lack of collateral. A general lack of collateral was considered the specific weakness targeted by the CGS by 26 out of 32 respondents (Figure 2.2). Some respondents indicated that the lack of collateral was confined to specific firms or to firms in specific sectors, while others suggested that the CGS was meant to address the inadequacy of the type of collateral available.
Other shortcomings were also identified, although they seem to play a much less prominent role. Some of these shortcomings refer to social goals, the achievement of which tends to be more difficult to measure as part of an evaluation of CGS activities. Compared to economic variables that are more or less straightforward to estimate, the role of such social objectives seems to be quite limited overall.
Three concepts are often identified as criteria for the evaluation of CGS (OECD, 2013b): financial sustainability, financial additionality and economic additionality, although the dividing line between them is not always as clear-cut as the definitions below might suggest.
Financial sustainability refers to the ability of the programme to cover the costs of its operations and defaults.
Financial additionality is reflected in incremental credit flows to SME and/or improvements in terms and conditions. This concept relates to intermediate outcomes.
Economic additionality refers to economic effects, e.g. to the effects on variables such as employment, turnover, sales and probability of default, which might have been influenced causally by the credit guarantee. This concept relates to policy outcomes.
In terms of the objective of the evaluation, most respondents assess financial additionality and economic additionality, and far fewer assess financial sustainability. The size of the circles in Figure 2.3 is proportional to the number of respondents indicating the objectives against which the CGS activities are evaluated. The figure also shows that many evaluations consider economic additionality in combination with financial additionality; some also consider economic additionality in combination with financial sustainability. Compared to the (mostly academic) studies reviewed in the underlying report (Schich et al., 2017), respondents to the OECD/EC survey seem to place relatively more emphasis on the evaluation of economic additionality as opposed to financial additionality.
More than half of survey responses indicate that a counterfactual analysis is conducted as part of evaluation studies. Figure 2.3 identifies these responses with black, as opposed to empty, dots. Typically, a counterfactual is constructed in evaluations where economic additionality is assessed. In principle, counterfactual analysis can also be developed where the objective of the CGS evaluation is to assess financial sustainability or financial additionality. For instance, the Swiss CGS is evaluated only against the objective of financial additionality, but the evaluation is based on an analysis of the counterfactual.
Data collected for the evaluation
Survey responses confirm that no single database is sufficient to conduct a rigorous evaluation of the performance of CGS activities. Combinations of databases, e.g. administrative and commercial databases as well as those maintained by CGS, need to be used, and are being used. Ideally, the CGS should ensure that it collects and keeps relevant data pertaining to its own operations, to facilitate future evaluations (World Bank/FIRST Initiative, 2015). In practice, this is not always the case, as highlighted by OECD/EC survey responses and already confirmed in the literature review.
Firm-level data, as opposed to data at higher levels of aggregation, allow more rigorous evaluations, and their use has multiple advantages. First, firm-level data facilitate efforts to redesign existing programmes, which are essentially targeted at firms; they can also help identify which specific parts of programmes work and which do not, and which firms should or should not be targeted. Second, the programme’s impact is easier to detect using firm-level data, as analysis at a more aggregated level might fail to identify significant effects as a result of measurement problems. Third, conducting counterfactual analysis on firm-level data provides more reliable estimates, given the potentially larger number of observations available. Indeed, the assumption that the entities in the “treated” and “untreated” groups are comparable is more plausible when made at the level of the firm than for data at higher levels of aggregation, e.g. at the level of regions or countries.
Survey responses reveal shortcomings in data collection for control groups, however. For example, data on firms that are not beneficiaries of CGS programmes are rarely collected. It would be particularly useful for CGSs to collect information on unsuccessful applicants. Lacking such data, an alternative is to construct the control group using data on firms that have not benefitted from the programme, although this approach does not allow unsuccessful applicants to be distinguished from firms that never applied. It is important to differentiate between these two groups so that the redesign of the programme can take into account information on previously unsuccessful applicants. For instance, the size of a new programme could be decided as a function of the interest shown by unsuccessful applicants in a previous programme.
The recent evaluation of the UK Enterprise Finance Guarantee provides an example of an evaluation that constructed a counterfactual control group based on micro-level data. Statistical techniques are used to ensure that observed differences between the beneficiaries of the guarantee and the control group can be attributed to the impact of the guarantee (see Box 2.2).
Box 2.2. The UK Enterprise Finance Guarantee
In 2017, the United Kingdom completed an economic impact evaluation of the Enterprise Finance Guarantee (EFG) scheme. This evaluation builds on the previous assessments conducted in 2009 and 2013. The 2017 results show that the EFG scheme continues to create significant economic benefits for society. EFG-supported loans to SMEs across 2010/11 to 2012/13 generated GBP 415 million of economic benefits, compared to GBP 82 million of economic costs. Five-year societal benefit-to-cost ratios ranged from 7.2 (for the 2010/11 loan cohort) to 11.3 (for the 2012/13 loan cohort).
The cost-benefit analysis takes into account only costs and benefits that are additional. In the context of a loan guarantee programme such as the EFG scheme, additional benefits refer to the economic benefits of loans: i) extended to borrowers that would not otherwise have been able to take out loans, ii) that do not displace the economic benefits that other businesses would have generated in the absence of the scheme, and iii) adjusted for firm survival. Further, the estimates of benefits were derived from an econometric analysis of EFG participants and a counterfactual group of non-participants that are otherwise similar to EFG participants. As such, the estimated benefits can be attributed to EFG loans.
Baseline estimates of economic benefits were derived within a propensity score matching framework, whereby differences-in-differences in the economic outcomes of EFG beneficiaries were estimated relative to a matched sample of non-beneficiaries. The robustness of the estimates was tested with an econometric specification that controls for firm-level fixed effects and time-varying shocks.
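The following stylised sketch illustrates the general mechanics of such an approach (propensity scores estimated from observable firm characteristics, one-to-one nearest-neighbour matching, then a difference-in-differences comparison on the matched sample). It is not the evaluation code used for the EFG assessment: the data are simulated and all parameter values, variable names and model choices are assumptions made purely for illustration.

```python
# Stylised sketch of propensity score matching plus difference-in-differences
# (simulated data and assumed parameters; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 4_000
true_effect = 0.05

# Observable pre-treatment characteristics (assumed): firm age and size.
age = rng.uniform(1, 30, n)
size = rng.lognormal(mean=2.0, sigma=1.0, size=n)

# Selection into treatment depends on observables: younger, smaller firms
# are more likely to seek and obtain a guarantee.
propensity_true = 1 / (1 + np.exp(-(1.0 - 0.05 * age - 0.02 * size)))
treated = rng.random(n) < propensity_true

# Outcomes: growth before and after the programme period.
growth_pre = 0.10 - 0.002 * age + rng.normal(0, 0.05, n)
growth_post = growth_pre + 0.01 + true_effect * treated + rng.normal(0, 0.02, n)

# Step 1: estimate propensity scores from observable characteristics.
X = np.column_stack([age, size])
pscore = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated firm to the untreated firm with the closest score.
treated_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]
distances = np.abs(pscore[control_idx][None, :] - pscore[treated_idx][:, None])
matches = control_idx[distances.argmin(axis=1)]

# Step 3: difference-in-differences on the matched sample.
did = ((growth_post[treated_idx] - growth_pre[treated_idx]).mean()
       - (growth_post[matches] - growth_pre[matches]).mean())
print(f"Matched difference-in-differences estimate: {did:.4f} (true effect: {true_effect})")
```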
EFG beneficiaries demonstrated turnover and employment growth that was 7.3% per annum and 6.6% per annum faster than non-beneficiaries, respectively. Turnover and employment growth impacts were larger for relatively small and young firms, perhaps because they typically face financial constraints due to a combination of a lack of credit history and collateral shortages.
The central estimates of the impact of EFG loans on survival probability show that EFG beneficiaries had a 0.6% lower annualised survival probability than non-beneficiaries. This may reflect that, once provided with access to finance, some of the least productive EFG beneficiaries exit the market more rapidly. Interestingly, the survival probabilities of start-up EFG beneficiaries were 1.2% higher than those of non-beneficiaries, suggesting that access to finance through the EFG scheme was crucial when starting a business.
Financial additionality for the surveyed EFG beneficiaries was 63%. The level of financial additionality observed indicates that 37% of firms surveyed stated that they could have accessed external finance without the guarantee from the EFG scheme and that the loan size, interest rate and other terms and conditions would have been at least as competitive as a guaranteed loan under the EFG scheme.
Source: British Business Bank.
Using evaluation results for operational decisions
The final aim of any evaluation of a policy intervention is to provide policy makers with sound evidence on the effectiveness of the programme along its different dimensions. It should also support informed operational decisions on the design elements of the programme, potentially adjusting them as a function of the outcomes of the evaluation. The OECD/EC survey reveals that many, but not all, assessments are used for such operational decisions (15 out of 23).
Figure 2.4 combines the information collected from the responses concerning the operational changes resulting from the evaluation with the information on the frequency of evaluations and on the level of data considered. It suggests that evaluation is more likely to lead to changes in the operational decisions, and hence feed into policy making, when the evaluations are conducted regularly and when firm-level data are considered. Two responses indicate that operational decisions can be taken even in the absence of these two factors, however.
Conclusions
The need to evaluate the performance and cost-effectiveness of SME support arrangements has been widely recognised, including in the recently developed G20/OECD High-Level Principles on SME Financing (G20/OECD, 2015) and in the World Bank/FIRST Initiative principles for public credit guarantee arrangements (The World Bank and FIRST Initiative, 2015). Despite this agreement among policy makers, there is no internationally agreed set of good practices on methods to evaluate the performance and cost-effectiveness of CGSs. Thus, to find out more about national approaches in this regard, an "OECD/EC Survey on Evaluating Publicly Supported Financial Guarantee Programmes for SMEs" was conducted to enable participants to learn what approaches others are using and what specific characteristics of evaluation methodologies are considered particularly helpful.
The responses highlight the wide range of evaluation approaches across evaluated CGSs and across countries. Taking the survey results together with the findings of the academic literature and the recently developed high-level principles (G20/OECD and World Bank/FIRST Initiative), one conclusion is that evaluations of CGS activities should be undertaken regularly and should include the following key features:
A clear objective against which the added value of the programme is measured. Perhaps the most straightforward is financial additionality, which captures the added value of CGS activities in terms of increasing the flow of funds (or reducing their cost). In addition, the effect of these activities on the economy (e.g. changes in employment, investment, growth, etc.) could be considered. It is also important to assess whether the programme is financially sustainable, i.e. whether CGS activities are designed and managed in such a way that substantial financial losses (e.g. where premiums collected are not sufficient to cover claims) are avoided; a stylised arithmetic check of this condition is sketched after this list. A more ambitious evaluation would also verify whether the initially diagnosed market failure that the CGS is supposed to address still persists, as well as what the effect of alternative policy choices might be;
To ensure effectiveness, independent evaluation is preferable to self-evaluation. However, the effectiveness of self-evaluation can be supported by an appropriate governance framework. Collaborative efforts with independent research or other institutions can also be conducive to evaluation effectiveness;
Counterfactual analysis should be developed to understand what would have happened in the absence of the CGS. In this context, it is key to collect detailed data not only on firms benefiting from guarantees, but also on unsuccessful applicants. In addition, data need to be collected not only on the variables of key policy interest (e.g. employment, growth), but also on additional variables capturing pre-existing heterogeneity across firms in the treated group and in the control group.
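As flagged in the first point above, the financial sustainability criterion lends itself to a simple arithmetic check: do guarantee fees cover expected claims and operating costs? The sketch below illustrates such a check; all figures (portfolio size, coverage ratio, default and recovery rates, fee and cost levels) are assumptions chosen for illustration and do not refer to any particular scheme.

```python
# Stylised financial sustainability check for a guarantee scheme
# (all figures are illustrative assumptions, not survey data).

guaranteed_portfolio = 500_000_000   # outstanding volume of guaranteed loans
coverage_ratio = 0.70                # share of each loan covered by the guarantee
annual_fee_rate = 0.012              # guarantee fee charged on guaranteed amounts
default_rate = 0.03                  # expected annual default rate of borrowers
recovery_rate = 0.40                 # expected recoveries on defaulted exposures
operating_costs = 2_000_000          # annual administrative costs of the scheme

exposure = guaranteed_portfolio * coverage_ratio
fee_income = exposure * annual_fee_rate
expected_claims = exposure * default_rate * (1 - recovery_rate)
net_result = fee_income - expected_claims - operating_costs

print(f"Fee income:      {fee_income:,.0f}")
print(f"Expected claims: {expected_claims:,.0f}")
print(f"Operating costs: {operating_costs:,.0f}")
print(f"Net result:      {net_result:,.0f}")
print("Self-sustaining on these assumptions" if net_result >= 0
      else "Not self-sustaining on these assumptions")
```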
One of the key impediments to rigorous performance evaluations in practice is the lack of appropriate data. As noted above, data need to be collected not only on the variables of key policy interest, but also on variables capturing pre-existing heterogeneity across firms in the treated and control groups. Micro data (i.e. firm-level or contract-level data) are preferable to aggregated data, as they facilitate a more rigorous analysis and the results lend themselves more naturally to changes in programme design. Furthermore, existing databases should be made available for the purposes of performance evaluations. Typically, no single database alone is sufficient to construct a robust counterfactual, and different databases need to be combined, typically requiring datasets to be matched at the micro level. Such exercises are, however, difficult and time-consuming.